Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
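The q-method referenced in this abstract (Davenport's eigenvalue solution of Wahba's problem) is compact enough to sketch. The NumPy version below is an illustrative reconstruction, not the paper's code, and any input values used with it are hypothetical:

```python
import numpy as np

def q_method(b, r, w):
    """Davenport q-method: optimal quaternion (scalar first) for Wahba's problem.
    b: (n,3) measured body-frame unit vectors, r: (n,3) reference unit vectors, w: (n,) weights."""
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    z = sum(wi * np.cross(bi, ri) for wi, bi, ri in zip(w, b, r))
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = z
    K[1:, 0] = z
    K[1:, 1:] = B + B.T - sigma * np.eye(3)
    vals, vecs = np.linalg.eigh(K)      # eigenvalues in ascending order
    q = vecs[:, -1]                     # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)

def quat_to_dcm(q):
    """Attitude matrix A with b = A r for a scalar-first unit quaternion."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2+q0*q3),         2*(q1*q3-q0*q2)],
        [2*(q1*q2-q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3+q0*q1)],
        [2*(q1*q3+q0*q2),         2*(q2*q3-q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3]])
```

The optimal quaternion is simply the eigenvector of the 4x4 Davenport matrix K belonging to its largest eigenvalue, which is why no a priori attitude estimate is needed, matching the abstract's claim.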
Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement.
Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy.
On-orbit calibration for star sensors without a priori information.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang
2017-07-24
The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation, and control system. In this paper, a novel calibration method requiring no a priori information is proposed for on-orbit star sensors. First, a simplified back-propagation neural network is designed for focal length and principal point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, principal point, and distortion. The proposed method benefits from self-initialization: no attitude or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experiment results demonstrate that the calibration is easy to operate and achieves high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.
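The principle behind attitude-free focal length estimation can be illustrated with the attitude-independent inter-star angle of a pinhole camera: the angle between two catalog stars must match the angle between their imaged directions, whatever the attitude. The bisection sketch below is an assumption-laden stand-in for the paper's neural-network/UKF pipeline, and all numbers are hypothetical:

```python
import numpy as np

def star_vector(x, y, f, x0=0.0, y0=0.0):
    """Unit camera-frame vector of a star imaged at focal-plane point (x, y),
    pinhole model with focal length f and principal point (x0, y0)."""
    v = np.array([x - x0, y - y0, f])
    return v / np.linalg.norm(v)

def solve_focal_length(p1, p2, cos_theta, f_lo=1.0, f_hi=100.0, tol=1e-10):
    """Bisection on f so the imaged inter-star angle matches the catalog angle."""
    def g(f):
        return star_vector(*p1, f) @ star_vector(*p2, f) - cos_theta
    # for fixed centroids the inter-star cosine grows monotonically with f,
    # so a sign change on [f_lo, f_hi] brackets the solution
    for _ in range(200):
        f_mid = 0.5 * (f_lo + f_hi)
        if g(f_lo) * g(f_mid) <= 0:
            f_hi = f_mid
        else:
            f_lo = f_mid
        if f_hi - f_lo < tol:
            break
    return 0.5 * (f_lo + f_hi)
```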
Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm
NASA Astrophysics Data System (ADS)
Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi
2017-11-01
In this paper we design the following two-step scheme to estimate the model parameter ω₀ of a quantum system: first we utilize the Fisher information with respect to an intermediate variable v = cos(ω₀t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second we explore how to estimate ω₀ from v by choosing t when a priori knowledge of ω₀ is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008.
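A classical analogue of the second step: if v = cos(ω₀t) is estimated with standard deviation σ_v, the delta method gives std(ω̂) ≈ σ_v / (t·|sin(ω₀t)|), so a priori knowledge of ω₀ lets one choose the measurement time t that minimizes this deviation. This numerical sketch (illustrative numbers, not the paper's quantum Fisher-information treatment) scans t over one period:

```python
import numpy as np

def omega_std(t, omega0, sigma_v):
    """Delta-method standard deviation of omega-hat = arccos(v)/t when
    v = cos(omega0*t) is measured with standard deviation sigma_v."""
    return sigma_v / (t * np.abs(np.sin(omega0 * t)))

omega0, sigma_v = 1.0, 0.01                 # assumed a priori value and noise level
ts = np.linspace(0.1, np.pi - 0.1, 1000)    # candidate times, avoiding sin = 0
t_best = ts[np.argmin(omega_std(ts, omega0, sigma_v))]
```

The minimizer solves tan(ω₀t) = -ω₀t, i.e. t ≈ 2.03/ω₀ here, not the naive sin(ω₀t) = 1 point, because the factor t also rewards waiting longer.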
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Orbit/attitude estimation with LANDSAT Landmark data
NASA Technical Reports Server (NTRS)
Hall, D. L.; Waligora, S.
1979-01-01
The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied. The preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and of a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model, and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit, attitude, and camera bias parameters is given.
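A priori covariance weighting of solve-for parameters, as used in GTDS-style differential correction, amounts to augmenting the least-squares normal equations with the prior information matrix. A generic NumPy sketch of this blend (not GTDS code; all inputs hypothetical):

```python
import numpy as np

def weighted_ls_with_prior(A, y, W, x0, P0):
    """Solve-for parameters from observations y = A x + noise (weight matrix W),
    blended with an a priori estimate x0 of covariance P0, in normal-equation form."""
    P0_inv = np.linalg.inv(P0)
    N = A.T @ W @ A + P0_inv            # information matrix (data + prior)
    rhs = A.T @ W @ y + P0_inv @ x0
    x_hat = np.linalg.solve(N, rhs)
    P_hat = np.linalg.inv(N)            # formal covariance of the estimate
    return x_hat, P_hat
```

A tight prior (small P0) pins the estimate to x0; a loose prior recovers ordinary weighted least squares, which is how a priori covariance weighting trades data against prior knowledge.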
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
On estimating the phase of periodic waveform in additive Gaussian noise, part 2
NASA Astrophysics Data System (ADS)
Rauch, L. L.
1984-11-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight into the estimation problem for the small noise and large noise cases.
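For the simplest case in this abstract (known constant frequency, flat phase prior), the optimum estimator reduces to correlating the data against quadrature references. A minimal sketch under that assumption:

```python
import numpy as np

def ml_phase(y, t, omega):
    """Maximum-likelihood phase estimate for y = A*cos(omega*t + phi) + n,
    with known constant frequency omega and white Gaussian noise n.
    Correlating against quadrature references gives I ~ A cos(phi), Q ~ -A sin(phi)."""
    i = np.sum(y * np.cos(omega * t))
    q = np.sum(y * np.sin(omega * t))
    return np.arctan2(-q, i)
```

With an informative Gaussian a priori phase distribution, the MAP estimator would additionally weight this correlation against the prior mean; the flat-prior case above is the ML limit.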
NASA Astrophysics Data System (ADS)
Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe
2017-11-01
Intersatellite Link (ISL) technology helps to realize the autonomous updating of broadcast ephemeris and clock error parameters for Global Navigation Satellite Systems (GNSS). ISLs constitute an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination might lead to constellation drift and rotation, and even to divergence of the orbit determination. Fortunately, predicted orbits with good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of autonomous satellite orbit determination can be improved by consideration of a priori orbit information, and vice versa. However, rotation and translation errors in the a priori orbit will remain in the ultimate result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements eliminating satellite clock errors is presented, and the orbit determination precision is analyzed under different data processing settings. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of determined BDS orbits is improved by constraints from more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry, tracking, and control system), and better than 6 m with a priori orbit constraints of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring and Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs.
Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane link and out-of-plane link have different contributions to observation configuration and system observability. The POD with weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
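The dual one-way observable that eliminates satellite clock errors can be sketched directly: the half-sum of the two one-way pseudoranges cancels the clock terms, and the half-difference isolates them. All numbers below are hypothetical, and light-time and epoch-offset effects are ignored in this sketch:

```python
# geometric range and clock offsets (illustrative values: metres, seconds)
c = 299792458.0
d = 40_000_000.0                 # inter-satellite geometric distance
dt_a, dt_b = 3.2e-6, -1.7e-6     # clock errors of satellites A and B

rho_ab = d + c * (dt_b - dt_a)   # one-way pseudorange measured A -> B
rho_ba = d + c * (dt_a - dt_b)   # one-way pseudorange measured B -> A

range_obs = 0.5 * (rho_ab + rho_ba)       # clock-free range observable, used by the POD
clock_obs = 0.5 * (rho_ab - rho_ba) / c   # relative clock offset dt_b - dt_a
```

Because the clock terms enter the two directions with opposite signs, the half-sum depends only on geometry, which is why the POD can use ISLs without solving for the clocks simultaneously.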
The unsaturated flow in porous media with dynamic capillary pressure
NASA Astrophysics Data System (ADS)
Milišić, Josipa-Pina
2018-05-01
In this paper we consider a degenerate pseudoparabolic equation for the wetting saturation of an unsaturated two-phase flow in porous media with a dynamic capillary pressure-saturation relationship in which the relaxation parameter depends on the saturation. Following the approach given in [13], the existence of a weak solution is proved using Galerkin approximation and regularization techniques. A priori estimates needed for passing to the limit as the regularization parameter goes to zero are obtained by using appropriate test functions, motivated by the fact that the considered PDE admits a natural generalization of the classical Kullback entropy. Finally, special care was taken in obtaining an estimate of the mixed-derivative term by combining the information from the capillary pressure with the obtained a priori estimates on the saturation.
Noniterative estimation of a nonlinear parameter
NASA Technical Reports Server (NTRS)
Bergstroem, A.
1973-01-01
An algorithm is described which estimates the parameters X = (x1, x2, ..., xm) and p in an approximation problem Ax ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
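The separable structure of Ax ≈ y(p) means the linear part can be solved exactly for each trial value of p, so a global search over p alone sidesteps the local minima of linearization. The sketch below shows that idea with a plain grid search (an illustration of the separable principle, not Bergstroem's series-expansion algorithm):

```python
import numpy as np

def global_fit(A, y_of_p, p_grid):
    """Grid search over the nonlinear parameter p; for each trial p the linear
    parameters x are recovered exactly by linear least squares, and the global
    minimiser over the grid is kept. No a priori value of p is required."""
    best = None
    for p in p_grid:
        y = y_of_p(p)
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        res = np.linalg.norm(A @ x - y)
        if best is None or res < best[2]:
            best = (p, x, res)
    return best
```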
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
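The core of the method, a recursive Fourier transform feeding frequency-domain equation error, can be sketched for a first-order model dx/dt = a·x + b·u. This is an illustrative reconstruction with made-up numbers, not Morelli's implementation or the F-18 data:

```python
import numpy as np

# hypothetical first-order system dx/dt = a*x + b*u; (a, b) are to be recovered
a_true, b_true, dt, n = -2.0, 1.0, 0.001, 10000
t = np.arange(n) * dt
u = (t < 1.0).astype(float)           # pulse input, so x decays back to ~0
x = np.zeros(n)
for k in range(n - 1):                # forward-Euler simulation of the "flight data"
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

omegas = 2 * np.pi * np.linspace(0.1, 1.0, 10)   # analysis frequencies [rad/s]
X = np.zeros(len(omegas), complex)
U = np.zeros(len(omegas), complex)
for k in range(n):                    # recursive (sample-by-sample) Fourier sums
    e = np.exp(-1j * omegas * t[k])
    X += x[k] * e * dt
    U += u[k] * e * dt

# equation error in the frequency domain: jw*X = a*X + b*U, solved by least squares
A = np.column_stack([X, U])
rhs = 1j * omegas * X
A_ri = np.vstack([A.real, A.imag])    # stack real and imaginary parts
rhs_ri = np.concatenate([rhs.real, rhs.imag])
a_hat, b_hat = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)[0]
```

In a real-time setting the Fourier sums would be updated once per new sample (as in the per-sample loop above) and the small least-squares problem re-solved at each update, which is what keeps the computational requirements low.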
Global identifiability of linear compartmental models--a computer algebra algorithm.
Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C
1998-01-01
A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and in number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of the use of GLOBI to analyze the a priori global identifiability of some complex biological compartmental models are provided.
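The flavour of such a test can be shown with SymPy on a small two-compartment model: the parameters are a priori globally identifiable exactly when the map from parameters to transfer-function coefficients has a unique inverse. This is a toy stand-in for GLOBI's Buchberger-based machinery:

```python
import sympy as sp

# two-compartment model: dx1/dt = -(k01 + k21)*x1 + k12*x2 + u,  dx2/dt = k21*x1 - k12*x2,
# observed output y = x1; transfer function y/u = (s + k12) / (s^2 + (k01+k21+k12)*s + k01*k12)
k01, k21, k12 = sp.symbols('k01 k21 k12', positive=True)
c1, c2, c3 = sp.symbols('c1 c2 c3', positive=True)   # coefficients observable from data

equations = [
    (k01 + k21 + k12) - c1,   # denominator s-coefficient
    (k01 * k12) - c2,         # denominator constant term
    k12 - c3,                 # numerator zero
]
sols = sp.solve(equations, [k01, k21, k12], dict=True)
# exactly one parameter solution  =>  a priori globally identifiable
```

For larger models these coefficient equations become high-degree polynomial systems, which is where a Groebner-basis (Buchberger) solver such as the one inside GLOBI becomes necessary.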
Guaranteed convergence of the Hough transform
NASA Astrophysics Data System (ADS)
Soffer, Menashe; Kiryati, Nahum
1995-01-01
The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
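A minimal normal-parameterisation Hough transform makes the quantization question concrete: each edge point votes for every (theta, rho) pair with rho = x·cos(theta) + y·sin(theta), and the grid resolution decides whether collinear points land in one accumulator cell. This sketch uses nearest-bin voting rather than the paper's continuous kernel:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
    """Normal-parameterisation Hough transform with nearest-bin voting.
    Returns the (theta, rho) of the accumulator peak and the accumulator itself."""
    if rho_max is None:
        rho_max = np.max(np.hypot(points[:, 0], points[:, 1]))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta bin
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    rho_peak = j / (n_rho - 1) * 2 * rho_max - rho_max
    return thetas[i], rho_peak, acc
```

The paper's question is precisely how coarse n_theta and n_rho may be before point-location errors smear the votes of a true line across neighbouring cells and the peak is missed.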
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic estimation of unknown parameters without persistent excitation and the capability to directly control the transient response time of the estimates. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concept. The data accumulation principle is the main tool for achieving asymptotic estimation of the unknown parameters. It relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed-tracking vector control of a three-phase induction motor.
The Estimation of Precisions in the Planning of Uas Photogrammetric Surveys
NASA Astrophysics Data System (ADS)
Passoni, D.; Federici, B.; Ferrando, I.; Gagliolo, S.; Sguerso, D.
2018-05-01
The Unmanned Aerial System (UAS) is widely used in photogrammetric surveys both of structures and of small areas. Geomatics focuses attention on the metric quality of the final products of the survey, creating several 3D modelling applications from UAS images. As is widely known, the quality of the results derives from the quality of the image acquisition phase, which needs an a priori estimation of the expected precisions. The planning phase is typically managed using dedicated tools, adapted from the traditional aerial-photogrammetric flight plan. But a UAS flight has features completely different from a traditional one; hence, the use of UAS for photogrammetric applications today requires a growth in planning knowledge. The basic idea of this research is to provide a drone photogrammetric flight planning tool that considers the required metric precisions, given a priori the classical parameters of photogrammetric planning: flight altitude, overlaps, and geometric parameters of the camera. The created "office suite" allows a realistic planning of a photogrammetric survey, starting from an approximate knowledge of the Digital Surface Model (DSM) and the effective attitude parameters, which change along the route. The planning products are the overlap of the images, the Ground Sample Distance (GSD), and the precision on each pixel, taking into account the real geometry. The different tested procedures, the obtained results, and the solution proposed for the a priori estimation of precisions in the particular case of UAS surveys are reported here.
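The basic a priori planning quantities follow from the classical pinhole relations: GSD = pixel pitch x altitude / focal length, and the exposure base and strip spacing follow from the footprint and the chosen overlaps. A sketch with hypothetical camera and flight numbers (not the paper's suite, and assuming flat terrain and nadir imagery):

```python
# hypothetical UAS flight-planning inputs
focal_length_mm = 8.8          # camera focal length
pixel_size_um = 2.4            # physical pixel pitch
altitude_m = 100.0             # flight height above ground
sensor_px = (5472, 3648)       # image width (across track) and height (along track)
overlap = (0.8, 0.7)           # forward overlap, side overlap

gsd_m = (pixel_size_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)
footprint_m = (sensor_px[0] * gsd_m, sensor_px[1] * gsd_m)
base_m = footprint_m[1] * (1 - overlap[0])      # along-track distance between exposures
spacing_m = footprint_m[0] * (1 - overlap[1])   # distance between adjacent strips
```

The paper's point is that with a varying DSM and changing attitude these quantities vary per image, so the constant-altitude numbers above are only the starting approximation.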
NASA Astrophysics Data System (ADS)
Fisher, Daniel; Poulsen, Caroline A.; Thomas, Gareth E.; Muller, Jan-Peter
2016-03-01
In this paper we evaluate the impact on the cloud parameter retrievals of the ORAC (Optimal Retrieval of Aerosol and Cloud) algorithm following the inclusion of stereo-derived cloud top heights as a priori information. This is performed in a mathematically rigorous way using the ORAC optimal estimation retrieval framework, which includes the facility to use such independent a priori information. Key to the use of a priori information is a characterisation of its associated uncertainty. This paper demonstrates the improvements that are possible using this approach and also considers their impact on the microphysical cloud parameters retrieved. The Along-Track Scanning Radiometer (AATSR) instrument has two views and three thermal channels, so it is well placed to demonstrate the synergy of the two techniques. The stereo retrieval is able to improve the accuracy of the retrieved cloud top height when compared to collocated Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), particularly in the presence of boundary layer inversions and high clouds. The impact of the stereo a priori information on the microphysical cloud properties of cloud optical thickness (COT) and effective radius (RE) was evaluated and generally found to be very small for single-layer cloud conditions over open water (mean RE differences of 2.2 (±5.9) microns and mean COT differences of 0.5 (±1.8) for single-layer ice clouds over open water at elevations above 9 km, which are most strongly affected by the inclusion of the a priori).
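In scalar form, folding an a priori stereo height into an optimal-estimation retrieval is an inverse-variance blend, which also shows why the uncertainty characterisation of the a priori matters so much. A simplified sketch of the principle (not the ORAC code; numbers in any usage are illustrative):

```python
def combine(prior, prior_sigma, meas, meas_sigma):
    """Scalar optimal-estimation update: inverse-variance blend of an a priori
    value (e.g. a stereo cloud top height) with the retrieval's own information."""
    w_p, w_m = 1.0 / prior_sigma**2, 1.0 / meas_sigma**2
    x = (w_p * prior + w_m * meas) / (w_p + w_m)
    sigma = (w_p + w_m) ** -0.5       # posterior uncertainty, always below both inputs
    return x, sigma
```

An overconfident prior_sigma would drag the retrieval toward the stereo height even when the radiances disagree, which is why the a priori uncertainty must be characterised honestly.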
NASA Astrophysics Data System (ADS)
Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel
2016-12-01
On its way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does, of course, not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
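The ICLS idea of forcing zenith wet delays to be non-negative can be illustrated with a projected-gradient solver for least squares under x >= 0, a simple stand-in for a full convex-optimization ICLS implementation (the matrix and data below are hypothetical):

```python
import numpy as np

def nonneg_ls(A, y, iters=5000, lr=None):
    """Inequality-constrained least squares sketch: minimise ||A x - y||^2 subject
    to x >= 0 by projected gradient descent (clip to the feasible set each step)."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.clip(x - lr * (A.T @ (A @ x - y)), 0.0, None)
    return x
```

As the abstract notes, the constraint only makes meteorological sense if the a priori hydrostatic delay is accurate; otherwise the wet estimates absorb hydrostatic modelling error and the non-negativity bound is the wrong prior.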
Predicting thermal history a-priori for magnetic nanoparticle hyperthermia of internal carcinoma
NASA Astrophysics Data System (ADS)
Dhar, Purbarun; Sirisha Maganti, Lakshmi
2017-08-01
This article proposes a simple and realistic method in which a direct analytical expression can be derived for the temperature field within a tumour during magnetic nanoparticle hyperthermia. The approximated analytical expression for the thermal history within the tumour is derived based on the lumped capacitance approach and considers all therapy protocols and parameters. The method provides an easy framework for estimating hyperthermia protocol parameters promptly. The model has been validated against several experimental reports on animal models such as mice, rabbits, and hamsters, as well as human clinical trials. It has been observed that the model is able to accurately estimate the thermal history within the carcinoma during the hyperthermia therapy. The present approach may find implications in a priori estimation of the thermal history in internal tumours for optimizing magnetic hyperthermia treatment protocols with respect to the ablation time, tumour size, magnetic drug concentration, field strength, field frequency, nanoparticle material and size, tumour location, and so on.
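A lumped-capacitance thermal history has a closed form: with heat capacity m·c, effective loss coefficient h_eff, and absorbed power P, the temperature relaxes exponentially toward T_body + P/h_eff. The sketch below uses illustrative values, not the paper's protocol parameters:

```python
import numpy as np

def tumour_temperature(t, T_body=37.0, P=0.4, h_eff=0.05, m_c=20.0):
    """Lumped-capacitance thermal history: m_c * dT/dt = P - h_eff * (T - T_body),
    so T(t) = T_body + (P/h_eff) * (1 - exp(-h_eff * t / m_c)).
    P [W], h_eff [W/K], m_c [J/K] are illustrative, not the paper's values."""
    tau = m_c / h_eff                       # thermal time constant [s]
    return T_body + (P / h_eff) * (1.0 - np.exp(-t / tau))
```

Inverting this expression for the time to reach a target temperature (e.g. a 43 degree C ablation threshold) is what makes a priori protocol planning possible without solving the full bioheat equation.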
NASA Astrophysics Data System (ADS)
Alipour, M. H.; Kibler, Kelly M.
2018-02-01
A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
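The multi-criteria idea, balancing runoff efficiency against agreement with soft a priori parameter values, can be sketched as a penalized score. The weight and the quadratic penalty form below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe runoff efficiency (1 is a perfect fit)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def multi_objective_score(sim, obs, params, soft_centroids, soft_widths, w=0.5):
    """Blend runoff efficiency with closeness to 'soft' a priori parameter values.
    Higher is better; w and the quadratic penalty are assumed, not from the paper."""
    penalty = np.mean(((params - soft_centroids) / soft_widths) ** 2)
    return nse(sim, obs) - w * penalty
```

A parameter set that fits the hydrograph equally well but sits far from the soft a priori values (such as the large Cmax residuals of the single-objective models in the abstract) is penalized, steering calibration toward physically representative parameters.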
Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia
NASA Astrophysics Data System (ADS)
Mather, B.; Moresi, L. N.; Rayner, P. J.
2017-12-01
The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and a priori information to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that have assumed homogeneity. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the `forward' mode, but at the sacrifice of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis, that perturbs our observations and a priori information within their probability distributions, to estimate the posterior uncertainty of thermo-chemical parameters in the crust.
A computational model for biosonar echoes from foliage
Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf
2017-01-01
Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals’ sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats. PMID:28817631
A computational model for biosonar echoes from foliage.
Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf
2017-01-01
Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals' sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats.
Multiparameter elastic full waveform inversion with facies-based constraints
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing
2018-06-01
Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, like in reservoir delineation, faces inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.
Symbolic Regression for the Estimation of Transfer Functions of Hydrological Models
NASA Astrophysics Data System (ADS)
Klotz, D.; Herrnegger, M.; Schulz, K.
2017-11-01
Current concepts for parameter regionalization of spatially distributed rainfall-runoff models rely on the a priori definition of transfer functions that globally map land surface characteristics (such as soil texture, land use, and digital elevation) into the model parameter space. However, these transfer functions are often chosen ad hoc or derived from small-scale experiments. This study proposes and tests an approach for inferring the structure and parametrization of possible transfer functions from runoff data to potentially circumvent these difficulties. The concept uses context-free grammars to generate possible proposition for transfer functions. The resulting structure can then be parametrized with classical optimization techniques. Several virtual experiments are performed to examine the potential for an appropriate estimation of transfer function, all of them using a very simple conceptual rainfall-runoff model with data from the Austrian Mur catchment. The results suggest that a priori defined transfer functions are in general well identifiable by the method. However, the deduction process might be inhibited, e.g., by noise in the runoff observation data, often leading to transfer function estimates of lower structural complexity.
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
2016-01-01
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined,s as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are usedmore » for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasra, Ajay; Law, Kody J. H.; Zhou, Yan
Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined,s as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are usedmore » for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.« less
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work that estimates model parameters whereby an assumed propagation model is used to describe the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
Explicit error bounds for the α-quasi-periodic Helmholtz problem.
Lord, Natacha H; Mulholland, Anthony J
2013-10-01
This paper considers a finite element approach to modeling electromagnetic waves in a periodic diffraction grating. In particular, an a priori error estimate associated with the α-quasi-periodic transformation is derived. This involves the solution of the associated Helmholtz problem being written as a product of e(iαx) and an unknown function called the α-quasi-periodic solution. To begin with, the well-posedness of the continuous problem is examined using a variational formulation. The problem is then discretized, and a rigorous a priori error estimate, which guarantees the uniqueness of this approximate solution, is derived. In previous studies, the continuity of the Dirichlet-to-Neumann map has simply been assumed and the dependency of the regularity constant on the system parameters, such as the wavenumber, has not been shown. To address this deficiency, in this paper an explicit dependence on the wavenumber and the degree of the polynomial basis in the a priori error estimate is obtained. Since the finite element method is well known for dealing with any geometries, comparison of numerical results obtained using the α-quasi-periodic transformation with a lattice sum technique is then presented.
NASA Astrophysics Data System (ADS)
Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.
2014-02-01
A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common tools for such estimations and widely used. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available different data sources have to be screened for information content of processes, e.g. if data sources contain information on mean values, spatial or temporal variability etc. for the entire catchment or only sub-catchments. In a second step, the information content has to be mapped to relevant model components, which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for other available data sources. In that study the gauged spring discharge (GSD) method, flash flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results from modelling using a priori parameter values from literature as a benchmark are compared. The estimated recharge rates of the calibrated model deviate less than ±10% from the estimates derived from WTF method. Larger differences are visible in the years with high uncertainties in rainfall input data. 
The performance of the calibrated model during validation produces better results than applying the model with only a priori parameter values. The model with a priori parameter values from literature tends to overestimate recharge rates with up to 30%, particular in the wet winter of 1991/1992. An overestimation of groundwater recharge and hence available water resources clearly endangers reliable water resource managing in water scarce region. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
Methods of Constructing a Blended Performance Function Suitable for Formation Flight
NASA Technical Reports Server (NTRS)
Ryan, John J.
2017-01-01
This paper presents two methods for constructing an approximate performance function of a desired parameter using correlated parameters. The methods are useful when real-time measurements of a desired performance function are not available to applications such as extremum-seeking control systems. The first method approximates an a priori measured or estimated desired performance function by combining real-time measurements of readily available correlated parameters. The parameters are combined using a weighting vector determined from a minimum-squares optimization to form a blended performance function. The blended performance function better matches the desired performance function mini- mum than single-measurement performance functions. The second method expands upon the first by replacing the a priori data with near-real-time measurements of the desired performance function. The resulting blended performance function weighting vector is up- dated when measurements of the desired performance function are available. Both methods are applied to data collected during formation- flight-for-drag-reduction flight experiments.
1998-01-01
Parabolic Boundary Control Problem ARIELA BRIANI AND MAURIZIO FALCONE Dipartimento di Matematica Universitä di Pisa Dipartimento di Matematica ...Ministry for University and Scientific Research (MURST Project "Analisi Numerica e Matematica Computazionale"). 50 A Priori Estimates for the...Briani Dipartimento di Matematica Universitä di Pisa Via Buonarroti 2 1-56126 Pisa e-mail:briani@dm.unipi.it Maurizio Falcone Dipartimento di
Joint inversion of regional and teleseismic earthquake waveforms
NASA Astrophysics Data System (ADS)
Baker, Mark R.; Doser, Diane I.
1988-03-01
A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
Workflow for Criticality Assessment Applied in Biopharmaceutical Process Validation Stage 1.
Zahel, Thomas; Marschall, Lukas; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Mueller, Eric M; Murphy, Patrick; Natschläger, Thomas; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-12
Identification of critical process parameters that impact product quality is a central task during regulatory requested process validation. Commonly, this is done via design of experiments and identification of parameters significantly impacting product quality (rejection of the null hypothesis that the effect equals 0). However, parameters which show a large uncertainty and might result in an undesirable product quality limit critical to the product, may be missed. This might occur during the evaluation of experiments since residual/un-modelled variance in the experiments is larger than expected a priori. Estimation of such a risk is the task of the presented novel retrospective power analysis permutation test. This is evaluated using a data set for two unit operations established during characterization of a biopharmaceutical process in industry. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow of how to mitigate the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering constant product quality, reduce process variance and increase patient safety.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
Precise regional baseline estimation using a priori orbital information
NASA Technical Reports Server (NTRS)
Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey
1990-01-01
A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline, located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning the 20, 21, 25, 26, and 27 of January, 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optial set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.
A Hydrological Modeling Framework for Flood Risk Assessment for Japan
NASA Astrophysics Data System (ADS)
Ashouri, H.; Chinnayakanahalli, K.; Chowdhary, H.; Sen Gupta, A.
2016-12-01
Flooding has been the most frequent natural disaster that claims lives and imposes significant economic losses to human societies worldwide. Japan, with an annual rainfall of up to approximately 4000 mm is extremely vulnerable to flooding. The focus of this research is to develop a macroscale hydrologic model for simulating flooding toward an improved understanding and assessment of flood risk across Japan. The framework employs a conceptual hydrological model, known as the Probability Distributed Model (PDM), as well as the Muskingum-Cunge flood routing procedure for simulating streamflow. In addition, a Temperature-Index model is incorporated to account for snowmelt and its contribution to streamflow. For an efficient calibration of the model, in terms of computational timing and convergence of the parameters, a set of A Priori parameters is obtained based on the relationships between the model parameters and the physical properties of watersheds. In this regard, we have implemented a particle tracking algorithm and a statistical model which use high resolution Digital Terrain Models to estimate different time related parameters of the model such as time to peak of the unit hydrograph. In addition, global soil moisture and depth data are used to generate A Priori estimation of maximum soil moisture capacity, an important parameter of the PDM model. Once the model is calibrated, its performance is examined during the Typhoon Nabi which struck Japan in September 2005 and caused severe flooding throughout the country. The model is also validated for the extreme precipitation event in 2012 which affected Kyushu. In both cases, quantitative measures show that simulated streamflow depicts good agreement with gauge-based observations. The model is employed to simulate thousands of possible flood events for the entire Japan which makes a basis for a comprehensive flood risk assessment and loss estimation for the flood insurance industry.
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference which adapt to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
Development and system identification of a light unmanned aircraft for flying qualities research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, M.E.; Andrisani, D. II
This paper describes the design, construction, flight testing and system identification of a light weight remotely piloted aircraft and its use in studying flying qualities in the longitudinal axis. The short period approximation to the longitudinal dynamics of the aircraft was used. Parameters in this model were determined a priori using various empirical estimators. These parameters were then estimated from flight data using a maximum likelihood parameter identification method. A comparison of the parameter values revealed that the stability derivatives obtained from the empirical estimators were reasonably close to the flight test results. However, the control derivatives determined by themore » empirical estimators were too large by a factor of two. The aircraft was also flown to determine how the longitudinal flying qualities of light weight remotely piloted aircraft compared to full size manned aircraft. It was shown that light weight remotely piloted aircraft require much faster short period dynamics to achieve level I flying qualities in an up-and-away flight task.« less
Uncertainty in temperature-based determination of time of death
NASA Astrophysics Data System (ADS)
Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan
2018-03-01
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model, that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that and prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.
Systematic effects in LOD from SLR observations
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Gerstl, Michael; Hugentobler, Urs; Angermann, Detlef; Müller, Horst
2014-09-01
Beside the estimation of station coordinates and the Earth’s gravity field, laser ranging observations to near-Earth satellites can be used to determine the rotation of the Earth. One parameter of this rotation is ΔLOD (excess Length Of Day) which describes the excess revolution time of the Earth w.r.t. 86,400 s. Due to correlations among the different parameter groups, it is difficult to obtain reliable estimates for all parameters. In the official ΔLOD products of the International Earth Rotation and Reference Systems Service (IERS), the ΔLOD information determined from laser ranging observations is excluded from the processing. In this paper, we study the existing correlations between ΔLOD, the orbital node Ω, the even zonal gravity field coefficients, cross-track empirical accelerations and relativistic accelerations caused by the Lense-Thirring and deSitter effect in detail using first order Gaussian perturbation equations. We found discrepancies due to different a priories by using different gravity field models of up to 1.0 ms for polar orbits at an altitude of 500 km and up to 40.0 ms, if the gravity field coefficients are estimated using only observations to LAGEOS 1. If observations to LAGEOS 2 are included, reliable ΔLOD estimates can be achieved. Nevertheless, an impact of the a priori gravity field even on the multi-satellite ΔLOD estimates can be clearly identified. Furthermore, we investigate the effect of empirical cross-track accelerations and the effect of relativistic accelerations of near-Earth satellites on ΔLOD. A total effect of 0.0088 ms is caused by not modeled Lense-Thirring and deSitter terms. The partial derivatives of these accelerations w.r.t. the position and velocity of the satellite cause very small variations (0.1 μs) on ΔLOD.
Downdating a time-varying square root information filter
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.
1990-01-01
A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attentionmore » is paid to a natural a priori probability distribution arising from a diffusion process.« less
NASA Technical Reports Server (NTRS)
Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.;
2016-01-01
We used one-way laser ranging data from International Laser Ranging Service (ILRS) ground stations to NASA's Lunar Reconnaissance Orbiter (LRO) for a demonstration of orbit determination. In the one-way setup, the state of LRO and the parameters of the spacecraft clock and of all involved ground station clocks must be estimated simultaneously. This setup introduces many correlated parameters that are resolved by using a priori constraints. Moreover, the observation data coverage and the errors accumulating from the dynamical and clock modeling limit the maximum arc length. The objective of this paper is to investigate the effect of the arc length, the dynamical and clock modeling accuracy, and the observation data coverage on the accuracy of the results. We analyzed multiple arcs with lengths of 2 and 7 days during a one-week period in Science Mission phase 02 (SM02, November 2010) and compared the trajectories, the post-fit measurement residuals, and the estimated clock parameters. We further incorporated simultaneous passes from multiple stations within the observation data to investigate the expected improvement in positioning. The estimated trajectories were compared to the nominal LRO trajectory, and the clock parameters (offset, rate, and aging) to results from the literature. Arcs estimated with one-way ranging data differed from the nominal LRO trajectory by 5-30 m. While the estimated LRO clock rates agreed closely with the a priori constraints, the aging parameters absorbed clock modeling errors as the clock arc length increased. Because of high correlations between the different ground station clocks and the limited clock modeling accuracy, their differences agreed with the literature only in order of magnitude. We found that the incorporation of simultaneous passes requires improved modeling, in particular, to enable the expected improvement in positioning.
We found that gaps of more than 12 h (approximately 6 successive LRO orbits) in the observation data coverage prevented the successful estimation of arcs, whether 2 or 7 days long, with our given modeling.
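The clock model referred to above (offset, rate, and aging) is a low-order polynomial in time, and its parameters can be recovered by linear least squares. A rough sketch with invented numbers (the parameter values, noise level, and arc length below are illustrative assumptions, not values from the study):

```python
import numpy as np

# Quadratic clock model common in one-way ranging analyses:
#   dt(t) = a0 + a1*t + (a2/2)*t**2   (offset, rate, aging)
t = np.linspace(0.0, 7 * 86400.0, 200)             # 7-day arc, seconds
a0_true, a1_true, a2_true = 3e-4, 2e-10, 1e-16      # hypothetical clock parameters
dt = a0_true + a1_true * t + 0.5 * a2_true * t**2
dt_obs = dt + 1e-9 * np.random.default_rng(1).standard_normal(t.size)  # ~1 ns noise

A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])   # design matrix
a_hat, *_ = np.linalg.lstsq(A, dt_obs, rcond=None)      # estimated (a0, a1, a2)
```

With dense coverage the three parameters separate cleanly; the correlation between rate and aging grows as coverage gaps widen, which is one way to see why long gaps break the estimation.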
Multispectrum retrieval techniques applied to Venus deep atmosphere and surface problems
NASA Astrophysics Data System (ADS)
Kappel, David; Arnold, Gabriele; Haus, Rainer
The Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) aboard ESA's Venus Express is continuously collecting nightside emission data (among others) from Venus. A radiative transfer model of Venus' atmosphere in conjunction with a suitable retrieval algorithm can be used to estimate atmospheric and surface parameters by fitting simulated spectra to the measured data. Because of the limited spectral resolution of the VIRTIS-M-IR spectra that have been used so far, many different parameter sets can explain the same measurement equally well. As a common regularization measure, reasonable a priori knowledge of some parameters is applied to suppress solutions implausibly far from the expected range. It is beneficial to introduce a parallel coupled retrieval of several measurements. Since spatially and temporally contiguous measurements are not expected to originate from completely unrelated parameters, an assumed a priori correlation of the parameters during the retrieval can help to reduce arbitrary fluctuations of the solutions, to avoid subsidiary solutions, and to attenuate the interference of measurement noise by keeping the parameters close to a general trend. As an illustration, the resulting improvements for some swaths on the Northern hemisphere are presented. Some atmospheric features are still not very well constrained, for instance CO2 absorption under the extreme environmental conditions close to the surface. A broad-band continuum due to far-wing and collision-induced absorption is commonly used to correct individual line absorption. Since the spectrally dependent continuum is constant for all measurements, the retrieval of parameters common to all spectra may be used to give some estimates of the continuum absorption. These estimates are necessary, for example, for the coupled parallel retrieval of a consistent local cloud modal composition, which in turn enables a refined surface emissivity retrieval.
We gratefully acknowledge the support from the VIRTIS/Venus Express Team, from ASI, CNES, CNRS, and from the DFG funding the ongoing work.
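The effect of an assumed a priori correlation between contiguous retrievals can be mimicked with a simple smoothness prior. A toy one-parameter sketch (not the VIRTIS retrieval itself; the prior weight and noise level are invented) shows how coupling keeps the solutions close to a general trend:

```python
import numpy as np

# N adjacent footprints, one retrieved parameter each. Independent retrievals
# are noisy; a first-difference smoothness prior (a stand-in for the assumed
# a priori spatial correlation) couples them.
rng = np.random.default_rng(2)
N = 50
x_true = np.sin(np.linspace(0, np.pi, N))          # smooth "general trend"
y = x_true + 0.3 * rng.standard_normal(N)          # independent noisy estimates

D = np.diff(np.eye(N), axis=0)                     # first-difference operator
lam = 4.0                                          # prior weight (assumed)
# minimize ||y - x||^2 + lam * ||D x||^2  =>  (I + lam D^T D) x = y
x_coupled = np.linalg.solve(np.eye(N) + lam * D.T @ D, y)

print(np.mean((y - x_true)**2), np.mean((x_coupled - x_true)**2))
```

The coupled solution has a markedly lower error against the underlying trend, which is exactly the "attenuate measurement noise, suppress arbitrary fluctuations" effect described in the abstract.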
NASA Astrophysics Data System (ADS)
Kudryashova, M.; Rosenblatt, P.; Marty, J.-C.
2015-08-01
The mass of Phobos is an important parameter which, together with the second-order gravity field coefficients and the libration amplitude, constrains the internal structure and nature of the moon, and it therefore needs to be known with high precision. Nevertheless, the Phobos mass (GM, more precisely) estimated by different authors from diverse data sets and methods varies by more than their 1-sigma errors. The most complete lists of GM values are presented in the works of R. Jacobson (2010) and M. Paetzold et al. (2014) and include estimates ranging from (5.39 ± 0.03)·10^5 m^3/s^2 (Smith et al., 1995) to (8.5 ± 0.7)·10^5 m^3/s^2 (Williams et al., 1988). Furthermore, even the comparison of estimates coming from the same estimation procedure applied to consecutive flybys of the same spacecraft (s/c) shows large variations in GM. This behavior is very pronounced in the GM estimates stemming from the Viking 1 flybys in February 1977 (as well as from the MEX flybys, though with smaller amplitude), and in this work we attempt to identify its roots. The errors of Phobos GM estimates depend on the precision of the model (e.g., the accuracy of the a priori Phobos ephemeris and its a priori GM value) as well as on the quality of the radio-tracking measurements (noise, coverage, flyby distance). In the present work we test the impact of the error sources mentioned above by means of simulations. We also consider the effect of uncertainties in the a priori Phobos positions on the GM estimates from real observations. Apparently, the estimation strategy (i.e., how the real observations are split into data arcs, whether they stem from close approaches of Phobos by the spacecraft or from analysis of the s/c orbit evolution around Mars) has an impact on the Phobos GM estimate.
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniform spatio-temporal sampled data structure to characterize complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
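A dependency-free sketch of the underlying idea: kernel regression over non-uniform spatio-temporal samples with a Mahalanobis-type Gaussian kernel. Kernel ridge regression stands in for SVR here, and the field, length scales, and ridge parameter are all invented:

```python
import numpy as np

# Non-uniformly sampled spatio-temporal "quality parameter" field.
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(150, 2))              # columns: position [km], time [days]
y = np.sin(X[:, 0]) + 0.5 * np.cos(0.5 * X[:, 1]) + 0.1 * rng.standard_normal(150)

M = np.diag([1.0, 0.25])                           # inverse squared scales (1 km, 2 days; assumed)
def kernel(A, B):
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, M, d))  # Mahalanobis-type Gaussian

K = kernel(X, X)                                   # Mercer kernel (Gram) matrix
alpha = np.linalg.solve(K + 0.1 * np.eye(len(X)), y)   # ridge parameter assumed

Xq = np.array([[5.0, 5.0]])                        # query point on the interpolation map
y_hat = kernel(Xq, X) @ alpha
```

In the paper the kernel itself is built from the estimated spatio-temporal autocorrelation rather than assumed, which is precisely where the a priori knowledge enters.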
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates of invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
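The "guaranteed" flavor of invalidation can be conveyed with a much simpler stand-in than ADMIT's relaxation machinery: propagate interval bounds over a parameter box and intersect with interval measurements; an empty intersection certifies that no parameter in the box explains the data. This is a toy sketch of the principle, not ADMIT's actual constraint-satisfaction formulation:

```python
# Scalar linear map x[k+1] = p * x[k] with p in a box; a priori knowledge enters
# as the parameter box and the initial-state interval.
def propagate(x_lo, x_hi, p_lo, p_hi):
    corners = [x_lo * p_lo, x_lo * p_hi, x_hi * p_lo, x_hi * p_hi]
    return min(corners), max(corners)

def invalidated(p_box, x0_box, meas_boxes):
    x_lo, x_hi = x0_box
    for m_lo, m_hi in meas_boxes:
        x_lo, x_hi = propagate(x_lo, x_hi, *p_box)
        x_lo, x_hi = max(x_lo, m_lo), min(x_hi, m_hi)   # intersect with interval data
        if x_lo > x_hi:
            return True        # certificate: no parameter in the box fits the data
    return False

# A "decay" hypothesis p in [0.4, 0.6] against data that stays near 1:
print(invalidated((0.4, 0.6), (1.0, 1.0), [(0.9, 1.1), (0.8, 1.2)]))  # True
```

Because the bounds are propagated conservatively, a returned certificate of invalidity is guaranteed; failure to invalidate, by contrast, proves nothing.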
Rocadenbosch, F; Soriano, C; Comerón, A; Baldasano, J M
1999-05-20
A first inversion of the backscatter profile and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is treated by means of an extended Kalman filter (EKF). The EKF approach enables one to overcome the intrinsic limitations of standard straightforward nonmemory procedures such as the slope method, exponential curve fitting, and the backward inversion algorithm. Whereas those procedures are inherently not adaptable, because independent inversions are performed for each return signal and neither the statistics of the signals nor a priori uncertainties (e.g., boundary calibrations) are taken into account, the Kalman filter updates itself with each new lidar return, weighted by the imbalance between the a priori estimates of the optical parameters (i.e., past inversions) and the new estimates based on a minimum-variance criterion. Calibration errors and initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate atmospheric stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach enables one to retrieve the optical parameters as time-range-dependent functions and hence to track the atmospheric evolution; the performance of this approach is limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model used. The study ends with an encouraging practical inversion of a live scene measured at the Nd:YAG elastic-backscatter lidar station at our premises at the Polytechnic University of Catalonia, Barcelona.
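The innovation-weighted update described above has the generic Kalman form. A minimal scalar EKF sketch illustrates it, with an invented log-measurement model standing in for the paper's actual lidar return model:

```python
import numpy as np

# One EKF cycle: the gain K weights the imbalance (innovation) between the
# a priori prediction and the new measurement, in a minimum-variance sense.
def ekf_step(x, P, z, f, F, h, H, Q, R):
    x_pred = f(x)                       # time update (a priori estimate)
    P_pred = F(x) * P * F(x) + Q
    K = P_pred * H(x_pred) / (H(x_pred) * P_pred * H(x_pred) + R)
    x_new = x_pred + K * (z - h(x_pred))        # measurement update
    P_new = (1.0 - K * H(x_pred)) * P_pred
    return x_new, P_new

# Track a constant extinction-like parameter from noisy log-power data
# (purely illustrative dynamics, not the paper's lidar model):
rng = np.random.default_rng(4)
truth, x, P = 1.0, 0.2, 1.0
for _ in range(200):
    z = np.log(truth) + 0.05 * rng.standard_normal()
    x, P = ekf_step(x, P, z,
                    f=lambda x: x, F=lambda x: 1.0,     # random-walk state model
                    h=np.log, H=lambda x: 1.0 / x,      # nonlinear measurement
                    Q=1e-4, R=0.05**2)
```

Starting from a poor initial guess, the estimate converges to the truth as returns accumulate, which is the adaptivity that the nonmemory procedures lack.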
Crustal dynamics project data analysis fixed station VLBI geodetic results
NASA Technical Reports Server (NTRS)
Ryan, J. W.; Ma, C.
1985-01-01
The Goddard VLBI group reports the results of analyzing the fixed observatory VLBI data available to the Crustal Dynamics Project through the end of 1984. All POLARIS/IRIS full-day data are included. The mobile site at Platteville, Colorado is also included since its occupation bears on the study of plate stability. Data from 1980 through 1984 were used to obtain the catalog of site and radio source positions labeled S284C. Using this catalog two types of one-day solutions were made: (1) to estimate site and baseline motions; and (2) to estimate Earth rotation parameters. A priori Earth rotation parameters were interpolated to the epoch of each observation from BIH Circular D.
Raiche, Gilles; Blais, Jean-Guy
2009-01-01
In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: the adaptive correction for bias proposed by Bock and Mislevy (1982), an adaptive a priori estimate, and an adaptive integration interval.
A Comparative Study of Co-Channel Interference Suppression Techniques
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas
1997-01-01
We describe three methods of combating co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from "signal switching", which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is also discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.
Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.
Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan
2006-08-01
An ideal imaging system should provide information with high sensitivity and high spatial and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution in positron emission tomography (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success in optimizing multimodal image registration. The MMI criterion is used to estimate the parameters of the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and a better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter with parameters optimized by an MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.
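A plain (unconstrained) Wiener deconvolution conveys the basic restoration step. In this sketch the noise-to-signal parameter `nsr` is simply assumed, whereas the paper tunes a sharpness-constrained variant of the filter via the MMI criterion against MRI:

```python
import numpy as np

# Generic 1-D Wiener deconvolution in the frequency domain.
def wiener_deconvolve(blurred, psf, nsr):
    H = np.fft.fft(psf, n=blurred.size)
    G = np.conj(H) / (np.abs(H)**2 + nsr)          # Wiener filter transfer function
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# Blur a boxcar "activity profile" with a Gaussian PSF, then restore it:
x = np.zeros(128); x[40:60] = 1.0
t = np.arange(128)
psf = np.exp(-0.5 * ((t - t.mean()) / 2.0)**2); psf /= psf.sum()
psf = np.roll(psf, -64)                            # center the PSF at index 0
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-3)
```

The restored profile recovers the sharp edges that the blur removed; the choice of `nsr` (and, in the paper, the sharpness constraint) controls the trade-off between resolution recovery and noise amplification.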
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, Ed; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)), where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels, whereas its influence on predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
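The "integrate out the variances" step has a classical closed form: with a noninformative prior on each species' error variance, the marginalized negative log-posterior reduces to a sum of (n_j/2)·log SSE_j terms, so no weights need to be estimated explicitly. A toy sketch under these assumptions (synthetic data, one shared parameter; not the paper's reactive transport model):

```python
import numpy as np

# Marginalized objective after integrating out per-species error variances:
#   J(theta) = sum_j (n_j / 2) * log SSE_j(theta)
def marginal_objective(residuals_by_species):
    return sum(0.5 * r.size * np.log(np.sum(r**2)) for r in residuals_by_species)

# Two "species" measured with very different (unknown) noise levels:
rng = np.random.default_rng(5)
theta_true = 2.0
t = np.linspace(0, 1, 40)
obs_a = theta_true * t + 0.01 * rng.standard_normal(40)     # precise species
obs_b = theta_true * t**2 + 1.0 * rng.standard_normal(40)   # noisy species

grid = np.linspace(1.0, 3.0, 201)
costs = [marginal_objective([obs_a - th * t, obs_b - th * t**2]) for th in grid]
theta_hat = grid[int(np.argmin(costs))]
```

Because each species enters through the log of its own sum of squares, the precise species automatically receives the larger effective weight, which is exactly what explicit weight estimation would have to discover.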
NASA Astrophysics Data System (ADS)
Koffi, A. K.; Gosset, M.; Zahiri, E.-P.; Ochou, A. D.; Kacou, M.; Cazenave, F.; Assamoi, P.
2014-06-01
As part of the African Monsoon Multidisciplinary Analysis (AMMA) field campaign, an X-band dual-polarization Doppler radar was deployed in Benin, West Africa, in 2006 and 2007, together with a reinforced rain gauge network and several optical disdrometers. Based on this data set, a comparative study of several rainfall estimators that use X-band polarimetric radar data is presented. In tropical convective systems as encountered in Benin, microwave attenuation by rain is significant, and quantitative precipitation estimation (QPE) at X-band is a challenge. Here, several algorithms based on the combined use of reflectivity, differential reflectivity, and differential phase shift are evaluated against rain gauges and disdrometers. Four rainfall estimators were tested on twelve rainy events: the use of attenuation-corrected reflectivity only (estimator R(ZH)), the use of the specific phase shift only (R(KDP)), the combination of specific phase shift and differential reflectivity (R(KDP,ZDR)), and an estimator that uses three radar parameters (R(ZH,ZDR,KDP)). The coefficients of the power-law relationships between rain rate and radar variables were adjusted based either on disdrometer data and simulation or on radar-gauge observations. The three polarimetry-based algorithms with coefficients predetermined on observations outperform the R(ZH) estimator for rain rates above 10 mm/h, which account for most of the rainfall in the studied region. For the highest rain rates (above 30 mm/h), R(KDP) shows even better scores and, given its performance and simplicity of implementation, is recommended. The radar-based retrieval of two parameters of the raindrop size distribution, the normalized intercept parameter NW and the volumetric median diameter Dm, was evaluated on four rainy days using the disdrometers. The frequency distributions of the two parameters retrieved by the radar are very close to those observed with the disdrometers.
NW retrieval based on a combination of ZH-KDP-ZDR works well regardless of the a priori assumption made about drop shape. Dm retrieval based on ZDR alone performs well, but if satisfactory ZDR measurements are not available, the combination ZH-KDP provides satisfactory results for both Dm and NW if an appropriate a priori assumption about drop shape is made.
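An R(KDP) estimator of the kind recommended above is a simple power law. The coefficients below are illustrative placeholders only, not the values fitted in the study (those are adjusted from disdrometer data or radar-gauge observations):

```python
import numpy as np

# Specific-differential-phase rain-rate estimator R = a * KDP**b.
# a, b are hypothetical X-band-like coefficients for illustration.
def rain_rate_kdp(kdp_deg_per_km, a=16.0, b=0.79):
    return a * np.maximum(kdp_deg_per_km, 0.0)**b   # mm/h; negative KDP clipped to 0

print(rain_rate_kdp(np.array([0.5, 1.0, 3.0])))
```

Part of R(KDP)'s appeal, noted in the abstract, is visible here: KDP is immune to attenuation and calibration offsets, so the estimator needs no attenuation correction step before applying the power law.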
Preliminary calculation of solar cosmic ray dose to the female breast in space mission
NASA Technical Reports Server (NTRS)
Shavers, Mark; Poston, John W.; Atwell, William; Hardy, Alva C.; Wilson, John W.
1991-01-01
No regulatory dose limits are specifically assigned for the radiation exposure of female breasts during manned space flight. However, the relatively high radiosensitivity of the glandular tissue of the breasts and its potential exposure to solar flare protons on short- and long-term missions mandate a priori estimation of the associated risks. A model for estimating exposure within the breast is developed for use in future NASA missions. The female breast and torso geometry is represented by a simple interim model. A recently developed proton dose-buildup procedure is used for estimating doses. The model considers geomagnetic shielding, magnetic-storm conditions, spacecraft shielding, and body self-shielding. Inputs to the model include proton energy spectra, spacecraft orbital parameters, STS orbiter-shielding distribution at a given position, and a single parameter allowing for variation in breast size.
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
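An EAP estimator of the kind listed above can be sketched as a posterior-mean computation over a quadrature grid. The 2PL items below are invented for illustration; the abstract does not specify a model or item bank:

```python
import numpy as np

# EAP estimate of theta under the 2PL IRT model with a standard normal prior,
# computed by brute-force quadrature on a fixed grid.
def eap_theta(responses, a, b, grid=np.linspace(-4, 4, 81)):
    # P(correct | theta) for each item at each grid point (2PL logistic)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    like = np.prod(np.where(np.array(responses)[:, None] == 1, p, 1 - p), axis=0)
    post = like * np.exp(-0.5 * grid**2)           # unnormalized posterior
    return np.sum(grid * post) / np.sum(post)      # posterior mean = EAP

a = np.array([1.2, 0.8, 1.5, 1.0])                 # discriminations (hypothetical)
b = np.array([-1.0, 0.0, 0.5, 1.0])                # difficulties (hypothetical)
print(eap_theta([1, 1, 0, 0], a, b))               # mixed response pattern
```

MAP estimation differs only in taking the grid point maximizing `post` instead of the posterior mean, and ML drops the prior factor entirely.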
NASA Astrophysics Data System (ADS)
Chouaib, Wafa; Alila, Younes; Caldwell, Peter V.
2018-05-01
The need for predictions of flow time series persists at ungauged catchments, motivating the research goals of our study. By means of the Sacramento model, this paper explores the use of parameter transfer within homogeneous regions of similar climate and flow characteristics and makes comparisons with predictions from a priori parameters. We assessed the performance using the Nash-Sutcliffe efficiency (NS), bias, mean monthly hydrograph, and flow duration curve (FDC). The study was conducted on a large data set of 73 catchments within the eastern US. Two approaches to parameter transferability were developed and evaluated: (i) parameter transfer within homogeneous regions using one donor catchment specific to each region, and (ii) parameter transfer disregarding the geographical limits of homogeneous regions, where one donor catchment was common to all regions. Comparing the two parameter transfers enabled us to assess the gain in performance from the parameter regionalization and its respective constraints and limitations. The parameter transfer within homogeneous regions outperformed the a priori parameters and led to a decrease in bias and an increase in efficiency, reaching a median NS of 0.77 and an NS of 0.85 at individual catchments. The use of the FDC revealed the effect of bias on the inaccuracy of predictions from parameter transfer. In one specific region, of mountainous and forested catchments, the prediction accuracy of the parameter transfer was less satisfactory and equivalent to that of the a priori parameters. In this region, the parameter transfer from the outsider catchment provided the best performance: less biased, with smaller uncertainty in the medium flow percentiles (40%-60%). The large disparity of energy conditions explained the lack of performance from parameter transfer in this region.
In addition, subsurface stormflow is predominant in this region and lateral preferential flow is likely, whose specific properties further explain the reduced efficiency. Testing parameter transferability using criteria of similar climate and flow characteristics at ungauged catchments, and comparing with predictions from a priori parameters, are novel contributions of this study. The limitations of both approaches are identified, and recommendations are made for future research.
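The NS scores quoted above are Nash-Sutcliffe efficiencies. The definition is standard; the flow values below are toy numbers:

```python
import numpy as np

# NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
# 1 is a perfect fit; 0 means no better than predicting the observed mean.
def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])          # toy daily flows
print(nash_sutcliffe(obs, obs))                     # perfect simulation
print(nash_sutcliffe(obs, np.full(5, obs.mean())))  # mean-flow benchmark
```

Because NS is normalized by the variance of the observations, a median NS of 0.77 across catchments is directly comparable between basins of very different flow magnitudes.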
NASA Astrophysics Data System (ADS)
Rocadenbosch, Francesc; Comeron, Adolfo; Vazquez, Gregori; Rodriguez-Gomez, Alejandro; Soriano, Cecilia; Baldasano, Jose M.
1998-12-01
Up to now, retrieval of the atmospheric extinction and backscatter has mainly relied on standard straightforward non-memory procedures such as the slope method, exponential curve fitting, and Klett's method. Yet their performance is ultimately limited by an inherent lack of adaptability, as they work only with the present return and take into account neither past estimates, the statistics of the signals, nor a priori uncertainties. In this work, a first inversion of the backscatter and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is tackled by means of an extended Kalman filter (EKF), which overcomes these limitations. Thus, as new return signals arrive, the filter updates itself, weighted by the imbalance between the a priori estimates of the optical parameters and the new ones based on a minimum-variance criterion. Calibration errors or initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach enables retrieval of the sought-after optical parameters as time-range-dependent functions and hence tracking of the atmospheric evolution, its performance being limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model assumed. The study ends with an encouraging practical inversion of a live scene measured with the Nd:YAG elastic-backscatter lidar station at our premises in Barcelona.
Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method
NASA Astrophysics Data System (ADS)
Kenderi, Gábor; Fidlin, Alexander
2014-12-01
The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.
Phase estimation without a priori phase knowledge in the presence of loss
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolodynski, Jan; Demkowicz-Dobrzanski, Rafal
2010-11-15
We find the optimal scheme for quantum phase estimation in the presence of loss when no a priori knowledge on the estimated phase is available. We prove analytically an explicit lower bound on estimation uncertainty, which shows that, as a function of the number of probes, quantum precision enhancement amounts at most to a constant factor improvement over classical strategies.
NASA Astrophysics Data System (ADS)
Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes
2018-04-01
Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.
2008-01-01
We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions that best matched the dipping layer structure of nearby outcrops. A reasonably well matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, the use of conventional regularization parameters did not provide as realistic results. Thus, we consider that even if there is only qualitative a-priori information about a site (i.e., visual) - in the case of the East Canyon Dam, Utah - it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization, to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
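Regularization's role in converting an ill-posed estimation problem into a well-posed one can be sketched with a generic Tikhonov penalty. The smoothing forward operator and the value of alpha below are illustrative, not the paper's spline parameterization:

```python
import numpy as np

# Ill-posed toy problem: a smoothing (nearly rank-deficient) forward
# operator A maps an unknown profile x to observations b.
rng = np.random.default_rng(1)
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-20.0 * (t[:, None] - t[None, :]) ** 2)  # smoothing kernel
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha ||x||^2 via the normal equations;
    the penalty makes the system well-conditioned."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ b)

x_naive = np.linalg.solve(A, b)          # unregularized: noise-amplified
x_reg = tikhonov(A, b, alpha=1e-4)       # regularized: stable

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The unregularized solve amplifies the data noise through the tiny singular values of A, while the penalized solution stays close to the true profile, which mirrors the comparison the abstract draws with non-regularized history matching.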
NASA Astrophysics Data System (ADS)
Alipour, M.; Kibler, K. M.
2017-12-01
Despite advances in flow prediction, managers of ungauged rivers located within broad regions of sparse hydrometeorologic observation still lack prescriptive methods robust to the data challenges of such regions. We propose a multi-objective streamflow prediction framework for regions of minimum observation to select models that balance runoff efficiency with choice of accurate parameter values. We supplement sparse observed data with uncertain or low-resolution information incorporated as `soft' a priori parameter estimates. The performance of the proposed framework is tested against traditional single-objective and constrained single-objective calibrations in two catchments in a remote area of southwestern China. We find that the multi-objective approach performs well with respect to runoff efficiency in both catchments (NSE = 0.74 and 0.72), within the range of efficiencies returned by other models (NSE = 0.67 - 0.78). However, soil moisture capacity estimated by the multi-objective model agrees with a priori estimates (parameter residuals of 61 cm versus 289 and 518 cm for maximum soil moisture capacity in one catchment, and 20 cm versus 246 and 475 cm in the other; parameter residuals of 0.48 versus 0.65 and 0.7 for soil moisture distribution shape factor in one catchment, and 0.91 versus 0.79 and 1.24 in the other). Thus, optimization to a multi-criteria objective function led to very different representations of soil moisture capacity as compared to models selected by single-objective calibration, without compromising runoff efficiency. These different soil moisture representations may translate into considerably different hydrological behaviors. The proposed approach thus offers a preliminary step towards greater process understanding in regions of severe data limitations. 
For instance, the multi-objective framework may be an adept tool to discern between models of similar efficiency to select models that provide the "right answers for the right reasons". Managers may feel more confident to utilize such models to predict flows in fully ungauged areas.
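A minimal sketch of such a multi-objective function, combining Nash-Sutcliffe runoff efficiency with a penalty on distance from the soft a priori parameter estimates. The weighting, scales, and parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(obs, sim, params, priors, prior_scales, w=0.5):
    """Blend runoff efficiency with a penalty on the distance between
    calibrated parameters and their soft a priori estimates (minimize)."""
    residual = np.mean(((params - priors) / prior_scales) ** 2)
    return w * (1.0 - nse(obs, sim)) + (1.0 - w) * residual

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])       # observed flows
sim = np.array([1.1, 2.8, 2.2, 4.7, 4.1])       # simulated flows
score = multi_objective(obs, sim,
                        params=np.array([60.0]),   # e.g. soil moisture capacity (cm)
                        priors=np.array([61.0]),   # soft a priori estimate
                        prior_scales=np.array([50.0]))
```

Minimizing such a blended score is one way to discern between models of similar efficiency, since a candidate that achieves the same NSE with parameters far from the a priori estimates scores worse.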
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
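The marginal-likelihood idea can be sketched directly: sum the binomial likelihood of the repeated counts over a Poisson prior on each site's abundance N. The counts, the truncation point n_max, and the fixed detection probability used below are illustrative:

```python
import numpy as np
from scipy.stats import poisson, binom

def n_mixture_loglik(counts, lam, p, n_max=200):
    """Log-likelihood of an N-mixture model: counts is a (sites, visits)
    array; site abundances N ~ Poisson(lam); each count ~ Binomial(N, p).
    N is marginalized out by summing its prior up to n_max."""
    counts = np.asarray(counts)
    N = np.arange(n_max + 1)               # support of the mixing distribution
    prior = poisson.pmf(N, lam)            # Pr(N = n)
    ll = 0.0
    for site in counts:
        # Pr(y_1..y_T | N) for each candidate abundance N
        cond = np.prod([binom.pmf(y, N, p) for y in site], axis=0)
        ll += np.log(np.sum(prior * cond))
    return ll

# Three sites, three visits each; grid search over lambda at fixed p = 0.5.
counts = np.array([[2, 3, 2], [0, 1, 1], [4, 3, 5]])
lams = np.linspace(0.5, 15.0, 30)
best_lam = lams[np.argmax([n_mixture_loglik(counts, l, 0.5) for l in lams])]
```

The repeated visits are what make lambda and p separately estimable; with a single visit per site only their product would be identified.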
Quaternion normalization in spacecraft attitude determination
NASA Technical Reports Server (NTRS)
Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.
1993-01-01
Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. 
However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them with the brute-force method presented in the literature. The present paper addresses the issue of normalization in the MEKF and examines several MEKF normalization techniques.
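The additive update and the brute-force normalization it motivates can be sketched as follows; the quaternion values are illustrative and the surrounding AEKF machinery is omitted:

```python
import numpy as np

def additive_update(q_est, dq):
    """AEKF-style additive update: the estimated four-component error dq
    is simply added, so the result generally loses unit norm."""
    return q_est + dq

def normalize(q):
    """Brute-force normalization: project the estimate back onto the
    unit sphere so it again represents a rotation."""
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])        # a priori estimate (identity attitude)
dq = np.array([0.02, 0.1, -0.05, 0.03])   # estimated additive correction
q_raw = additive_update(q, dq)            # off the unit sphere
q_norm = normalize(q_raw)                 # back on the unit sphere
```

The MEKF avoids this issue by construction, since its multiplicative update composes the a priori quaternion with an error quaternion, but as the abstract notes, normalization questions still arise there.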
Discharge prediction in the Upper Senegal River using remote sensing data
NASA Astrophysics Data System (ADS)
Ceccarini, Iacopo; Raso, Luciano; Steele-Dunne, Susan; Hrachowitz, Markus; Nijzink, Remko; Bodian, Ansoumana; Claps, Pierluigi
2017-04-01
The Upper Senegal River, West Africa, is a poorly gauged basin. Nevertheless, discharge predictions are required in this river for the optimal operation of the downstream Manantali reservoir, flood forecasting, development plans for the entire basin, and studies of adaptation to climate change. Despite the need for reliable discharge predictions, currently available rainfall-runoff models for this basin perform poorly, particularly during extreme regimes, both low-flow and high-flow. In this research we develop a rainfall-runoff model that combines remote-sensing input data and a priori knowledge of catchment physical characteristics. This semi-distributed model is based on conceptual numerical descriptions of hydrological processes at the catchment scale. Because of the lack of reliable input data from ground observations, we use Tropical Rainfall Measuring Mission (TRMM) remote-sensing data for precipitation and the Global Land Evaporation Amsterdam Model (GLEAM) for terrestrial potential evaporation. The model parameters are selected by a combination of calibration, matching observed output over a large set of hydrological signatures, and a priori knowledge of the catchment. The Generalized Likelihood Uncertainty Estimation (GLUE) method was used to choose the most likely range for the parameter sets. Analysis of different experiments enhances our understanding of the added value of distributed remote-sensing data and a priori information in rainfall-runoff modelling. Results of this research will be used for decision making at different scales, contributing to a rational use of water resources in this river.
ERIC Educational Resources Information Center
Raiche, Gilles; Blais, Jean-Guy
In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
A meta-learning system based on genetic algorithms
NASA Astrophysics Data System (ADS)
Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain
2004-04-01
The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experience or meta-knowledge. We suggest using genetic algorithms as the basis of such an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimation of the future states of parameters, and is also applied to finding optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research on genetic algorithms and their capability to learn to learn.
Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam Ali
2006-01-01
A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.
Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.
Song, Xuegang; Zhang, Yuexin; Liang, Dakai
2017-10-10
This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the input force estimation process, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
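The Runge-Kutta fourth-order discretization step applied to the state equations can be sketched generically; the toy oscillator below stands in for the beam model and is not the paper's system:

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One Runge-Kutta fourth-order step for dx/dt = f(x, u), used to
    discretize continuous state equations for a discrete-time filter."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy system: undamped unit oscillator x'' = -x with input force u.
f = lambda x, u: np.array([x[1], -x[0] + u])
x = np.array([1.0, 0.0])                  # initial displacement, velocity
dt = 0.01
for _ in range(int(round(np.pi / dt))):   # integrate over half a period
    x = rk4_step(f, x, 0.0, dt)
# After t = pi the state should be near (-1, 0).
```

In a filter such as the SRCKF, this discretized map plays the role of the process model that propagates the a priori state estimate between measurements.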
He, Kaifei; Xu, Tianhe; Förste, Christoph; Petrovic, Svetozar; Barthelmes, Franz; Jiang, Nan; Flechtner, Frank
2016-01-01
When applying the Global Navigation Satellite System (GNSS) for precise kinematic positioning in airborne and shipborne gravimetry, multiple GNSS receiving equipment is often fixed mounted on the kinematic platform carrying the gravimetry instrumentation. Thus, the distances among these GNSS antennas are known and invariant. This information can be used to improve the accuracy and reliability of the state estimates. For this purpose, the known distances between the antennas are applied as a priori constraints within the state parameters adjustment. These constraints are introduced in such a way that their accuracy is taken into account. To test this approach, GNSS data of a Baltic Sea shipborne gravimetric campaign have been used. The results of our study show that an application of distance constraints improves the accuracy of the GNSS kinematic positioning, for example, by about 4 mm for the radial component. PMID:27043580
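The pseudo-observation way of introducing such distance constraints, with their accuracy taken into account through weighting, can be sketched in a simplified 1-D adjustment. The geometry, noise levels, and weights below are illustrative assumptions, not the paper's GNSS processing:

```python
import numpy as np

# Two antenna coordinates (1-D for simplicity) observed independently,
# plus the known inter-antenna distance added as a heavily weighted
# pseudo-observation in the least-squares adjustment.
rng = np.random.default_rng(2)
x_true = np.array([0.0, 2.0])             # true positions; distance 2.000 m
sigma_obs, sigma_d = 0.05, 0.001          # observation / constraint accuracy

z = x_true + sigma_obs * rng.standard_normal(2)   # noisy position observations
d = 2.0                                            # known, invariant distance

# Design matrix rows: [x1 observation, x2 observation, constraint x2 - x1]
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 1.0]])
y = np.array([z[0], z[1], d])
W = np.diag([sigma_obs ** -2, sigma_obs ** -2, sigma_d ** -2])  # weights

x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted LS estimate
dist_hat = x_hat[1] - x_hat[0]
```

Because the constraint carries a much larger weight than the position observations, the adjusted baseline is pulled almost exactly to the known distance while the absolute positions remain data-driven.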
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
Because of its added complexity compared to the WM, we recommend that the WM be used for a priori estimates of the number of MC iterations. Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC results, only fragments of this abstract survive; the report's topics include an a priori estimate of the number of MC iterations, MC result accuracy, and using the percentage error of the mean to estimate the number of MC iterations.
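A standard a priori estimate of the required number of MC iterations follows from the central limit theorem; the sketch below is a generic textbook bound, not necessarily the report's WM or WSM formulation:

```python
import math

def mc_iterations(sigma, eps, confidence_z=1.96):
    """A priori estimate of the number of Monte Carlo iterations needed
    so that the half-width of the confidence interval on the mean is at
    most eps: n >= (z * sigma / eps)^2, by the central limit theorem."""
    return math.ceil((confidence_z * sigma / eps) ** 2)

# Example: output standard deviation ~2.0, mean wanted within 0.1 at 95%.
n = mc_iterations(sigma=2.0, eps=0.1)
```

Since sigma is usually unknown a priori, a pilot run's sample standard deviation is typically substituted, and the estimate is refined as iterations accumulate.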
NASA Technical Reports Server (NTRS)
Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark
2013-01-01
As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
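Minimum variance with a priori amounts to a precision-weighted combination of the prior estimate and the measurements. The sensitivities, covariances, and states below are hypothetical placeholders, not MSL/MEADS values:

```python
import numpy as np

def min_variance_with_apriori(x0, P0, H, z, R):
    """Minimum-variance estimate combining an a priori estimate x0
    (covariance P0) with measurements z = H x + noise (covariance R)."""
    P0i, Ri = np.linalg.inv(P0), np.linalg.inv(R)
    P = np.linalg.inv(P0i + H.T @ Ri @ H)      # posterior covariance
    x = P @ (P0i @ x0 + H.T @ Ri @ z)          # posterior estimate
    return x, P

# Toy: estimate [Mach, angle of attack] from two pressure-like readings.
x0 = np.array([2.0, 0.05])                     # a priori (e.g., from CFD)
P0 = np.diag([0.25, 0.01])                     # a priori covariance
H = np.array([[1.0, 0.0],                      # hypothetical sensitivities
              [1.0, 2.0]])
z = np.array([2.10, 2.20])                     # measurements
R = np.diag([0.01, 0.01])                      # measurement noise covariance
x_hat, P_hat = min_variance_with_apriori(x0, P0, H, z, R)
```

The posterior covariance is never larger than either information source alone, which is why combining pressure data with pre-flight CFD tightens the aerodynamic parameter estimates.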
Evaluating mallard adaptive management models with time series
Conn, P.B.; Kendall, W.L.
2004-01-01
Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models receiving more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was one of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when that model is not used to generate the data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. 
All of these suggestions should help lower the probability of erroneous learning in mallard AHM and adaptive management in general.
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models, corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI, and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring any inversion to be performed. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
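Under a Gaussian noise model, the CRB computation reduces to inverting the Fisher information matrix. The four-band sensitivity matrix and noise variances below are hypothetical, not the paper's bio-optical models:

```python
import numpy as np

def crb_gaussian(J, Sigma):
    """Cramer-Rao bound for additive Gaussian noise: with measurement
    Jacobian J (bands x parameters) and noise covariance Sigma, the
    Fisher information is F = J^T Sigma^{-1} J and the CRB is F^{-1}."""
    F = J.T @ np.linalg.solve(Sigma, J)
    return np.linalg.inv(F)

# Hypothetical 4-band sensor, two parameters (e.g., chlorophyll, depth).
J = np.array([[0.8, 0.1],
              [0.5, 0.3],
              [0.2, 0.6],
              [0.1, 0.9]])
Sigma = np.diag([1e-4, 1e-4, 2e-4, 2e-4])    # per-band noise variances

crb_all = crb_gaussian(J, Sigma)             # both parameters unknown
# Perfect a priori knowledge of parameter 2: drop its column; the
# bound on parameter 1 can only decrease.
crb_known = crb_gaussian(J[:, :1], Sigma)
```

Comparing the [0, 0] entries of the two bounds quantifies how much perfect a priori knowledge of one parameter (e.g., LiDAR-derived bathymetry) tightens the attainable variance on the other, without running any inversion.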
A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation
Wang, Jinfeng; Li, Hong; Fang, Zhichao
2014-01-01
We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square-integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L^2-norm for the scalar unknown u and a priori error estimates in the (L^2)^2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H^1-norm for the scalar unknown u. Finally, we present some numerical results to illustrate the efficiency of the new method. PMID:24701153
Bayesian estimation inherent in a Mexican-hat-type neural network
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2016-05-01
Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.
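For a scalar Gaussian prior and a Gaussian sensory likelihood, the Bayesian-optimal integration the abstract refers to reduces to a precision-weighted average; a minimal sketch with illustrative numbers:

```python
def bayes_fuse(mu_prior, var_prior, mu_sense, var_sense):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian
    sensory likelihood: returns the Bayesian posterior mean and variance."""
    w = var_sense / (var_prior + var_sense)    # weight on the prior mean
    mu_post = w * mu_prior + (1.0 - w) * mu_sense
    var_post = var_prior * var_sense / (var_prior + var_sense)
    return mu_post, var_post

# A noisy observation (variance 4.0) is pulled toward the a priori
# mean 0.0 (variance 1.0); the posterior variance shrinks below both.
mu, var = bayes_fuse(0.0, 1.0, 5.0, 4.0)
```

The more reliable the sensory signal (smaller var_sense), the less the estimate relies on the prior, which is the trade-off the abstract maps onto the relative strengths of recurrent connectivity and external stimulus.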
Remaining lifetime modeling using State-of-Health estimation
NASA Astrophysics Data System (ADS)
Beganovic, Nejra; Söffker, Dirk
2017-08-01
Technical systems and system components undergo gradual degradation over time. Continuous degradation is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential to guarantee at least the predefined lifetime of the system specified by the manufacturer, or better, to extend it. A precondition for lifetime extension, however, is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. This contribution discusses modeling and the selection of suitable lifetime models from a database based on current SoH conditions. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both approaches can model stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about the aging processes in the system and accurate estimation of SoH. Estimation of SoH is here conditioned on tracking the actual damage accumulated in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate estimation of SoH, and the model includes a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as the particular model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with their advantages and disadvantages. The approach is verified using cross-fold validation, exchanging training and test data. The newly introduced approach based on data-driven parametric models can be easily established, providing detailed information about the remaining useful/consumed lifetime, and is valid for systems with constant load but stochastically occurring damage.
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of the coefficient of variation, skewness, and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests then identified the best candidate distribution as the generalized extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was prescribed a priori as GEV Type II, the shape parameter was set a priori to 0.15, and a time interval term was constructed so that one set of parameters served all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE showed a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
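The L-moment fitting step described in this abstract can be sketched in a few lines. The probability-weighted-moment formulas and Hosking's rational approximation for the GEV shape parameter are standard; the parameter values and sample below are synthetic stand-ins, not Cyprus data.

```python
import math
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2          # mean, L-scale, L-skewness

def fit_gev_lmoments(x):
    """GEV parameters (location xi, scale alpha, shape k) via Hosking's
    approximation; k < 0 is the heavy-tailed Type II case in this convention."""
    l1, l2, t3 = sample_l_moments(x)
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    g = math.gamma(1 + k)
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * g)
    xi = l1 - alpha * (1 - g) / k
    return xi, alpha, k

# Synthetic check: draw from a known GEV by inverting its CDF,
# x(F) = xi + alpha * (1 - (-log F)^k) / k.
rng = np.random.default_rng(0)
xi, alpha, k = 20.0, 8.0, -0.15
u = rng.uniform(size=50000)
x = xi + alpha * (1 - (-np.log(u)) ** k) / k
print(fit_gev_lmoments(x))
```

With 50,000 synthetic annual maxima the recovered parameters land close to the generating values; with a 15-34 yr record the sampling noise in the shape parameter is far larger, which is exactly why the regional pooling above is needed.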
NASA Astrophysics Data System (ADS)
Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.
2013-12-01
Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
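A minimal sketch of the probabilistic idea, reduced to a single saturation value and a plain random-walk Metropolis sampler rather than MT-DREAM(ZS); the Archie-law constants, noise level, and data sizes are illustrative assumptions, not the bench-scale experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Archie's law forward model: bulk resistivity from brine saturation.
rho_w, phi, m, n_sat = 0.2, 0.35, 1.8, 2.0
def forward(sw):
    return rho_w * phi ** (-m) * sw ** (-n_sat)

# Synthetic ERT-style data for a true brine saturation of 0.6,
# with 5% multiplicative measurement noise.
sw_true, sigma = 0.6, 0.05
data = forward(sw_true) * (1 + sigma * rng.standard_normal(30))

def log_post(sw):
    if not 0.05 < sw < 1.0:                 # uniform prior on (0.05, 1)
        return -np.inf
    resid = (data - forward(sw)) / (sigma * forward(sw))
    return -0.5 * np.sum(resid ** 2)

# Random-walk Metropolis sampling of the posterior.
chain, sw = [], 0.3
lp = log_post(sw)
for _ in range(20000):
    prop = sw + 0.02 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        sw, lp = prop, lp_prop
    chain.append(sw)
post = np.array(chain[5000:])               # discard burn-in
print(post.mean(), post.std())
```

The posterior standard deviation is the quantity a deterministic inversion does not provide; in the paper the same principle is applied to full saturation fields under the three parameterizations listed above.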
Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation
NASA Astrophysics Data System (ADS)
Choi, J.; Raguin, L. G.
2010-10-01
Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. Tinsley
1993-01-01
A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh dependent norm in which the coefficients depend upon both the local mesh size h(sub K) and a number p(sub K) which can be identified with the spectral order of the local approximations over each element.
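Schematically, such an a priori estimate bounds the error by local contributions that decay in both h(sub K) and p(sub K); the version below only illustrates this typical h-p structure for a solution with local Sobolev regularity s(sub K), and is not the paper's exact norm or exponents:

```latex
\| u - u_{hp} \|_{DG}
  \;\le\; C \left( \sum_{K}
      \frac{h_K^{\,2\mu_K - 1}}{p_K^{\,2 s_K - 1}}\,
      \| u \|_{H^{s_K}(\Omega_K)}^{2} \right)^{1/2},
\qquad \mu_K = \min(p_K + 1,\; s_K),
```

with C independent of h(sub K) and p(sub K), so that refinement in either the mesh size or the local spectral order reduces the bound.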
Testing a new Free Core Nutation empirical model
NASA Astrophysics Data System (ADS)
Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald
2016-03-01
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
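The sliding-window estimation of time-variable FCN amplitude and phase amounts to a windowed least-squares fit of a sinusoid of fixed period. The sketch below uses synthetic daily celestial-pole offsets, a constant true amplitude, and a coarser window shift than the paper's day-by-day shift; the 430-day period is an illustrative round number for the retrograde FCN period.

```python
import numpy as np

rng = np.random.default_rng(2)
P = 430.0                        # assumed fixed FCN period in days (illustrative)
t = np.arange(0.0, 3000.0)       # synthetic daily nutation-offset epochs
A_true, B_true = 0.1, -0.05      # slowly varying in reality; constant here
y = (A_true * np.cos(2 * np.pi * t / P)
     + B_true * np.sin(2 * np.pi * t / P)
     + 0.02 * rng.standard_normal(t.size))

def sliding_fit(t, y, width=400.0, step=50.0):
    """Estimate time-variable amplitude/phase by windowed least squares."""
    out, start = [], t[0]
    while start + width <= t[-1]:
        m = (t >= start) & (t < start + width)
        G = np.column_stack([np.cos(2 * np.pi * t[m] / P),
                             np.sin(2 * np.pi * t[m] / P)])
        (a, b), *_ = np.linalg.lstsq(G, y[m], rcond=None)
        # amplitude and one phase convention for a*cos + b*sin
        out.append((start + width / 2, np.hypot(a, b), np.arctan2(-b, a)))
        start += step
    return np.array(out)

est = sliding_fit(t, y)          # columns: window center, amplitude, phase
print(est[:3])
```

Shrinking the window (here 400 days, as in the model described above) trades noise rejection for temporal resolution of the amplitude and phase variations.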
Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean
NASA Astrophysics Data System (ADS)
Yin, Mengtao; Liu, Guosheng
2017-12-01
A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. After the 1D-Var optimization, the standard deviations of the difference between observed and simulated brightness temperatures are of a similar magnitude to the observation errors defined for the observation error covariance matrix, indicating that the variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicate that the 1D-Var approach has a positive impact on the GMI-retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. The two estimates have a similar pattern of global distribution, and the difference between their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
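The Bayesian retrieval step can be illustrated as a Monte Carlo integration over the a priori database, weighting each database entry by how well its simulated brightness temperatures match the observation. All numbers below (channel sensitivities, observation error, database size) are invented toy values, not GMI physics.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy a priori database: snow water path (kg/m^2) and simulated
# brightness temperatures at two channels (linearized toy forward model).
swp_db = rng.uniform(0.0, 1.0, 2000)
def simulate_tb(swp):
    # deeper snow water -> stronger scattering depression (illustrative)
    return np.column_stack([250.0 - 40.0 * swp, 230.0 - 25.0 * swp])
tb_db = simulate_tb(swp_db)

obs_err = 2.0                    # K, assumed channel error (diagonal R)
tb_obs = simulate_tb(np.array([0.4]))[0] + obs_err * rng.standard_normal(2)

# Bayesian (Monte Carlo integration) retrieval: weight database entries
# by their Gaussian misfit to the observed brightness temperatures.
d2 = np.sum((tb_db - tb_obs) ** 2, axis=1) / obs_err ** 2
w = np.exp(-0.5 * d2)
swp_ret = np.sum(w * swp_db) / np.sum(w)
print(swp_ret)
```

The quality of this weighted average depends entirely on the database being physically consistent with the observations, which is what the 1D-Var optimization above is designed to ensure.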
Novel multireceiver communication systems configurations based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates various phase processes received at different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy-per-bit to noise-power-spectral-density ratio is achieved. A novel adaptive algorithm for the estimator of the signal model parameters when these are not known a priori is also presented.
Bayes estimation on parameters of the single-class classifier [for remotely sensed crop data]
NASA Technical Reports Server (NTRS)
Lin, G. C.; Minter, T. C.
1976-01-01
Normal procedures used for designing a Bayes classifier to classify wheat as the major crop of interest require not only training samples of wheat but also those of nonwheat. Therefore, ground truth must be available for the class of interest plus all confusion classes. The single-class Bayes classifier classifies data into the class of interest or the class 'other' but requires training samples only from the class of interest. This paper will present a procedure for Bayes estimation on the mean vector, covariance matrix, and a priori probability of the single-class classifier using labeled samples from the class of interest and unlabeled samples drawn from the mixture density function.
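A hedged one-dimensional sketch of the single-class idea: the class of interest is fit from labeled samples only, the mixture density is estimated from unlabeled samples (here with a simple Parzen estimator, standing in for the paper's Bayes estimation procedure), and a point is assigned to the class of interest when the resulting posterior exceeds 0.5. All distributions and the prior are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Labeled samples from the class of interest ("wheat") only,
# plus unlabeled samples drawn from the full mixture.
p_wheat = 0.4
wheat = rng.normal(0.0, 1.0, 500)
unlabeled = np.where(rng.uniform(size=2000) < p_wheat,
                     rng.normal(0.0, 1.0, 2000),
                     rng.normal(4.0, 1.0, 2000))

mu, var = wheat.mean(), wheat.var(ddof=1)   # Gaussian fit, class of interest

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

def mixture_density(x, data, h=0.3):
    """Parzen estimate of the mixture density from unlabeled samples."""
    return gauss(x[:, None], data[None, :], h ** 2).mean(axis=1)

def classify(x):
    """Assign to the class of interest where p * f_c(x) / f_mix(x) > 0.5."""
    post = p_wheat * gauss(x, mu, var) / mixture_density(x, unlabeled)
    return post > 0.5

test_x = np.array([-0.5, 0.0, 0.5, 3.5, 4.0, 5.0])
print(classify(test_x))
```

Note that no samples of the class 'other' are ever labeled: its influence enters only through the unlabeled mixture, which is the practical appeal of the single-class formulation.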
Model identification of signal transduction networks from data using a state regulator problem.
Gadkar, K G; Varner, J; Doyle, F J
2005-03-01
Advances in molecular biology provide an opportunity to develop detailed models of biological processes that can be used to obtain an integrated understanding of the system. However, development of useful models from the available knowledge of the system and experimental observations still remains a daunting task. In this work, a model identification strategy for complex biological networks is proposed. The approach includes a state regulator problem (SRP) that provides estimates of all the component concentrations and the reaction rates of the network using the available measurements. The full set of the estimates is utilised for model parameter identification for the network of known topology. An a priori model complexity test that indicates the feasibility of performance of the proposed algorithm is developed. Fisher information matrix (FIM) theory is used to address model identifiability issues. Two signalling pathway case studies, the caspase function in apoptosis and the MAP kinase cascade system, are considered. The MAP kinase cascade, with measurements restricted to protein complex concentrations, fails the a priori test and the SRP estimates are poor as expected. The apoptosis network structure used in this work has moderate complexity and is suitable for application of the proposed tools. Using a measurement set of seven protein concentrations, accurate estimates for all unknowns are obtained. Furthermore, the effects of measurement sampling frequency and quality of information in the measurement set on the performance of the identified model are described.
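The FIM-based a priori identifiability check can be sketched on a toy two-parameter model; the exponential decay model, time grid, and noise level below are assumptions for illustration, not the signaling networks of the paper.

```python
import numpy as np

# A priori identifiability via the Fisher information matrix (FIM):
# for measurements y(t_i) = a*exp(-k*t_i) + noise with std sigma, the FIM
# is S^T S / sigma^2, where S holds the output sensitivities dy/dtheta.
t = np.linspace(0.0, 5.0, 20)
a, k, sigma = 2.0, 0.8, 0.05

S = np.column_stack([np.exp(-k * t),            # dy/da
                     -a * t * np.exp(-k * t)])  # dy/dk
fim = S.T @ S / sigma ** 2

eigvals = np.linalg.eigvalsh(fim)
crlb = np.linalg.inv(fim)            # Cramer-Rao lower bounds on the diagonal
identifiable = eigvals.min() > 1e-6  # near-zero eigenvalue => not identifiable
print(identifiable, np.sqrt(np.diag(crlb)))
```

A measurement set that leaves the FIM near-singular, as with the complex-concentration-only MAP kinase measurements described above, signals before any fitting that some parameters cannot be recovered.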
Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene
2003-01-01
A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
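The core construction, mutually orthogonal multisines on disjoint harmonic sets with phases chosen to reduce the peak factor, can be sketched as follows. The Schroeder-style quadratic phase schedule and the frequency choices are illustrative, not the actual F-15 ACTIVE input design.

```python
import numpy as np

# Two excitation inputs built from disjoint harmonic sets of a common
# base period, so they are exactly orthogonal over that period; quadratic
# (Schroeder-style) phases spread energy in time to reduce the peak factor.
T, fs = 20.0, 50.0                       # record length (s), sample rate (Hz)
t = np.arange(0.0, T, 1.0 / fs)
ks1 = np.arange(2, 20, 2)                # input 1: even harmonics of 1/T
ks2 = np.arange(3, 21, 2)                # input 2: odd harmonics (disjoint)

def multisine(ks):
    phases = -np.pi * np.arange(len(ks)) ** 2 / len(ks)   # Schroeder-style
    u = sum(np.cos(2 * np.pi * k / T * t + ph) for k, ph in zip(ks, phases))
    return u / np.abs(u).max()           # normalize to unit amplitude

u1, u2 = multisine(ks1), multisine(ks2)
print(np.dot(u1, u2) / len(t))           # ~0: orthogonal in time
```

Because each input occupies its own harmonics, the inputs are also uncorrelated in the frequency domain, and no a priori dynamic model is needed to pick the excitation, only a frequency range of interest.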
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a noncon-cave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
NASA Astrophysics Data System (ADS)
Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.
2017-12-01
Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.
Contrast detection in fluid-saturated media with magnetic resonance poroelastography
Perriñez, Phillip R.; Pattison, Adam J.; Kennedy, Francis E.; Weaver, John B.; Paulsen, Keith D.
2010-01-01
Purpose: Recent interest in the poroelastic behavior of tissues has led to the development of magnetic resonance poroelastography (MRPE) as an alternative to single-phase MR elastographic image reconstruction. In addition to the elastic parameters (i.e., Lamé’s constants) commonly associated with magnetic resonance elastography (MRE), MRPE enables estimation of the time-harmonic pore-pressure field induced by external mechanical vibration. Methods: This study presents numerical simulations that demonstrate the sensitivity of the computed displacement and pore-pressure fields to a priori estimates of the experimentally derived model parameters. In addition, experimental data collected in three poroelastic phantoms are used to assess the quantitative accuracy of MR poroelastographic imaging through comparisons with both quasistatic and dynamic mechanical tests. Results: The results indicate hydraulic conductivity to be the dominant parameter influencing the deformation behavior of poroelastic media under conditions applied during MRE. MRPE estimation of the matrix shear modulus was bracketed by the values determined from independent quasistatic and dynamic mechanical measurements as expected, whereas the contrast ratios for embedded inclusions were quantitatively similar (10%–15% difference between the reconstructed images and the mechanical tests). Conclusions: The findings suggest that the addition of hydraulic conductivity and a viscoelastic solid component as parameters in the reconstruction may be warranted. PMID:20831058
Statistical fusion of continuous labels: identification of cardiac landmarks
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.
2011-03-01
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle with the left ventricle in CINE cardiac data.
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
Transport-phenomena-based heat transfer and fluid flow calculations in the weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, the values of which are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
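The bi-directional identification loop can be caricatured with a toy forward model in place of the 3-D heat transfer and fluid flow computation. The algebraic "weld model", parameter bounds, and GA settings below are all invented for illustration; only the structure (real-coded GA driving repeated forward-model evaluations to match measured pool dimensions) mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the heat-transfer model: weld pool width and depth as
# simple functions of arc efficiency (eta) and an effective-conductivity
# factor (fk). The real model would be a 3-D CFD computation.
def weld_model(eta, fk):
    width = 8.0 * np.sqrt(eta / fk)
    depth = 5.0 * eta / np.sqrt(fk)
    return np.array([width, depth])

target = weld_model(0.75, 2.0)           # "measured" pool dimensions

def fitness(ind):
    return -np.sum((weld_model(*ind) - target) ** 2)

# Minimal real-coded GA: tournament selection, blend crossover, mutation.
lo, hi = np.array([0.3, 0.5]), np.array([1.0, 4.0])
pop = rng.uniform(lo, hi, size=(40, 2))
for _ in range(100):
    fit = np.array([fitness(p) for p in pop])
    idx = [max(rng.integers(0, 40, 2), key=lambda i: fit[i]) for _ in range(40)]
    parents = pop[idx]
    beta = rng.uniform(-0.25, 1.25, size=(40, 2))    # BLX-style blend
    children = beta * parents + (1 - beta) * parents[::-1]
    children += 0.02 * rng.standard_normal((40, 2))  # mutation
    pop = np.clip(children, lo, hi)
best = max(pop, key=fitness)
print(best, -fitness(best))
```

Each fitness evaluation here is trivial; in the real integrated model every evaluation is a full 3-D simulation, which is why the GA's population size and generation count must be chosen economically.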
Evaluating detection and estimation capabilities of magnetometer-based vehicle sensors
NASA Astrophysics Data System (ADS)
Slater, David M.; Jacyna, Garry M.
2013-05-01
In an effort to secure the northern and southern United States borders, MITRE has been tasked with developing Modeling and Simulation (M&S) tools that accurately capture the mapping between algorithm-level Measures of Performance (MOP) and system-level Measures of Effectiveness (MOE) for current/future surveillance systems deployed by the Customs and Border Protection Office of Technology Innovations and Acquisitions (OTIA). This analysis is part of a larger M&S undertaking. The focus is on two MOPs for magnetometer-based Unattended Ground Sensors (UGS). UGS are placed near roads to detect passing vehicles and estimate properties of the vehicle's trajectory, such as bearing and speed. The first MOP considered is the probability of detection. We derive probabilities of detection for a network of sensors over an arbitrary number of observation periods and explore how the probability of detection changes when multiple sensors are employed. The performance of UGS is also evaluated based on the level of variance in the estimation of trajectory parameters. We derive the Cramer-Rao bounds for the variances of the estimated parameters in two cases: when no a priori information is known, and when the parameters are assumed to be Gaussian with known variances. Sample results show that UGS perform significantly better in the latter case.
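The two Cramer-Rao cases compare directly: with a Gaussian prior of covariance P0, the Fisher information J is augmented by the inverse prior covariance, so the bound can only shrink. A sketch with an invented measurement Jacobian and noise levels (the actual magnetometer model is not reproduced here):

```python
import numpy as np

# Cramer-Rao bounds for trajectory parameters theta = (speed, bearing)
# estimated from noisy sensor readings: J = H^T R^-1 H with no prior,
# and J + P0^-1 when a Gaussian prior with covariance P0 is available.
H = np.array([[1.0, 0.2],                # illustrative measurement Jacobian
              [0.5, 1.0],
              [0.3, 0.8]])
R = np.diag([0.04, 0.04, 0.09])          # measurement noise covariance
P0 = np.diag([0.25, 0.10])               # a priori parameter covariance

J = H.T @ np.linalg.inv(R) @ H
crb_flat = np.linalg.inv(J)              # no a priori information
crb_bayes = np.linalg.inv(J + np.linalg.inv(P0))

print(np.diag(crb_flat), np.diag(crb_bayes))
```

The diagonal of the Bayesian bound is strictly smaller, which is the formal version of the sample result quoted above: known prior variances always improve the achievable estimation variance.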
[Evaluation and prognosis of occupational risk in workers of nonferrous metallurgy enterprises].
Shliapnikov, D M; Kostarev, V G
2014-01-01
The article presents the results of a priori and a posteriori evaluation of occupational risk to workers' health. Categories of a priori occupational risk for the workers are estimated as high to very high (intolerable) risk. The findings show that work conditions in a nonferrous metallurgy workshop result in upper respiratory tract diseases (medium degree of occupational conditionality), and that the increased prevalence of such diseases among the workers is connected with length of service. The authors identified the priority factors for occupationally conditioned diseases. A promising approach in occupational medicine is the creation of methods to evaluate and forecast occupational risk, which make it possible to specify target parameters for prophylactic measures. For example, modelling the risk of occupationally conditioned diseases as a function of exposure to the occupational factor and length of service showed that decreasing chemical concentrations in workplace air to the maximum allowable levels lowers the risk of respiratory diseases from 14 to 6 cases per year for a length of service of 5 years, as well as the population risk.
NASA Astrophysics Data System (ADS)
Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon
This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is modified from the Two-Step Noise Reduction (TSNR) algorithm to suppress the background noise while preserving speech spectral components. In addition, a practical approach to determining the Echo Spectrum Variance (ESV) accurately is presented, based upon the assumption of a linear relationship between the power spectra of the far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals confirm that our improved AENS system efficiently mitigates acoustic echo and background noise while preserving speech quality and intelligibility.
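The two-step refinement that TSNR applies to the classical decision-directed estimator can be sketched for a single frequency bin. This follows the generic TSNR recipe (decision-directed estimate, Wiener gain, then a second-pass estimate with that gain applied) on synthetic spectra; it is not the paper's exact SDR formulation.

```python
import numpy as np

rng = np.random.default_rng(6)

# One frequency bin over time: synthetic clean-speech power, unit
# disturbance power, and the resulting noisy power spectrum.
frames = 200
speech_p = np.abs(rng.standard_normal(frames)) ** 2 * 4.0
noise_p = np.ones(frames)
noisy_p = speech_p + noise_p

alpha = 0.98
xi_dd = np.empty(frames)        # decision-directed a priori SNR/SDR estimate
xi_tsnr = np.empty(frames)      # second step (TSNR-style refinement)
prev_s2 = 0.0
for l in range(frames):
    gamma = noisy_p[l] / noise_p[l]                      # a posteriori ratio
    xi_dd[l] = alpha * prev_s2 / noise_p[l] + (1 - alpha) * max(gamma - 1, 0)
    g = xi_dd[l] / (1 + xi_dd[l])                        # Wiener gain, step 1
    # Step 2: recompute the a priori estimate with the step-1 gain applied,
    # which reduces the frame-delay bias of the decision-directed estimate.
    xi_tsnr[l] = (g ** 2) * noisy_p[l] / noise_p[l]
    g2 = xi_tsnr[l] / (1 + xi_tsnr[l])
    prev_s2 = (g2 ** 2) * noisy_p[l]                     # enhanced |S|^2
print(xi_tsnr[:5])
```

Replacing the noise power in the denominator with a combined echo-plus-noise disturbance power is, in spirit, how an a priori SDR (rather than SNR) estimate is obtained.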
Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials
NASA Astrophysics Data System (ADS)
Cameron, Stephen; Silvestre, Luis; Snelson, Stanley
2018-05-01
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807
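The simulated-annealing step for estimating a priori unknown transition rates can be sketched on a one-rate toy model; the synthetic "killing" data, cooling schedule, and proposal width are illustrative, not the whole-blood SBM.

```python
import math
import random

random.seed(7)

# Synthetic state-transition data: fraction of pathogens surviving,
# generated from a true rate k = 0.5 per hour plus measurement noise.
k_true = 0.5
ts = [0.5 * i for i in range(20)]
data = [math.exp(-k_true * t) + random.gauss(0.0, 0.01) for t in ts]

def cost(k):
    return sum((math.exp(-k * t) - y) ** 2 for t, y in zip(ts, data))

# Simulated annealing over the a priori unknown rate constant:
# always accept improvements, accept worsenings with probability
# exp(-delta/T), and cool T geometrically.
k, c = 2.0, cost(2.0)
best_k, best_c = k, c
T = 1.0
for step in range(5000):
    cand = k + random.gauss(0.0, 0.1)
    if cand > 0:
        cc = cost(cand)
        if cc < c or random.random() < math.exp(-(cc - c) / T):
            k, c = cand, cc
            if c < best_c:
                best_k, best_c = k, c
    T *= 0.999
print(best_k)
```

With many coupled rates, as in the SBM above, the same scheme applies with a vector proposal; the subsequent ABM then inherits these rates, shrinking the space left for the local grid search.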
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.
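The transition-rate estimation by simulated annealing described above can be sketched in miniature. Everything here is illustrative: the two-rate toy model, the names `k_phago`/`k_kill`, and the linear cooling schedule are our assumptions, not the authors' actual state-based model.

```python
import math
import random

def simulate(k_phago, k_kill, t_max=10.0, dt=0.1):
    """Toy two-rate state model (Euler steps): extracellular fungi E are
    phagocytosed at rate k_phago; phagocytosed fungi P are killed at
    rate k_kill, accumulating in K."""
    E, P, K = 100.0, 0.0, 0.0
    traj, t = [], 0.0
    while t <= t_max + 1e-9:
        traj.append((t, E, P, K))
        E, P, K = (E - k_phago * E * dt,
                   P + (k_phago * E - k_kill * P) * dt,
                   K + k_kill * P * dt)
        t += dt
    return traj

def sse(params, data):
    """Least-squares distance between simulated and observed E and P curves."""
    sim = simulate(*params)
    return sum((sE - dE) ** 2 + (sP - dP) ** 2
               for (_, sE, sP, _), (_, dE, dP, _) in zip(sim, data))

def simulated_annealing(data, n_iter=5000, t0=50.0, step=0.05, seed=0):
    """Estimate the two transition rates by simulated annealing with a
    linear cooling schedule and Gaussian proposal steps."""
    rng = random.Random(seed)
    cur = [rng.uniform(0.01, 1.0), rng.uniform(0.01, 1.0)]
    cur_cost = sse(cur, data)
    best, best_cost = list(cur), cur_cost
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-9
        cand = [max(1e-4, p + rng.gauss(0.0, step)) for p in cur]
        cost = sse(cand, data)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
    return best
```

On noiseless synthetic data the annealer recovers the generating rates; on real calibration problems the cost would compare against experimental kinetics instead.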
A variant of sparse partial least squares for variable selection and data exploration.
Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina
2014-01-01
When data are sparse and/or predictors multicollinear, the current implementation of sparse partial least squares (SPLS) neither gives estimates for non-selected predictors nor provides a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits an SPLS model for all tuning parameter values across a set grid. The percentage of time a given predictor is chosen is recorded, as well as its average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed that variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.
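The "all-possible" idea — fitting a sparse model at every tuning value on a grid and recording each predictor's selection frequency and average non-zero estimate — can be sketched as follows. As a stand-in for SPLS (no SPLS implementation is assumed here), a minimal lasso solved by iterative soft-thresholding (ISTA) plays the role of the sparse fitter:

```python
import numpy as np

def lasso_ista(X, y, alpha, n_iter=500):
    """Minimal lasso solver via iterative soft-thresholding (ISTA)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)
    return beta

def all_possible_selection(X, y, alphas):
    """For every tuning value, note which predictors are selected and
    accumulate their non-zero estimates, mimicking the 'all-possible' idea."""
    p = X.shape[1]
    chosen = np.zeros(p)
    coef_sum = np.zeros(p)
    for a in alphas:
        beta = lasso_ista(X, y, a)
        sel = beta != 0
        chosen += sel
        coef_sum += np.where(sel, beta, 0.0)
    pct = chosen / len(alphas)                 # selection frequency per predictor
    avg = np.divide(coef_sum, chosen,          # average non-zero estimate
                    out=np.zeros(p), where=chosen > 0)
    return pct, avg
```

As in the abstract, strongly associated predictors should be selected across more of the grid and carry larger average estimates than null predictors.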
Differential surface models for tactile perception of shape and on-line tracking of features
NASA Technical Reports Server (NTRS)
Hemami, H.
1987-01-01
Tactile perception of shape involves an on-line controller and a shape perceptor. The purpose of the on-line controller is to maintain gliding or rolling contact with the surface, collect information, or track specific features of the surface, such as edges of a certain sharpness. The shape perceptor uses the information to perceive, estimate the parameters of, or recognize the shape. The differential surface model depends on the information collected and on the a priori information known about the robot and its physical parameters. These differential models are certain functionals that are projections of the dynamics of the robot onto the surface gradient or onto the tangent plane. Some differential properties may be measured directly with present-day tactile sensors; others may have to be computed indirectly from measurements; still others may constitute design objectives for distributed tactile sensors of the future. A parameterization of the surface leads to linear and nonlinear sequential parameter estimation techniques for identification of the surface. Many interesting compromises between measurement and computation are possible.
NASA Technical Reports Server (NTRS)
Kuchynka, P.; Laskar, J.; Fienga, A.
2011-01-01
Mars ranging observations are available over the past 10 years with an accuracy of a few meters. Such precise measurements of the Earth-Mars distance provide valuable constraints on the masses of the asteroids perturbing both planets. Today more than 30 asteroid masses have thus been estimated from planetary ranging data (see [1] and [2]). Obtaining unbiased mass estimates is nevertheless difficult. Various systematic errors can be introduced by imperfect reduction of spacecraft tracking observations to planetary ranging data. The large number of asteroids and the limited a priori knowledge of their masses are also obstacles to parameter selection. Fitting the mass of a negligible perturber in a model, or conversely omitting a significant perturber, will introduce substantial bias into the determined asteroid masses. In this communication, we investigate a simplified version of the mass determination problem. Instead of planetary ranging observations from spacecraft or radar data, we consider synthetic ranging observations generated with the INPOP [2] ephemeris for a test model containing 25000 asteroids. We then suggest a method for optimal parameter selection and estimation in this simplified framework.
Multispectral guided fluorescence diffuse optical tomography using upconverting nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svenmarker, Pontus, E-mail: pontus.svenmarker@physics.umu.se; Department of Physics, Umeå University, SE-901 87 Umeå; Centre for Microbial Research
2014-02-17
We report on improved image detectability for fluorescence diffuse optical tomography using upconverting nanoparticles doped with rare-earth elements. Core-shell NaYF₄:Yb³⁺/Er³⁺@NaYF₄ upconverting nanoparticles were synthesized through a stoichiometric method. The Yb³⁺/Er³⁺ sensitizer-activator pair yielded two anti-Stokes shifted fluorescence emission bands at 540 nm and 660 nm, here used to estimate a priori the fluorescence source depth with sub-millimeter precision. A spatially varying regularization incorporated the a priori fluorescence source depth estimate into the tomography reconstruction scheme. Tissue phantom experiments showed both improved resolution and contrast in the reconstructed images as compared to not using any a priori information.
Contribution of Apollo lunar photography to the establishment of selenodetic control
NASA Technical Reports Server (NTRS)
Dermanis, A.
1975-01-01
Among the various types of available data relevant to the establishment of geometric control on the moon, the only one covering significant portions of the lunar surface (20%) with sufficient information content is lunar photography, taken in proximity to the moon from lunar orbiters. The idea of free geodetic networks is introduced as a tool for the statistical comparison of the geometric aspects of the various data used. Methods were developed for updating the statistics of the observations and the a priori parameter estimates to obtain statistically consistent solutions by means of the optimum relative weighting concept.
Joint constraints on galaxy bias and σ₈ through the N-pdf of the galaxy number density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio
We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ₈). It also provides the proper framework to perform model selection between two competitive hypotheses. The parameter estimation capabilities of the N-pdf are demonstrated with SDSS-like simulations (both ideal log-normal simulations and mocks obtained from Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄₈ = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30 h⁻¹ Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.
Estimating clinical chemistry reference values based on an existing data set of unselected animals.
Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe
2008-11-01
In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, collecting and analysing such a large number of samples is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and used to determine reference intervals for biochemical parameters of farm animals using an existing laboratory data set. The method was based on the detection and removal of outliers to obtain, from the existing data set, a large sample of animals likely to be healthy. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. The method may also be useful for the determination of reference intervals for different species, ages and gender.
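A minimal version of such an a posteriori procedure — iterative outlier removal from the unselected data set, then the central 95% of the remaining "likely healthy" sample — might look like this. Tukey's fences are our choice of outlier rule for illustration, not necessarily the one used in the paper:

```python
import numpy as np

def a_posteriori_reference_interval(values, max_rounds=10):
    """Estimate a 95% reference interval from an unselected data set by
    iteratively removing outliers (Tukey's fences), then taking the
    2.5th and 97.5th percentiles of the retained sample."""
    x = np.sort(np.asarray(values, dtype=float))
    for _ in range(max_rounds):
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        keep = (x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)
        if keep.all():          # converged: no outliers left
            break
        x = x[keep]
    lo, hi = np.percentile(x, [2.5, 97.5])
    return lo, hi, x.size
```

On a mixture of mostly healthy values with a contaminated tail, the fences strip the diseased subgroup and the interval approaches the one an a priori healthy-only sample would give.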
Ankowski, Artur M.; Benhar, Omar; Coloma, Pilar; ...
2015-10-22
To be able to achieve their physics goals, future neutrino-oscillation experiments will need to reconstruct the neutrino energy with very high accuracy. In this work, we analyze how the energy reconstruction may be affected by realistic detection capabilities, such as energy resolutions, efficiencies, and thresholds. This allows us to estimate how well the detector performance needs to be determined a priori in order to avoid a sizable bias in the measurement of the relevant oscillation parameters. We compare the kinematic and calorimetric methods of energy reconstruction in the context of two ν_μ → ν_μ disappearance experiments operating in different energy regimes. For the calorimetric reconstruction method, we find that the detector performance has to be estimated with an O(10%) accuracy to avoid a significant bias in the extracted oscillation parameters. In contrast, for kinematic energy reconstruction, the results exhibit less sensitivity to an overestimation of the detector capabilities.
GenSSI 2.0: multi-experiment structural identifiability analysis of SBML models.
Ligon, Thomas S; Fröhlich, Fabian; Chis, Oana T; Banga, Julio R; Balsa-Canto, Eva; Hasenauer, Jan
2018-04-15
Mathematical modeling with ordinary differential equations (ODEs) is used in systems biology to improve the understanding of dynamic biological processes. The parameters of ODE models are usually estimated from experimental data. To analyze a priori the uniqueness of the solution of the estimation problem, structural identifiability analysis methods have been developed. We introduce GenSSI 2.0, an advancement of the software toolbox GenSSI (Generating Series for testing Structural Identifiability). GenSSI 2.0 is the first structural identifiability analysis toolbox to implement Systems Biology Markup Language import, state/parameter transformations and multi-experiment structural identifiability analysis. In addition, GenSSI 2.0 supports a range of MATLAB versions and is computationally more efficient than its previous version, enabling the analysis of more complex models. GenSSI 2.0 is an open-source MATLAB toolbox, available at https://github.com/genssi-developer/GenSSI. Contact: thomas.ligon@physik.uni-muenchen.de or jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.
Rutterford, Clare; Taljaard, Monica; Dixon, Stephanie; Copas, Andrew; Eldridge, Sandra
2015-06-01
To assess the quality of reporting and accuracy of a priori estimates used in sample size calculations for cluster randomized trials (CRTs). We reviewed 300 CRTs published between 2000 and 2008. The prevalence of reporting sample size elements from the 2004 CONSORT recommendations was evaluated and a priori estimates compared with those observed in the trial. Of the 300 trials, 166 (55%) reported a sample size calculation. Only 36 of 166 (22%) reported all recommended descriptive elements. Elements specific to CRTs were the worst reported: a measure of within-cluster correlation was specified in only 58 of 166 (35%). Only 18 of 166 articles (11%) reported both a priori and observed within-cluster correlation values. Except in two cases, observed within-cluster correlation values were either close to or less than a priori values. Even with the CONSORT extension for cluster randomization, the reporting of sample size elements specific to these trials remains below that necessary for transparent reporting. Journal editors and peer reviewers should implement stricter requirements for authors to follow CONSORT recommendations. Authors should report observed and a priori within-cluster correlation values to enable comparisons between these over a wider range of trials.
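The within-cluster correlation values whose reporting is reviewed above enter a CRT sample size through the design effect 1 + (m − 1)·ICC. A hedged sketch, assuming a two-arm difference in means with the usual normal approximation (our choice of formula, not the review's own calculation):

```python
import math
from statistics import NormalDist

def crt_sample_size(delta, sd, icc, cluster_size, alpha=0.05, power=0.8):
    """Per-arm sample size for a cluster randomized trial: the individually
    randomized n is inflated by the design effect 1 + (m - 1) * ICC."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    # individually randomized n per arm for a difference in means
    n_ind = 2 * (z_a + z_b) ** 2 * (sd / delta) ** 2
    deff = 1 + (cluster_size - 1) * icc   # design effect, equal cluster sizes
    n_per_arm = math.ceil(n_ind * deff)
    clusters_per_arm = math.ceil(n_ind * deff / cluster_size)
    return n_per_arm, clusters_per_arm
```

For example, detecting half a standard deviation with clusters of 20 and an ICC of 0.05 roughly doubles the individually randomized sample size, which is why an a priori ICC that underestimates the observed one undermines power.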
Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo
2014-11-01
One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness.
NASA Astrophysics Data System (ADS)
Spicakova, H.; Plank, L.; Nilsson, T.; Böhm, J.; Schuh, H.
2011-07-01
The Vienna VLBI Software (VieVS) has been developed at the Institute of Geodesy and Geophysics at TU Vienna since 2008. In this presentation, we introduce the module Vie_glob, the part of VieVS that allows parameter estimation from multiple VLBI sessions in a so-called global solution. We focus on the determination of the terrestrial reference frame (TRF) using all suitable VLBI sessions since 1984. We compare different analysis options, such as the choice of loading corrections or of the model for the tropospheric delays, and show the effect on station heights when atmosphere loading corrections are neglected at the observation level. Time series of station positions (using a previously determined TRF as a priori values) are presented and compared to other estimates of site positions from individual IVS (International VLBI Service for Geodesy and Astrometry) Analysis Centers.
Least Squares Solution of Small Sample Multiple-Master PSInSAR System
NASA Astrophysics Data System (ADS)
Zhang, Lei; Ding, Xiao Li; Lu, Zhong
2010-03-01
In this paper we propose a least squares based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on arcs constructed between two nearby points at which there are no phase ambiguities. During the estimation an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of the correlated observations in the model. The parameters at the points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified with a set of simulated data.
Using the GOCE star trackers for validating the calibration of its accelerometers
NASA Astrophysics Data System (ADS)
Visser, P. N. A. M.
2017-12-01
A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations that was originally tested by an end-to-end simulation, has been updated and applied to real data from GOCE. It is shown that the method provides estimates of scale factors for all three axes of the six GOCE accelerometers that are consistent at a level significantly better than 0.01 compared to the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.
NASA Astrophysics Data System (ADS)
Suvanto, K.
1990-07-01
Statistical inversion theory is employed to estimate parameter uncertainties in incoherent scatter radar studies of non-Maxwellian ionospheric plasma. Measurement noise and the inexact nature of the plasma model are considered as potential sources of error. In most of the cases investigated here, it is not possible to determine electron density, line-of-sight ion and electron temperatures, ion composition, and two non-Maxwellian shape factors simultaneously. However, if the molecular ion velocity distribution is highly non-Maxwellian, all these quantities can sometimes be retrieved from the data. This theoretical result supports the validity of the only successful non-Maxwellian, mixed-species fit discussed in the literature. A priori information on one of the parameters, e.g., the electron density, often reduces the parameter uncertainties significantly and makes composition fits possible even if the six-parameter fit cannot be performed. However, small (less than 0.5) non-Maxwellian shape factors remain difficult to distinguish.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis
NASA Astrophysics Data System (ADS)
Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca
2017-11-01
Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
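The Pi-group counting behind the restated Buckingham Pi theorem can be sketched for the pipe-flow working example: the number of dimensionless groups equals the number of variables minus the rank of the dimension matrix, and a basis of groups spans its null space. The variable list below (velocity, diameter, density, viscosity, pressure gradient) is the standard smooth-pipe set, chosen by us for illustration:

```python
import numpy as np

# Columns: V, D, rho, mu, dp/dx ; rows: exponents of mass, length, time
dims = np.array([
    [0,  0,  1,  1,  1],   # M
    [1,  1, -3, -1, -2],   # L
    [-1, 0,  0, -1, -2],   # T
])

rank = np.linalg.matrix_rank(dims)
n_pi = dims.shape[1] - rank   # Buckingham Pi: number of dimensionless groups

# A basis for the dimensionless products spans the null space of `dims`;
# the trailing right-singular vectors provide one such basis.
_, s, vt = np.linalg.svd(dims)
null_basis = vt[rank:]        # each row: exponents of one dimensionless group
assert np.allclose(dims @ null_basis.T, 0)   # every group is dimensionless
```

Here `n_pi` comes out to 2 (spanning, e.g., the Reynolds number and a friction factor). A hidden parameter would manifest as data that fail to collapse onto a function of these two groups alone, which is what the hypothesis test in the abstract is designed to detect.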
[Population pharmacokinetics applied to optimising cisplatin doses in cancer patients].
Ramón-López, A; Escudero-Ortiz, V; Carbonell, V; Pérez-Ruixo, J J; Valenzuela, B
2012-01-01
To develop and internally validate a population pharmacokinetics model for cisplatin and assess its prediction capacity for personalising doses in cancer patients. Cisplatin plasma concentrations in forty-six cancer patients were used to determine the pharmacokinetic parameters of a two-compartment pharmacokinetic model implemented in NONMEM VI software. Pharmacokinetic parameter identification capacity was assessed using the parametric bootstrap method and the model was validated using the nonparametric bootstrap method and standardised visual and numerical predictive checks. The final model's prediction capacity was evaluated in terms of accuracy and precision during the first (a priori) and second (a posteriori) chemotherapy cycles. Mean population cisplatin clearance is 1.03 L/h with an interpatient variability of 78.0%. Estimated distribution volume at steady state was 48.3 L, with inter- and intrapatient variabilities of 31.3% and 11.7%, respectively. Internal validation confirmed that the population pharmacokinetics model is appropriate to describe changes over time in cisplatin plasma concentrations, as well as its variability in the study population. The accuracy and precision of a posteriori prediction of cisplatin concentrations improved by 21% and 54% compared to a priori prediction. The population pharmacokinetic model developed adequately described the changes in cisplatin plasma concentrations in cancer patients and can be used to optimise cisplatin dosing regimens accurately and precisely.
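The a priori versus a posteriori individualization contrasted above can be sketched with a deliberately simplified stand-in: a one-compartment IV bolus replaces the paper's two-compartment cisplatin model, and a crude grid search replaces NONMEM's MAP estimation. All parameter values are illustrative:

```python
import math

def conc(dose, cl, v, t):
    """One-compartment IV bolus: C(t) = (dose / V) * exp(-CL/V * t)."""
    return dose / v * math.exp(-cl / v * t)

def map_clearance(dose, v, t_obs, c_obs, cl_pop, omega, sigma):
    """A posteriori (MAP) clearance given one observed concentration,
    assuming log-normal between-patient variability (omega) and
    proportional residual error (sigma); minimized by grid search."""
    best_cl, best_obj = cl_pop, float("inf")
    for i in range(2000):
        cl = cl_pop * math.exp(-3 * omega + i * (6 * omega) / 1999)
        pred = conc(dose, cl, v, t_obs)
        obj = ((c_obs - pred) / (sigma * pred)) ** 2 \
            + (math.log(cl / cl_pop) / omega) ** 2
        if obj < best_obj:
            best_cl, best_obj = cl, obj
    return best_cl
```

With even one measured concentration, the a posteriori prediction (population prior pulled toward the observation) should beat the pure a priori one, mirroring the 21%/54% accuracy and precision gains reported.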
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model had to be created and landing tests had to be simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation of simulated and estimated data under different runway conditions is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to be a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are introduced to white noise and when crosswind is introduced to the simulation.
The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, it shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to keep the change in the average coefficient of friction below 1%. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2018-06-01
The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
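The frequency-dependent bias that the realized stochastic volatility model absorbs into its correction parameter can be reproduced in a toy simulation: i.i.d. microstructure noise inflates realized variance more, the finer the sampling. This sketch assumes a pure random walk for the efficient log-price; the noise level and horizon are illustrative:

```python
import math
import random

def simulate_prices(n_sec, sigma_day, noise_sd, seed=1):
    """Observed log-price = efficient random walk + i.i.d. microstructure noise."""
    rng = random.Random(seed)
    step_sd = sigma_day / math.sqrt(n_sec)
    p, obs = 0.0, []
    for _ in range(n_sec):
        p += rng.gauss(0.0, step_sd)          # efficient price increment
        obs.append(p + rng.gauss(0.0, noise_sd))  # add observation noise
    return obs

def realized_variance(prices, every):
    """Sum of squared returns sampled every `every` ticks."""
    sampled = prices[::every]
    return sum((b - a) ** 2 for a, b in zip(sampled, sampled[1:]))
```

At tick frequency the noise contributes roughly 2nσ²_noise on top of the true integrated variance, while coarse (e.g., 5-minute) sampling stays close to it — the gap a bias-correction parameter must soak up at high frequencies.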
2011-01-01
Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatments design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences in statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614
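The core Box-Cox step above — choosing the power transformation parameter by maximizing a profile log-likelihood — can be sketched with a simple grid search. This is our minimal implementation, not the study's mixed-model machinery:

```python
import numpy as np

def boxcox_loglik(y, lam):
    """Profile log-likelihood of the Box-Cox parameter for data y > 0:
    -n/2 * log(sigma_hat^2) + (lam - 1) * sum(log y)."""
    n = len(y)
    if abs(lam) < 1e-12:
        z = np.log(y)
    else:
        z = (y ** lam - 1) / lam
    var = z.var()          # MLE of the error variance for this lambda
    return -0.5 * n * np.log(var) + (lam - 1) * np.log(y).sum()

def boxcox_lambda(y, grid=np.linspace(-2, 2, 401)):
    """Estimate lambda by maximizing the profile log-likelihood on a grid."""
    ll = [boxcox_loglik(y, lam) for lam in grid]
    return grid[int(np.argmax(ll))]
```

For log-normally distributed data the estimate lands near λ = 0 (the log transform), which is the sanity check the simulation studies above rely on; in the paper, structural factors may or may not be included when this likelihood is maximized.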
Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy
2011-08-19
Required experimental accuracy to select between supersymmetrical models
NASA Astrophysics Data System (ADS)
Grellscheid, David
2004-03-01
We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).
A Comparison of Methods for a Priori Bias Correction in Soil Moisture Data Assimilation
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.
2011-01-01
Data assimilation is being increasingly used to merge remotely sensed land surface variables such as soil moisture, snow and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here, a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (i) parameter estimation to calibrate the land model to the climatology of the soil moisture observations, and (ii) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
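Approach (ii), scaling the observations to the model's climatology, is in its simplest first-and-second-moment form a one-line rescaling; a sketch with synthetic numbers (the means and standard deviations are illustrative, not from the actual experiments):

```python
import numpy as np

def scale_to_model_climatology(obs, model):
    # Rescale observations so their mean and variance match the model
    # climatology, removing systematic bias before assimilation
    return model.mean() + (obs - obs.mean()) * (model.std() / obs.std())

rng = np.random.default_rng(0)
model = 0.25 + 0.05 * rng.standard_normal(1000)  # model soil moisture climatology
obs = 0.35 + 0.10 * rng.standard_normal(1000)    # biased, noisier observations

scaled = scale_to_model_climatology(obs, model)
print(round(float(scaled.mean()), 3), round(float(scaled.std()), 3))
```

Operational systems often generalize this to full CDF matching, but the moment-matching version already removes the mean bias that assimilation cannot tolerate.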
State estimation with incomplete nonlinear constraint
NASA Astrophysics Data System (ADS)
Huang, Yuan; Wang, Xueying; An, Wei
2017-10-01
A problem of state estimation with a new type of constraint, termed an incomplete nonlinear constraint, is considered. Targets often move along curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the positions of the sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this a priori knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse-fitting algorithm is adopted to incorporate the a priori knowledge by estimating the radius of the trajectory. The fitting problem is transformed into a nonlinear estimation problem, and the estimated ellipse function is used to approximate the nonlinear constraint. Then, the typical nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.
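The radius-estimation step can be illustrated with an algebraic least-squares fit; here a circle (Kasa) fit stands in for the paper's ellipse fitting, on a simulated curved-road trajectory:

```python
import numpy as np

def fit_circle(x, y):
    # Kasa algebraic circle fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    # solved as the linear system [x y 1] @ [a b c]^T = x^2 + y^2
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Noisy quarter-arc "road" of radius 100 (units arbitrary, data simulated)
rng = np.random.default_rng(1)
t = np.linspace(0.0, np.pi / 2.0, 50)
x = 100.0 * np.cos(t) + 0.1 * rng.standard_normal(50)
y = 100.0 * np.sin(t) + 0.1 * rng.standard_normal(50)

cx, cy, r = fit_circle(x, y)
print(round(float(r), 1))
```

The fitted curve then serves as the (approximate) nonlinear constraint handed to the constrained filter.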
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed initially for photogrammetry for the calibration of the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming and involve the measurement of calibrated patterns on planes before the actual object can continue to be measured after a camera or projector has been moved, and hence they do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of a scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the proposed distribution for noise-free data plays a key role in the performance of the MMSE estimator, an a priori distribution for the pdf of noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters which are able to capture the heavy-tailed property and inter- and intrascale dependencies of coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained based on using Gaussian/two-sided Rayleigh noise distributions and homomorphic/nonhomomorphic models. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.
2004-01-01
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
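A first-cut a priori evaluation of the clustering penalty can use the Kish design effect; the cluster size and within-cluster correlation below are illustrative values, not NLCD figures:

```python
def design_effect(m, rho):
    # Kish design effect: variance inflation from sampling m elements per
    # cluster when elements share within-cluster correlation rho
    return 1.0 + (m - 1) * rho

# 1000 reference pixels drawn in clusters of 25; rho = 0.2 is an
# illustrative value for positively spatially correlated classification error
n, m, rho = 1000, 25, 0.2
deff = design_effect(m, rho)
n_eff = n / deff  # effective sample size after the clustering penalty
print(round(deff, 1), round(n_eff, 1))
```

A design effect near 6 means the clustered sample of 1000 carries roughly the precision of about 170 independent pixels, which is exactly the kind of trade-off the protocol weighs against the cost savings of clustering.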
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
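A minimal random-walk Metropolis sampler shows the core idea of drawing a Markov chain from a posterior without assuming its functional form; the toy model (a Gaussian mean with known variance) is illustrative, not the quantum-state likelihood:

```python
import numpy as np

def metropolis(logpost, x0, n_steps, step, rng):
    # Random-walk Metropolis: the chain's stationary distribution is the
    # posterior, with no a priori assumption about its shape
    chain = np.empty(n_steps)
    x, lp = x0, logpost(x0)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

rng = np.random.default_rng(7)
data = 2.0 + 0.5 * rng.standard_normal(200)       # synthetic measurements
logpost = lambda mu: -0.5 * np.sum((data - mu) ** 2) / 0.25  # flat prior, sigma=0.5

chain = metropolis(logpost, 0.0, 20000, 0.1, rng)
post_mean = float(chain[5000:].mean())            # discard burn-in
print(round(post_mean, 2))
```

Marginal distributions, uncertainties, and derived quantities (such as the purity in the paper's application) all follow directly from histogramming functions of the chain.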
Adams, Vanessa M.; Segan, Daniel B.; Pressey, Robert L.
2011-01-01
Many governments have recently gone on record promising large-scale expansions of protected areas to meet global commitments such as the Convention on Biological Diversity. As systems of protected areas are expanded to be more comprehensive, they are more likely to be implemented if planners have realistic budget estimates so that appropriate funding can be requested. Estimating financial budgets a priori must acknowledge the inherent uncertainties and assumptions associated with key parameters, so planners should recognize these uncertainties by estimating ranges of potential costs. We explore the challenge of budgeting a priori for protected area expansion in the face of uncertainty, specifically considering the future expansion of protected areas in Queensland, Australia. The government has committed to adding ∼12 million ha to the reserve system, bringing the total area protected to 20 million ha by 2020. We used Marxan to estimate the costs of potential reserve designs with data on actual land value, market value, transaction costs, and land tenure. With scenarios, we explored three sources of budget variability: size of biodiversity objectives; subdivision of properties; and legal acquisition routes varying with tenure. Depending on the assumptions made, our budget estimates ranged from $214 million to $2.9 billion. Estimates were most sensitive to assumptions made about legal acquisition routes for leasehold land. Unexpected costs (costs encountered by planners when real-world costs deviate from assumed costs) responded non-linearly to inability to subdivide and percentage purchase of private land. A financially conservative approach - one that safeguards against large cost increases while allowing for potential financial windfalls - would involve less optimistic assumptions about acquisition and subdivision to allow Marxan to avoid expensive properties where possible while meeting conservation objectives. 
We demonstrate how a rigorous analysis can inform discussions about the expansion of systems of protected areas, including the identification of factors that influence budget variability. PMID:21980459
NASA Astrophysics Data System (ADS)
Balidakis, Kyriakos; Nilsson, Tobias; Heinkelmann, Robert; Glaser, Susanne; Zus, Florian; Deng, Zhiguo; Schuh, Harald
2017-04-01
The quality of the parameters estimated by global navigation satellite systems (GNSS) and very long baseline interferometry (VLBI) is degraded by erroneous meteorological observations applied to model the propagation delay in the electrically neutral atmosphere. For early VLBI sessions with poor geometry, unsuitable constraints imposed on the a priori tropospheric gradients are an additional source of error in VLBI analysis. Therefore, climate change indicators deduced from the geodetic analysis, such as the long-term precipitable water vapor (PWV) trends, are strongly affected. In this contribution we investigate the impact of different modeling and parameterization of the propagation delay in the troposphere on the estimates of long-term PWV trends from geodetic VLBI analysis results. We address the influence of the meteorological data source, and of the a priori non-hydrostatic delays and gradients employed in the VLBI processing, on the estimated PWV trends. In particular, we assess the effect of employing temperature and pressure from (i) homogenized in situ observations, (ii) the model levels of the ERA Interim reanalysis numerical weather model, and (iii) our own blind model in the style of GPT2w with enhanced parameterization, calculated using the latter data set. Furthermore, we utilize non-hydrostatic delays and gradients estimated from (i) a GNSS reprocessing at GeoForschungsZentrum Potsdam, rigorously considering tropospheric ties, and (ii) direct ray-tracing through ERA Interim, as additional observations. To evaluate the above, the least-squares module of the VieVS@GFZ VLBI software was appropriately modified. Additionally, we study the noise characteristics of the non-hydrostatic delays and gradients estimated from our VLBI and GNSS analyses as well as from ray-tracing. We have modified the Theil-Sen estimator appropriately to robustly deduce PWV trends from VLBI, GNSS, ray-tracing and direct numerical integration in ERA Interim.
We disseminate all our solutions in the latest Tropo-SINEX format.
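The (unmodified) Theil-Sen estimator underlying the trend analysis is simply the median of all pairwise slopes, which makes it robust to gross outliers; a sketch on a synthetic trend series (the trend, noise, and outliers are illustrative, not PWV data):

```python
import numpy as np

def theil_sen_slope(t, y):
    # Median of all pairwise slopes: robust to outliers in the series
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    return float(np.median(slopes))

rng = np.random.default_rng(3)
t = np.arange(100, dtype=float)           # e.g. session epochs
y = 0.05 * t + rng.standard_normal(100)   # underlying trend of 0.05 per epoch
y[::17] += 15.0                           # a few gross outliers

slope = theil_sen_slope(t, y)
print(round(slope, 2))
```

An ordinary least-squares fit would be pulled noticeably by the injected outliers, while the median-of-slopes estimate stays close to the true trend.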
Relaxation limit of a compressible gas-liquid model with well-reservoir interaction
NASA Astrophysics Data System (ADS)
Solem, Susanne; Evje, Steinar
2017-02-01
This paper deals with the relaxation limit of a two-phase compressible gas-liquid model which contains a pressure-dependent well-reservoir interaction term of the form q (P_r - P) where q>0 is the rate of the pressure-dependent influx/efflux of gas, P is the (unknown) wellbore pressure, and P_r is the (known) surrounding reservoir pressure. The model can be used to study gas-kick flow scenarios relevant for various wellbore operations. One extreme case is when the wellbore pressure P is largely dictated by the surrounding reservoir pressure P_r. Formally, this model is obtained by deriving the limiting system as the relaxation parameter q in the full model tends to infinity. The main purpose of this work is to understand to what extent this case can be represented by a well-defined mathematical model for a fixed global time T>0. Well-posedness of the full model has been obtained in Evje (SIAM J Math Anal 45(2):518-546, 2013). However, as the estimates for the full model are dependent on the relaxation parameter q, new estimates must be obtained for the equilibrium model to ensure existence of solutions. By means of appropriate a priori assumptions and some restrictions on the model parameters, necessary estimates (low order and higher order) are obtained. These estimates that depend on the global time T together with smallness assumptions on the initial data are then used to obtain existence of solutions in suitable Sobolev spaces.
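Schematically, the formal limit works as follows (a textbook relaxation argument in our own notation, not the paper's estimates): for the well-reservoir source term to remain bounded as the rate grows, the wellbore pressure must lock onto the reservoir pressure,

```latex
% Relaxation limit: let q -> infinity in the well-reservoir source term.
% If q (P_r - P) stays bounded, then necessarily P -> P_r, and the bounded
% limit m acts as an unknown closure term in the limit (equilibrium) system:
P \;=\; P_r \;-\; \frac{1}{q}\,\bigl[\,q\,(P_r - P)\,\bigr]
  \;\xrightarrow[\;q \to \infty\;]{}\; P_r,
\qquad
q\,(P_r - P) \;\longrightarrow\; m .
```

This is why the q-dependent estimates for the full model cannot simply be reused: the limit system carries the constraint P = P_r together with a new unknown m, and needs its own a priori bounds.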
Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling
NASA Astrophysics Data System (ADS)
McCullough, C.; Bettadpur, S. V.
2016-12-01
Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
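Fully modeling colored observation noise amounts to generalized least squares with a full observation covariance; a small synthetic sketch (the AR(1) error model, noise level, and two-parameter fit are illustrative stand-ins for the long-period GRACE error correlations):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0.0, 1.0, n)
A = np.column_stack([np.ones(n), t])      # design matrix for [offset, rate]

# AR(1)-correlated ("colored") observation noise, unit marginal variance
rho = 0.9
e = np.empty(n)
e[0] = rng.standard_normal()
for i in range(1, n):
    e[i] = rho * e[i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
y = A @ np.array([1.0, 2.0]) + 0.1 * e    # true parameters [1, 2]

# Full observation covariance for AR(1): C_ij = sigma^2 * rho^|i-j|
C = 0.01 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Ci = np.linalg.inv(C)

# Generalized least squares: optimal weighting and realistic formal errors
cov_gls = np.linalg.inv(A.T @ Ci @ A)
x_gls = cov_gls @ (A.T @ Ci @ y)
print(np.round(x_gls, 1))
```

The key payoff mirrors the abstract: the formal covariance `cov_gls` reflects the correlated errors honestly, whereas an ordinary least-squares covariance computed under a white-noise assumption would be overly optimistic.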
Interiors of Enceladus and Rhea
NASA Technical Reports Server (NTRS)
Rappaport, N. J.; Iess, L.; Tortora, P.; Lunine, J. I.; Armstrong, J. W.; Asmar, S. W.; Somenzi, L.; Zingoni, F.
2006-01-01
Measurement method and data set: Gravity field parameters determined by means of range-rate measurements over multiple arcs across flybys. Optical imaging is not required when reliable a priori estimates of the spacecraft state vector are available. Interior of Enceladus: A density of 1605 +/- 14 kg/cu m, higher than pre-Cassini estimates, requires a substantial amount of rock and implies a warmer interior, enhancing the likelihood of differentiation of water from rock-metal (assuming no porosity). Assuming Io's mean density for the rock-metal component, one finds its fractional mass to be 0.52 +/- 0.06. There is evidence that Enceladus may be differentiated: a) areas devoid of craters must be geologically young; b) systems of ridges, fractures, and grooves indicate that the surface has been tectonically altered; c) viscous relaxation of craters has occurred; and d) the plumes near the south pole indicate venting of subsurface volatiles.
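The quoted rock-metal mass fraction follows from a two-component mass balance; a sketch with assumed component densities (the Io-like rock-metal density of 3528 kg/m^3 and water density of 1000 kg/m^3 are our illustrative assumptions, not values stated in the text):

```python
# Two-component mass balance for a rock-water mixture:
#   1/rho = f/rho_rock + (1 - f)/rho_water, solved for the rock fraction f
rho = 1605.0        # Enceladus mean density, kg/m^3 (from the text)
rho_rock = 3528.0   # assumed Io-like rock-metal density, kg/m^3
rho_water = 1000.0  # assumed water component density, kg/m^3

f = (1.0 / rho - 1.0 / rho_water) / (1.0 / rho_rock - 1.0 / rho_water)
print(round(f, 2))
```

With these assumptions the balance gives f near 0.53, consistent with the stated 0.52 +/- 0.06; varying the assumed component densities maps out the quoted uncertainty.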
NASA Astrophysics Data System (ADS)
Stark, Martin; Guckenberger, Reinhard; Stemmer, Andreas; Stark, Robert W.
2005-12-01
Dynamic atomic force microscopy (AFM) offers many opportunities for the characterization and manipulation of matter on the nanometer scale with a high temporal resolution. The analysis of time-dependent forces is basic for a deeper understanding of phenomena such as friction, plastic deformation, and surface wetting. However, the dynamic characteristics of the force sensor used for such investigations are determined by various factors such as material and geometry of the cantilever, detection alignment, and the transfer characteristics of the detector. Thus, for a quantitative investigation of surface properties by dynamic AFM an appropriate system identification procedure is required, characterizing the force sensor beyond the usual parameters spring constant, quality factor, and detection sensitivity. Measurement of the transfer function provides such a characterization that fully accounts for the dynamic properties of the force sensor. Here, we demonstrate the estimation of the transfer function in a bandwidth of 1 MHz from experimental data. To this end, we analyze the signal of the vibrations induced by snap-to-contact and snap-off-contact events. For the free cantilever, we determine both a parameter-free estimate [empirical transfer function estimate (ETFE)] and a parametric estimate of the transfer function. For the surface-coupled cantilever the ETFE is obtained. These identification procedures provide an intrinsic calibration as they dispense largely with a priori knowledge about the force sensor.
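The empirical transfer function estimate (ETFE) is simply the ratio of output and input spectra; a toy sketch with a known two-tap filter standing in for the cantilever dynamics (circular convolution keeps the demo exact; the real estimate uses the snap-to/snap-off transients):

```python
import numpy as np

def etfe(u, y):
    # Empirical transfer function estimate: ratio of output to input spectra
    return np.fft.rfft(y) / np.fft.rfft(u)

rng = np.random.default_rng(2)
u = rng.standard_normal(1024)         # broadband excitation
y = 0.5 * u + 0.5 * np.roll(u, 1)     # known "sensor": H(w) = 0.5 + 0.5 e^{-iw}

H = etfe(u, y)
# At a quarter of the sampling rate, |H| = |0.5 - 0.5i| = 0.5*sqrt(2)
print(round(float(abs(H[256])), 3))
```

A parametric estimate would instead fit a rational model (poles and zeros) to the same spectra, trading the ETFE's freedom from model assumptions for smoothness and noise rejection.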
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing measurements of the system with a priori information represented by the system's model. The method of modeling a complex system is described and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTMs) operating in parallel to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
Uncertainty Evaluation and Appropriate Distribution for the RDHM in the Rockies
NASA Astrophysics Data System (ADS)
Kim, J.; Bastidas, L. A.; Clark, E. P.
2010-12-01
The problems that hydrologic models have in properly reproducing the processes involved in mountainous areas, and in particular the Rocky Mountains, are widely acknowledged. Herein, we present an application of the National Weather Service RDHM distributed model over the Durango River basin in Colorado. We focus primarily on the assessment of the model prediction uncertainty associated with the parameter estimation and the comparison of the model performance using parameters obtained with a priori estimation following the procedure of Koren et al., and those obtained via inverse modeling using a variety of Markov chain Monte Carlo based optimization algorithms. The model evaluation is based on traditional procedures as well as non-traditional ones based on the use of shape matching functions, which are more appropriate for the evaluation of distributed information (e.g. Hausdorff distance, earth mover's distance). The variables used for the model performance evaluation are discharge (with internal nodes), snow cover and snow water equivalent. An attempt to establish the proper degree of distribution for the Durango basin with the RDHM model is also presented.
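The Hausdorff distance used for shape matching can be computed in a few lines; the two small point sets below are illustrative placeholders for, say, observed and simulated snow-cover outlines:

```python
import numpy as np

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two point sets: the largest
    # distance from a point in one set to its nearest neighbour in the other
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
B = np.array([[0.0, 0.1], [1.0, 0.0], [1.0, 1.5]])

h = hausdorff(A, B)
print(h)  # 0.5: driven by the worst-matched point pair
```

Unlike a cellwise error measure, this penalizes the single worst spatial mismatch, which is why it suits the evaluation of distributed fields.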
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
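The law of error propagation mentioned above is a similarity transform of the covariance matrix; a sketch in which the element covariance and the Jacobian of position with respect to the elements are illustrative placeholders, not real orbital partials:

```python
import numpy as np

# Linearized propagation of orbital-element uncertainty to position:
#   C_pos = J @ C_elem @ J.T, with J the partials of position w.r.t. elements
C_elem = np.diag([1e-6, 4e-6, 9e-6])   # illustrative element covariance
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.2],
              [0.3, 0.0, 1.0]])         # illustrative Jacobian

C_pos = J @ C_elem @ J.T
# The semi-axes of the 1-sigma positional uncertainty ellipsoid are the
# square roots of the eigenvalues of C_pos
sigmas = np.sqrt(np.linalg.eigvalsh(C_pos))
print(np.round(sigmas * 1e3, 2))
```

Propagating to a different epoch only changes J (via the state transition partials), which is how past and future uncertainty ellipsoids are obtained from one epoch's covariance.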
NASA Astrophysics Data System (ADS)
Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao
2018-01-01
Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool utilized to estimate fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered as an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach to describe the orthorhombic anisotropy utilizing the observable wide-azimuth seismic reflection data in a fractured reservoir with the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and linear-slip model, we first derive a perturbation in stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions of Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are reasonably estimated with the constraints of Cauchy a priori probability distribution and smooth initial models of model parameters to enhance the inversion resolution and the nonlinear iteratively reweighted least squares strategy. 
The synthetic examples containing a moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and the real data illustrate the inversion stabilities of orthorhombic anisotropy in a fractured reservoir.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
NASA Astrophysics Data System (ADS)
Sekhar, S. Chandra; Sreenivas, TV
2004-12-01
We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
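The zero-crossing principle behind the estimator can be sketched for a linear chirp: successive crossings span half a local period, so interpolated crossing times yield the IF directly (the sampling rate and chirp parameters are illustrative):

```python
import numpy as np

def zero_crossings(x):
    # Sample indices where the sign changes, refined by linear interpolation
    i = np.where(np.diff(np.signbit(x)))[0]
    return i + x[i] / (x[i] - x[i + 1])

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (500.0 * t + 500.0 * t**2))  # linear chirp, IF = 500 + 1000 t

zc = zero_crossings(x) / fs            # crossing times in seconds
f_est = 1.0 / (2.0 * np.diff(zc))      # each crossing gap spans half a period
t_mid = (zc[:-1] + zc[1:]) / 2.0

err = np.abs(f_est - (500.0 + 1000.0 * t_mid))
median_err = float(np.median(err))
print(round(median_err, 2))
```

The paper's contribution sits on top of this: fitting a local polynomial to the crossing-based IF samples over an adaptively chosen window, rather than using the raw half-period estimates.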
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2015-04-01
A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model that is specialized for alpine environments, but has been applied over a wide range of basins all over the world (see Kling et al., 2014, for an overview). Within this methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer-functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended in order to allow for a more meta-heuristic handling of the transfer-functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter-estimation scheme: The constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer-functions: Spline-based functions enable arbitrary forms of transfer-functions. This is of importance since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer-function itself. The contribution presents the results and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour.
Res., doi: 10.1029/2008WR007327 Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.
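The transfer-function idea can be sketched in a few lines; the linear TF, the clay-fraction predictor, the hyper-parameter values, and the validity bounds below are all hypothetical stand-ins for COSERO's actual TFs:

```python
import numpy as np

def transfer_function(clay, hyper):
    # Hypothetical TF: map sub-grid clay fraction to a storage parameter,
    # then clamp to an assumed physically valid range (the constraint scheme)
    a, b = hyper  # TF hyper-parameters, the only quantities calibrated
    raw = a + b * clay
    return np.clip(raw, 10.0, 500.0)  # assumed valid bounds, mm

# Fine-grid clay fractions within one model cell, upscaled by averaging
clay = np.array([0.10, 0.25, 0.40, 0.15])
cell_param = float(transfer_function(clay, hyper=(50.0, 400.0)).mean())
print(round(cell_param, 1))  # prints 140.0
```

Calibration then searches over the few hyper-parameters (a, b) against runoff data, rather than over one parameter per cell, which is what keeps the scheme transferable across scales; the spline extension replaces the linear form with a flexible curve while the clipping step enforces the constraints.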
Rodriguez, Brian D.
2017-03-31
This report summarizes the results of three-dimensional (3-D) resistivity inversion simulations that were performed to account for local 3-D distortion of the electric field in the presence of 3-D regional structure, without any a priori information on the actual 3-D distribution of the known subsurface geology. The methodology used a 3-D geologic model to create a 3-D resistivity forward (“known”) model that depicted the subsurface resistivity structure expected for the input geologic configuration. The calculated magnetotelluric response of the modeled resistivity structure was assumed to represent observed magnetotelluric data and was subsequently used as input into a 3-D resistivity inverse model that used an iterative 3-D algorithm to estimate 3-D distortions without any a priori geologic information. A publicly available inversion code, WSINV3DMT, was used for all of the simulated inversions, initially using the default parameters, and subsequently using adjusted inversion parameters. A semiautomatic approach of accounting for the static shift using various selections of the highest frequencies and initial models was also tested. The resulting 3-D resistivity inversion simulation was compared to the “known” model and the results evaluated. The inversion approach that produced the lowest misfit to the various local 3-D distortions was an inversion that employed an initial model volume resistivity that was nearest to the maximum resistivities in the near-surface layer.
Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia
2012-01-01
Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex, and T wave) in a single heartbeat, which can vary across individuals and diseases. In addition, existing statistical ECG models rely on a priori information obtained from the ECG data by preprocessing algorithms, either to initialize the filter parameters or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that performs simultaneous model selection by adaptively choosing among different representations depending on the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. Finally, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility, and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time; such knowledge would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure of the accuracy of the estimator.
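As a hedged illustration of the recursive processing described above (a generic recursive least squares fit, not the paper's exact algorithm), the in-phase and quadrature amplitudes of a candidate frequency can be updated one sample at a time, with no batch storage; amplitude and phase then follow from the two fitted coefficients.

```python
import numpy as np

def rls_phase_estimate(samples, times, omega, lam=1.0):
    """Recursive least squares fit of y ~ a*cos(w t) + b*sin(w t).
    Observations are processed one at a time, so the estimate improves
    recursively and the batch never needs to be stored."""
    theta = np.zeros(2)        # [a, b]
    P = np.eye(2) * 1e6        # large initial covariance: no prior knowledge
    for t, y in zip(times, samples):
        h = np.array([np.cos(omega * t), np.sin(omega * t)])
        k = P @ h / (lam + h @ P @ h)        # gain
        theta = theta + k * (y - h @ theta)  # innovation update
        P = (P - np.outer(k, h @ P)) / lam
    amp = np.hypot(theta[0], theta[1])
    phase = np.arctan2(theta[1], theta[0])
    return amp, phase

# noise-free check: y = 2*cos(5t - 0.5), i.e. amplitude 2 and phase 0.5
t = np.arange(200) * 0.01
y = 2.0 * np.cos(5.0 * t - 0.5)
amp, phase = rls_phase_estimate(y, t, omega=5.0)
```

The forgetting factor `lam` < 1 would let the fit track a slowly time-varying frequency or phase, at the cost of a noisier estimate.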
Estimation of the interior parameters from Mars nutations and from Doppler measurements
NASA Astrophysics Data System (ADS)
Yseboodt, M.; Rivoldini, A.; Le Maistre, S.; Dehant, V. M. A.
2017-12-01
The presence of a liquid core inside Mars changes the nutations: the nutation amplitudes can be resonantly amplified because of a free mode, called the free core nutation (FCN). We quantify how the internal structure, in particular the size of the core, affects the nutation amplifications and the Doppler observable between a Martian lander and the Earth. Present-day core size estimates suggest that the effect is largest on the prograde semi-annual and retrograde ter-annual nutations. We solve the inverse problem assuming a given precision on the nutation amplifications provided by an extensive set of geodesy measurements, and we estimate the precision on the core properties. Such measurements will be available in the near future thanks to the geodesy experiments RISE (InSight mission) and LaRa (ExoMars mission). We find that the precision on the core properties is very dependent on the proximity of the FCN period to the ter-annual forcing (-229 days) and the assumed a priori precision on the nutations.
Multichannel blind deconvolution of spatially misaligned images.
Sroubek, Filip; Flusser, Jan
2005-07-01
Existing multichannel blind restoration techniques assume perfect spatial alignment of channels, correct estimation of blur size, and are prone to noise. We developed an alternating minimization scheme based on a maximum a posteriori estimation with a priori distribution of blurs derived from the multichannel framework and a priori distribution of original images defined by the variational integral. This stochastic approach enables us to recover the blurs and the original image from channels severely corrupted by noise. We observe that the exact knowledge of the blur size is not necessary, and we prove that translation misregistration up to a certain extent can be automatically removed in the restoration process.
A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then solve one second-order equation with a standard finite element method and handle the other with a new mixed finite element method. In the new mixed method, the gradient ∇u belongs to the weaker (L²(Ω))² space in place of the classical H(div; Ω) space. We prove a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. We also obtain optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both the semidiscrete and fully discrete schemes.
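The decomposition described above can be written out explicitly. The splitting into two second-order equations follows directly from the abstract; the particular extrapolated linearization shown in the time-discrete step is one common choice and is an assumption, not necessarily the authors' exact scheme.

```latex
% EFK equation: u_t + \gamma \Delta^2 u - \Delta u + u^3 - u = 0.
% Introduce the auxiliary variable w = -\Delta u, so \Delta^2 u = -\Delta w:
\begin{aligned}
  u_t - \gamma \Delta w + w + u^3 - u &= 0, \\
  w + \Delta u &= 0 .
\end{aligned}
% One common linearized Crank--Nicolson step (\tau = time step, with the
% extrapolated value \hat{u}^{n+1/2} = \tfrac{3}{2} u^n - \tfrac{1}{2} u^{n-1}
% making the nonlinear terms linear in the unknowns):
\frac{u^{n+1} - u^n}{\tau}
  - \gamma \Delta w^{n+1/2} + w^{n+1/2}
  + \bigl(\hat{u}^{n+1/2}\bigr)^3 - \hat{u}^{n+1/2} = 0,
\qquad
w^{n+1/2} = \tfrac{1}{2}\bigl(w^{n+1} + w^n\bigr).
```

Because the nonlinearity is evaluated at the extrapolated value, each time step requires only one linear solve, which is the point of the "linearized" Crank-Nicolson approach.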
Six-hourly time series of horizontal troposphere gradients in VLBI analysis
NASA Astrophysics Data System (ADS)
Landskron, Daniel; Hofmeister, Armin; Mayer, David; Böhm, Johannes
2016-04-01
Consideration of horizontal gradients is indispensable for high-precision VLBI and GNSS analysis. As a rule of thumb, all observations below 15 degrees elevation need to be corrected for the influence of azimuthal asymmetry on the delay times, which is mainly a product of the non-spherical shape of the atmosphere and ever-changing weather conditions. Based on the well-known gradient estimation model by Chen and Herring (1997), we developed an augmented gradient model with additional parameters which are determined from ray-traced delays for the complete history of VLBI observations. As input to the ray-tracer, we used operational and re-analysis data from the European Centre for Medium-Range Weather Forecasts. Finally, we applied those a priori gradient parameters to VLBI analysis along with other empirical gradient models and assessed their impact on baseline length repeatabilities as well as on celestial and terrestrial reference frames.
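The Chen and Herring (1997) gradient model referenced above is usually written as follows (reproduced from the standard formulation; the constant C ≈ 0.0032 applies to the total delay):

```latex
% Gradient contribution to the slant delay at azimuth a and elevation e;
% G_N and G_E are the north and east gradient amplitudes:
\Delta L_{\mathrm{grad}}(a, e) \;=\; m_{\mathrm{g}}(e)\,
  \bigl[\, G_N \cos a + G_E \sin a \,\bigr],
\qquad
m_{\mathrm{g}}(e) \;=\; \frac{1}{\sin e \,\tan e + C}.
```

The steep growth of the gradient mapping function m_g(e) at low elevations is why observations below about 15 degrees are the ones that must be corrected for azimuthal asymmetry.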
Optical Estimation of the 3D Shape of a Solar Illuminated, Reflecting Satellite Surface
NASA Astrophysics Data System (ADS)
Antolin, J.; Yu, Z.; Prasad, S.
2016-09-01
The spatial distribution of the polarized component of the power reflected by a macroscopically smooth but microscopically roughened curved surface under highly directional illumination, as characterized by an appropriate bi-directional reflectance distribution function (BRDF), carries information about the three-dimensional (3D) shape of the surface. This information can be exploited to recover the surface shape locally under rather general conditions whenever power reflectance data for at least two different illumination or observation directions can be obtained. We present here two different parametric approaches for surface reconstruction, amounting to the recovery of the surface parameters that are either the global parameters of the family to which the surface is known a priori to belong or the coefficients of a low-order polynomial that can be employed to characterize a smoothly varying surface locally over the observed patch.
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost of estimating thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. Regions of poor configuration space overlap are detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water.
We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
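The linear-basis trick described above can be sketched in a few lines: if the potential is written as U(x; λ) = U₀(x) + Σₖ λₖ hₖ(x), then the reduced energies needed for every trial parameter set are a single matrix product over stored basis-function energies, with no re-simulation. The arrays below are hypothetical stand-ins for simulation output.

```python
import numpy as np

def reduced_potentials(u_base, basis_energies, lambdas, beta=1.0):
    """Evaluate u_l(x_n) = beta * (U0(x_n) + sum_k lambda_{l,k} h_k(x_n))
    for every parameter combination l and every stored sample n as one
    matrix product over the precomputed basis energies."""
    # basis_energies: (K, N) array of h_k(x_n); lambdas: (L, K)
    return beta * (u_base[None, :] + lambdas @ basis_energies)

# hypothetical sizes: 3 basis functions, 5 samples, 4 parameter combinations
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 5))      # stored basis-function energies
u0 = rng.normal(size=5)          # stored reference-state energies
lam = rng.normal(size=(4, 3))    # trial parameter combinations
u_kn = reduced_potentials(u0, h, lam)
```

The resulting (L × N) matrix of reduced potentials is the kind of input a multistate reweighting estimator such as MBAR consumes; evaluating 130,000 parameter combinations this way costs a matrix multiply rather than 130,000 simulations.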
High-precision radiometric tracking for planetary approach and encounter in the inner solar system
NASA Technical Reports Server (NTRS)
Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.
1989-01-01
The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. 
However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
A Priori Estimation of Organic Reaction Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.
2015-07-21
A thermodynamically guided calculation of free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through the activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with yields that have been reported previously. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15 %. This ability has the potential to translate into significant economic savings through the selection and then execution of only those reactions that can proceed in good yields.
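The core thermodynamic relation behind such yield estimates can be sketched for a simple A ⇌ B reaction: the equilibrium constant follows from the reaction free energy, and the equilibrium fraction of product gives the ideal yield. The sketch below keeps only this ideal-solution core and ignores the PC-SAFT activity-coefficient corrections that the paper layers on top.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_yield(delta_g, temperature=298.15):
    """Fractional yield of B for A <=> B at equilibrium in an ideal
    solution. delta_g: reaction free energy in J/mol (product minus
    substrate); negative delta_g favors the product."""
    K = math.exp(-delta_g / (R * temperature))  # equilibrium constant
    return K / (1.0 + K)                        # fraction converted to B
```

A reaction with ΔG = 0 yields exactly 50 %, while ΔG of about -10 kJ/mol already pushes the ideal yield above 95 %, which is why even ±15 % accuracy in the free-energy-based estimate is enough to triage reactions.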
Selection of latent variables for multiple mixed-outcome models
ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification distorts the dependence structure and leads to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal, and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219
Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing
2013-11-18
Precise Point Positioning (PPP) has become a very hot topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is ionospheric-delay-constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays as well as delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM) updated every two hours and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. From the results of processing a large amount of data, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up convergence for more practical applications.
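The ionospheric constraint can be illustrated as a weighted least-squares fusion of data-derived delay estimates with an a priori model value treated as a pseudo-observation. This is a one-parameter sketch of the idea, not the full IC-PPP filter (which also carries ambiguities, DCBs, and coordinates); all numbers are hypothetical.

```python
import numpy as np

def constrained_iono_estimate(obs, obs_var, prior, prior_var):
    """Fuse epoch estimates of one slant ionospheric delay with an a
    priori model value (e.g. from GIM or a regional model) treated as a
    pseudo-observation with its own variance."""
    w_obs = np.size(obs) / obs_var   # information from the data
    w_pri = 1.0 / prior_var          # information from the model
    x = (np.sum(obs) / obs_var + prior / prior_var) / (w_obs + w_pri)
    var = 1.0 / (w_obs + w_pri)
    return x, var

# a precise prior (small prior_var) pulls the estimate toward the model,
# which is what shortens convergence when the regional model is good
x_good, _ = constrained_iono_estimate([1.0, 1.1, 0.9], 0.04, 1.05, 0.01)
x_weak, _ = constrained_iono_estimate([1.0, 1.1, 0.9], 0.04, 1.05, 1.0)
```

The same mechanism explains the DCB finding: an unmodeled receiver bias shifts `obs` systematically, so a tight prior then fights the biased data and convergence stalls unless the bias is estimated as its own parameter.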
Filtering observations without the initial guess
NASA Astrophysics Data System (ADS)
Chin, T. M.; Abbondanza, C.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; Soja, B.; Wu, X.
2017-12-01
Noisy geophysical observations sampled irregularly over space and time are often numerically "analyzed" or "filtered" before scientific usage. The standard analysis and filtering techniques based on the Bayesian principle require an "a priori" joint distribution of all the geophysical parameters of interest. However, such prior distributions are seldom known fully in practice, and best-guess mean values (e.g., "climatology" or "background" data if available) accompanied by some arbitrarily set covariance values are often used in lieu. It is therefore desirable to be able to exploit efficient (time-sequential) Bayesian algorithms like the Kalman filter while not being forced to provide a prior distribution (i.e., initial mean and covariance). An example of this is the estimation of the terrestrial reference frame (TRF), where the requirement for numerical precision is such that any use of a priori constraints on the observation data needs to be minimized. We will present the Information Filter algorithm, a variant of the Kalman filter that does not require an initial distribution, and apply the algorithm (and an accompanying smoothing algorithm) to the TRF estimation problem. We show that the information filter allows temporal propagation of partial information on the distribution (the marginal distribution of a transformed version of the state vector), instead of the full distribution (mean and covariance) required by the standard Kalman filter. The information filter appears to be a natural choice for the task of filtering observational data in general cases where a prior assumption on the initial estimate is not available and/or desirable. For application to data assimilation problems, reduced-order approximations of both the information filter and the square-root information filter (SRIF) have been published, and the former has previously been applied to a regional configuration of the HYCOM ocean general circulation model. Such approximation approaches are also briefly discussed in the presentation.
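A minimal sketch of the measurement update in information form shows why no initial guess is needed: starting from zero information (Λ = 0) is perfectly well defined, whereas the covariance form of the Kalman filter would require an arbitrary, very large initial P₀. The observation geometry below is hypothetical.

```python
import numpy as np

def info_update(Lam, eta, H, R_inv, y):
    """Measurement update in information form, where Lam = inv(P) and
    eta = Lam @ x. Valid even when Lam is singular (partial information),
    so the filter can start from total ignorance, Lam = 0."""
    Lam_new = Lam + H.T @ R_inv @ H
    eta_new = eta + H.T @ R_inv @ y
    return Lam_new, eta_new

# start from zero information: no initial mean or covariance supplied
Lam = np.zeros((2, 2))
eta = np.zeros(2)
# two scalar observations of a 2-state vector (hypothetical geometry):
# first x1 alone, then the sum x1 + x2, each with unit noise variance
for H, y in [(np.array([[1.0, 0.0]]), np.array([3.0])),
             (np.array([[1.0, 1.0]]), np.array([5.0]))]:
    Lam, eta = info_update(Lam, eta, H, np.eye(1), y)

x_hat = np.linalg.solve(Lam, eta)  # recoverable once Lam becomes invertible
```

After the first observation Λ is still singular (only x1 is constrained), which is exactly the "partial information" propagation described above; the state estimate is extracted only once enough observations have accumulated.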
Sonka, Milan; Abramoff, Michael D.
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution proposed for the noise-free data plays a key role in the performance of the MMSE estimator, an a priori distribution for the pdf of the noise-free 3D complex wavelet coefficients is proposed that captures the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained based on using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR. PMID:24222760
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system, in a feedback control loop, is considered globally uncertain. Parametric uncertainty is modeled in interval form. The system parameter undergoing degradation (the prognostic candidate) has a degradation model that is assumed to be known a priori. The detection of degradation commencement is done in a passive manner, using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs). The latter forms an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (the prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the state of health (the state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters are estimated in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties arising from noisy measurements, the parametric degradation process, environmental conditions, etc. are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
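The joint state-parameter idea can be sketched with a toy particle filter: each particle carries both the health state and a hidden degradation rate, and resampling against the observations concentrates the particle cloud around the true rate. This is a generic illustration under an assumed linear degradation model, not the paper's BG/I-ARR construction; all models and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500):
    """Joint state-parameter estimation: particle i carries health state
    x[i] and hidden degradation rate r[i]. The degradation model
    x_k = x_{k-1} - r + noise is assumed known a priori."""
    x = np.full(n_particles, 1.0)             # initial health state
    r = rng.uniform(0.0, 0.05, n_particles)   # unknown rate, vague prior
    for y in observations:
        # artificial evolution keeps the parameter particles diverse
        r = r + rng.normal(0.0, 1e-3, n_particles)
        x = x - r + rng.normal(0.0, 1e-3, n_particles)
        w = np.exp(-0.5 * ((y - x) / 0.02) ** 2)   # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x, r = x[idx], r[idx]
    return x.mean(), r.mean()

# synthetic degradation with true rate 0.01 and noisy measurements
truth = 1.0 - 0.01 * np.arange(1, 41)
obs = truth + rng.normal(0.0, 0.02, truth.size)
x_est, r_est = particle_filter(obs)
```

Once the rate posterior has concentrated, propagating the particles forward without measurement updates until they cross a failure threshold yields the remaining-useful-life distribution with its confidence bounds.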
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters, and a convergence result for the method is provided. Finally, results using data from political science, finance, and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
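The dynamic program over quantized slope levels can be sketched as a small Viterbi recursion. This is an illustrative simplification: a single constant breakpoint penalty stands in for the paper's full HMM transition model, and the function and variable names are hypothetical.

```python
import numpy as np

def map_slopes(y, dx, slope_levels, sigma, switch_penalty):
    """Viterbi-style MAP estimate of a piecewise-constant slope sequence
    from noisy increments. Slopes are quantized to `slope_levels`;
    `switch_penalty` is the prior cost of introducing a breakpoint."""
    d = np.diff(y) / dx                      # noisy local slope estimates
    n, m = len(d), len(slope_levels)
    cost = 0.5 * ((d[0] - slope_levels) / sigma) ** 2
    back = np.zeros((n, m), dtype=int)
    for k in range(1, n):
        stay = cost                          # keep the same slope level
        switch = cost.min() + switch_penalty # break to the best previous one
        back[k] = np.where(stay <= switch, np.arange(m), cost.argmin())
        cost = np.minimum(stay, switch) + \
            0.5 * ((d[k] - slope_levels) / sigma) ** 2
    path = [int(cost.argmin())]              # trace the optimal path back
    for k in range(n - 1, 0, -1):
        path.append(int(back[k][path[-1]]))
    return slope_levels[np.array(path[::-1])]

# noise-free piecewise-linear signal: slope 1 for 10 steps, then slope 3
y = np.concatenate(([0.0], np.cumsum([1.0] * 10 + [3.0] * 10)))
est = map_slopes(y, 1.0, np.array([0.0, 1.0, 2.0, 3.0]), 0.5, 2.0)
```

The recursion is O(n·m) rather than exponential in the number of breakpoints, which is what makes the discretization of slope values worthwhile.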
Conceptual Model Evaluation using Advanced Parameter Estimation Techniques with Heat as a Tracer
NASA Astrophysics Data System (ADS)
Naranjo, R. C.; Morway, E. D.; Healy, R. W.
2016-12-01
Temperature measurements made at multiple depths beneath the sediment-water interface has proven useful for estimating seepage rates from surface-water channels and corresponding subsurface flow direction. Commonly, parsimonious zonal representations of the subsurface structure are defined a priori by interpretation of temperature envelopes, slug tests or analysis of soil cores. However, combining multiple observations into a single zone may limit the inverse model solution and does not take full advantage of the information content within the measured data. Further, simulating the correct thermal gradient, flow paths, and transient behavior of solutes may be biased by inadequacies in the spatial description of subsurface hydraulic properties. The use of pilot points in PEST offers a more sophisticated approach to estimate the structure of subsurface heterogeneity. This presentation evaluates seepage estimation in a cross-sectional model of a trapezoidal canal with intermittent flow representing four typical sedimentary environments. The recent improvements in heat as a tracer measurement techniques (i.e. multi-depth temperature probe) along with use of modern calibration techniques (i.e., pilot points) provides opportunities for improved calibration of flow models, and, subsequently, improved model predictions.
Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.
Lam, Clifford; Fan, Jianqing
2009-01-01
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may be present a priori in the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n/n)^(1/2), where s_n is the number of nonzero elements, p_n is the size of the covariance matrix, and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate at which the tuning parameter λ_n goes to 0 have been made explicit and compared under different penalties. As a result, for the L_1 penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: s_n' = O(p_n) at most, among O(p_n^2) parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where s_n' is the number of nonzero elements on the off-diagonal entries. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
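The behavioral difference between the penalties compared above can be seen in the simplest entrywise setting: soft (L1) thresholding of the off-diagonal sample-covariance entries shrinks every surviving entry, while hard thresholding keeps survivors unbiased. This is an illustration of the penalty behavior only, not the paper's full penalized-likelihood estimator.

```python
import numpy as np

def soft_threshold(S, lam):
    """L1-penalty style estimate: shrink off-diagonal entries toward zero
    by lam, introducing a bias of lam on every retained entry."""
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))  # leave the variances unpenalized
    return T

def hard_threshold(S, lam):
    """Hard-threshold style estimate: keep or kill each entry, with no
    shrinkage bias on the survivors (the behavior SCAD approximates)."""
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```

The shrinkage bias of the L1 penalty is the intuition behind the restriction on s_n' above: every retained off-diagonal entry pays a bias of order λ_n, so too many nonzero entries accumulate too much bias, whereas SCAD/hard thresholding leave large entries untouched.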
Determination of Eros Physical Parameters for Near Earth Asteroid Rendezvous Orbit Phase Navigation
NASA Technical Reports Server (NTRS)
Miller, J. K.; Antreasian, P. J.; Georgini, J.; Owen, W. M.; Williams, B. G.; Yeomans, D. K.
1995-01-01
Navigation of the orbit phase of the Near Earth Asteroid Rendezvous (NEAR) mission will require determination of certain physical parameters describing the size, shape, gravity field, attitude and inertial properties of Eros. Prior to launch, little was known about Eros except for its orbit, which could be determined with high precision from ground based telescope observations. Radar bounce and light curve data provided a rough estimate of Eros's shape and a fairly good estimate of the pole, prime meridian and spin rate. However, the determination of the NEAR spacecraft orbit requires a high precision model of Eros's physical parameters, and the ground based data provide only marginal a priori information. Eros is the principal source of perturbations of the spacecraft's trajectory and the principal source of data for determining the orbit. The initial orbit determination strategy is therefore concerned with developing a precise model of Eros. The original plan for Eros orbital operations was to execute a series of rendezvous burns beginning on December 20, 1998 and insert into a close Eros orbit in January 1999. As a result of an unplanned termination of the rendezvous burn on December 20, 1998, the NEAR spacecraft continued on its high velocity approach trajectory and passed within 3900 km of Eros on December 23, 1998. The planned rendezvous burn was delayed until January 3, 1999, which resulted in the spacecraft being placed on a trajectory that slowly returns to Eros, with a subsequent delay of close Eros orbital operations until February 2001. The flyby of Eros provided a brief glimpse and allowed for a crude estimate of the pole, prime meridian and mass of Eros. More importantly for navigation, orbit determination software was executed in the landmark tracking mode to determine the spacecraft orbit, and a preliminary shape and landmark data base has been obtained.
The flyby also provided an opportunity to test orbit determination operational procedures that will be used in February of 2001. The initial attitude and spin rate of Eros, as well as estimates of reference landmark locations, are obtained from images of the asteroid. These initial estimates are used as a priori values for a more precise refinement of these parameters by the orbit determination software, which combines optical measurements with Doppler tracking data to obtain solutions for the required parameters. As the spacecraft is maneuvered closer to the asteroid, estimates of spacecraft state, asteroid attitude, solar pressure, landmark locations and Eros physical parameters including mass, moments of inertia and gravity harmonics are determined with increasing precision. The determination of the elements of the inertia tensor of the asteroid is critical to spacecraft orbit determination and prediction of the asteroid attitude. The moments of inertia about the principal axes are also of scientific interest since they provide some insight into the internal mass distribution. Determination of the principal axes moments of inertia will depend on observing free precession in the asteroid's attitude dynamics. Gravity harmonics are in themselves of interest to science. When compared with the asteroid shape, some insight may be obtained into Eros's internal structure. The location of the center of mass derived from the first degree harmonic coefficients gives a direct indication of overall mass distribution. The second degree harmonic coefficients relate to the radial distribution of mass. Higher degree harmonics may be compared with surface features to gain additional insight into mass distribution. In this paper, estimates of Eros physical parameters obtained from the December 23, 1998 flyby will be presented. This new knowledge will be applied to simplification of Eros orbital operations in February of 2001.
The resulting revision to the orbit determination strategy will also be discussed.
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane.
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
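The mixed linear/nonlinear structure can be illustrated with a toy sketch: for a fixed geometry parameter the slip is obtained analytically by damped linear least squares, while a random-walk Metropolis sampler explores the nonlinear geometry parameter. All names, the scalar data-noise `sigma`, and the damping `mu` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_slip(G, d, mu=1e-2):
    # Conditionally linear step: for fixed geometry (encoded in G),
    # damped least squares gives the slip analytically.
    A = G.T @ G + mu * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d)

def metropolis_geometry(d, make_G, theta0, sigma=0.1, step=0.05, n_iter=2000):
    # Random-walk Metropolis over the nonlinear geometry parameter(s) theta,
    # plugging in the least-squares slip at each proposed geometry.
    theta = np.atleast_1d(np.array(theta0, float))
    def loglike(th):
        G = make_G(th)
        r = d - G @ solve_slip(G, d)
        return -0.5 * np.sum(r**2) / sigma**2
    ll = loglike(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        llp = loglike(prop)
        if np.log(rng.random()) < llp - ll:
            theta, ll = prop, llp
        samples.append(theta.copy())
    return np.array(samples)
```

On a one-parameter synthetic problem the chain concentrates around the geometry that generated the data.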
Improved Bonner sphere neutron spectrometry measurements for the nuclear industry
NASA Astrophysics Data System (ADS)
Roberts, N. J.; Thomas, D. J.; Visser, T. P. P.
2017-11-01
A novel, two-stage approach has been developed for producing the a priori spectrum for Bonner sphere unfolding in a case where neutrons are produced by spontaneous fission and (α,n) reactions, e.g. in UF6. The code SOURCES 4C is first used to obtain the energy spectrum of the neutrons inside the material, which is then fed into an MCNP model of the entire geometry to derive the neutron spectrum at the location of the Bonner sphere. Using this as the a priori spectrum produces a much more detailed unfolded Bonner sphere spectrum, retaining fine structure from the calculation that would not be present if a simple estimated spectrum had been used as the a priori spectrum. This is illustrated using a Bonner sphere measurement of the neutron energy spectrum produced by a 48Y cylinder of UF6. From the unfolded spectrum an estimate has been made of the neutron ambient dose equivalent, i.e. the quantity which a neutron survey instrument should measure. The difference in the ambient dose equivalent of the unfolded spectrum is over 10% when using the novel approach instead of a simpler estimate consisting of a single high energy peak, 1/E continuum, and thermal peak.
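The role of the a priori spectrum in iterative unfolding can be illustrated with a simplified SAND-II-style multiplicative scheme. This is only a sketch of the general technique; the work above used a dedicated unfolding code, and the response matrix and iteration count here are assumptions:

```python
import numpy as np

def unfold(R, counts, phi0, n_iter=200):
    # Simplified SAND-II-style multiplicative unfolding: the a priori
    # spectrum phi0 is iteratively rescaled so the predicted sphere
    # readings R @ phi match the measured counts. Structure present in
    # phi0 but unconstrained by the data survives the iteration, which
    # is why a detailed calculated a priori matters.
    phi = phi0.astype(float).copy()
    for _ in range(n_iter):
        pred = R @ phi
        # weight each energy bin by its contribution to each detector
        W = R * phi[None, :] / pred[:, None]
        corr = np.exp((W * np.log(counts / pred)[:, None]).sum(0) / W.sum(0))
        phi *= corr
    return phi
```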
NASA Astrophysics Data System (ADS)
Heinze, T.; Budler, J.; Weigand, M.; Kemna, A.
2017-12-01
Water content distribution in the ground is essential for hazard analysis during monitoring of landslide prone hills. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil physical relationships between bulk electrical resistivity and water content. However, stronger electrical contrasts due to lithological structures often overwhelm these hydraulic signatures and blur the results of the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change smoothly in space. While this applies in many scenarios, sharp lithological layers with strongly divergent hydrological parameters, as often found in landslide prone hillslopes, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach for improving water content estimation in landslide prone hills by including a priori information about lithological layers. The smoothness constraint is reduced along layer boundaries identified using seismic data. This approach significantly improves water content estimates, because in landslide prone hills a layer of rather high hydraulic conductivity is often followed by a hydraulic barrier such as clay-rich soil, causing higher pore pressures. One saturated layer and one almost drained layer typically also result in a sharp contrast in electrical resistivity, assuming that the surface conductivity of the soil does not vary to a similar degree. Using synthetic data, we study the influence of uncertainties in the a priori information on the inverted resistivity and estimated water content distribution. We find a similar behavior over a broad range of models and depths.
Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally-constrained inversion approach.
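The idea of relaxing the smoothness constraint across a known interface can be sketched on a 1-D toy problem. This is illustrative only; real ERT inversion is nonlinear and 2-D/3-D, and the operator `G` and weights below are assumed:

```python
import numpy as np

def constrained_inversion(G, d, boundary_after, n_cells, lam=1.0, w_break=0.05):
    # Occam-style damped inversion on a 1-D cell chain: first-difference
    # smoothing, with the roughness weight reduced (w_break << 1) across
    # the a priori layer boundary so a sharp contrast can develop there.
    D = np.zeros((n_cells - 1, n_cells))
    for i in range(n_cells - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    w = np.ones(n_cells - 1)
    w[boundary_after] = w_break   # relax smoothness at the interface
    Dw = D * w[:, None]
    A = G.T @ G + lam * Dw.T @ Dw
    return np.linalg.solve(A, G.T @ d)
```

With the relaxed weight the recovered model keeps the sharp jump that uniform smoothing would flatten.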
Mueller, Amy V; Hemond, Harold F
2013-12-15
A novel artificial neural network (ANN) architecture is proposed which explicitly incorporates a priori system knowledge, i.e., relationships between output signals, while preserving the unconstrained non-linear function estimator characteristics of the traditional ANN. A method is provided for architecture layout, disabling training on a subset of neurons, and encoding system knowledge into the neuron structure. The novel architecture is applied to raw readings from a chemical sensor multi-probe (electric tongue), comprised of off-the-shelf ion selective electrodes (ISEs), to estimate individual ion concentrations in solutions at environmentally relevant concentrations and containing environmentally representative ion mixtures. Conductivity measurements and the concept of charge balance are incorporated into the ANN structure, resulting in (1) removal of estimation bias typically seen with use of ISEs in mixtures of unknown composition and (2) improvement of signal estimation by an order of magnitude or more for both major and minor constituents relative to use of ISEs as stand-alone sensors and error reduction by 30-50% relative to use of standard ANN models. This method is suggested as an alternative to parameterization of traditional models (e.g., Nikolsky-Eisenman), for which parameters are strongly dependent on both analyte concentration and temperature, and to standard ANN models which have no mechanism for incorporation of system knowledge. Network architecture and weighting are presented for the base case where the dot product can be used to relate ion concentrations to both conductivity and charge balance as well as for an extension to log-normalized data where the model can no longer be represented in this manner. While parameterization in this case study is analyte-dependent, the architecture is generalizable, allowing application of this method to other environmental problems for which mathematical constraints can be explicitly stated. 
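The charge-balance idea can be illustrated in a linearized toy setting: augmenting the sensor equations with an electroneutrality row plays the role that the constraint plays inside the ANN architecture. This sketch assumes a linear sensor matrix and is not the paper's network:

```python
import numpy as np

def estimate_ions(S, v, z, alpha=10.0):
    # Linearized sketch: sensor readings v ≈ S @ c for ion concentrations c,
    # augmented with a charge-balance row alpha * (z @ c) = 0 that pulls the
    # estimate toward electroneutrality (z holds the ionic charges). This is
    # the system-knowledge constraint the ANN architecture encodes.
    A = np.vstack([S, alpha * z[None, :]])
    b = np.concatenate([v, [0.0]])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```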
NASA Astrophysics Data System (ADS)
Jacquin, A. P.
2012-04-01
This study analyses the effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model's discharge estimates. Prediction uncertainty bounds are derived using the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation (at a single station within the catchment) and a precipitation factor FPi. Thus, these factors provide a simplified representation of the spatial variation of precipitation, specifically the shape of the functional relationship between precipitation and height. In the absence of information about appropriate values of the precipitation factors FPi, these are estimated through standard calibration procedures. The catchment case study is Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. Monte Carlo samples of the model output are obtained by randomly varying the model parameters within their feasible ranges. In the first experiment, the precipitation factors FPi are considered unknown and thus included in the sampling process. The total number of unknown parameters in this case is 16. In the second experiment, precipitation factors FPi are estimated a priori, by means of a long term water balance between observed discharge at the catchment outlet, evapotranspiration estimates and observed precipitation. In this case, the number of unknown parameters reduces to 11. The feasible ranges assigned to the precipitation factors in the first experiment are slightly wider than the range of fixed precipitation factors used in the second experiment. 
The mean squared error of the Box-Cox transformed discharge during the calibration period is used for the evaluation of the goodness of fit of the model realizations. GLUE-type uncertainty bounds during the verification period are derived at the probability levels p=85%, 90% and 95%. Results indicate that, as expected, prediction uncertainty bounds indeed change if precipitation factors FPi are estimated a priori rather than being allowed to vary, but that this change is not dramatic. Firstly, the width of the uncertainty bounds at the same probability level is only slightly reduced compared to the case where precipitation factors are allowed to vary. Secondly, the ability to enclose the observations improves, but the decrease in the fraction of outliers is not significant. These results are probably due to the narrow range of variability allowed to the precipitation factors FPi in the first experiment, which implies that although they indicate the shape of the functional relationship between precipitation and height, the magnitude of the precipitation estimates was mainly determined by the magnitude of the observations at the available raingauge. It is probable that a situation with no prior information on the realistic ranges of variation of the precipitation factors, or with precipitation data uncertainty included, would have led to a different conclusion. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
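The GLUE procedure used here follows a generic recipe that can be sketched as follows: sample parameters from their feasible ranges, score each realization, keep a behavioural set, and form likelihood-weighted prediction quantiles. The likelihood weighting and behavioural threshold below are assumed for illustration, not the study's actual choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def glue_bounds(model, obs, priors, n_samples=5000, keep_frac=0.1, level=0.9):
    # GLUE sketch: Monte Carlo sampling from uniform feasible ranges,
    # inverse-SSE likelihood, behavioural subset, weighted quantile bounds.
    lo, hi = priors
    thetas = lo + (hi - lo) * rng.random((n_samples, len(lo)))
    sims = np.array([model(t) for t in thetas])
    sse = ((sims - obs) ** 2).sum(axis=1)
    keep = np.argsort(sse)[: int(keep_frac * n_samples)]
    w = 1.0 / sse[keep]
    w /= w.sum()
    bounds = []
    for k in range(sims.shape[1]):     # weighted quantiles per time step
        order = np.argsort(sims[keep, k])
        cdf = np.cumsum(w[order])
        lo_b = sims[keep, k][order][np.searchsorted(cdf, (1 - level) / 2)]
        hi_b = sims[keep, k][order][np.searchsorted(cdf, (1 + level) / 2)]
        bounds.append((lo_b, hi_b))
    return np.array(bounds)
```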
VLBI-SLR Combination Solution Using GEODYN
NASA Technical Reports Server (NTRS)
MacMillan, Dan; Pavlis, Despina; Lemoine, Frank; Chinn, Douglas; Rowlands, David
2010-01-01
We would like to generate a multi-technique solution combining all of the geodetic techniques (VLBI, SLR, GPS, and DORIS) using the same software and the same a priori models. Here we use the GEODYN software and consider only the VLBI-SLR combination, and we report initial results of our work on this combination. We first performed solutions with GEODYN using only VLBI data and found that VLBI EOP solution results produced with GEODYN agree with results using CALC/SOLVE at the 1-sigma level. We then combined the VLBI normal equations in GEODYN with weekly SLR normal equations for the period 2007-2008. Agreement of the estimated Earth orientation parameters with IERS C04 was not significantly different for the VLBI-only, SLR-only, and VLBI+SLR solutions.
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Yang, Liangdong; Gao, Jiawei; Zhang, Xingwu
2017-11-01
In the field of active noise and vibration control (ANVC), a considerable part of unwelcome noise and vibration results from rotational machines, making the spectrum of the response signal multi-frequency. Narrowband filtered-x least mean square (NFXLMS) is a very popular algorithm for suppressing such noise and vibration. It has good performance since a priori knowledge of the fundamental frequency of the noise source (called the reference frequency) is adopted. However, if this a priori knowledge is inaccurate, the control performance will be dramatically degraded. This phenomenon is called reference frequency mismatch (RFM). In this paper, a novel narrowband ANVC algorithm with an orthogonal pair-wise reference frequency regulator is proposed to compensate for the RFM problem. Firstly, the RFM phenomenon in traditional NFXLMS is closely investigated both analytically and numerically. The results show that RFM changes the parameter estimation problem of the adaptive controller into a parameter tracking problem. Then, adaptive sinusoidal oscillators with output rectification are introduced as the reference frequency regulator to compensate for the RFM problem. The simulation results show that the proposed algorithm can dramatically suppress multiple-frequency noise and vibration with an improved convergence rate whether or not there is RFM. Finally, case studies using experimental data are conducted under conditions of no, small and large RFM. The shaft radial run-out signal of a rotor test-platform is applied to simulate the primary noise, and an IIR model identified from a real steel structure is applied to simulate the secondary path. The results further verify the robustness and effectiveness of the proposed algorithm.
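The narrowband FXLMS idea, before any reference frequency regulation, can be sketched for a single tone using quadrature references at the assumed fundamental. The scalar secondary-path model and step size below are illustrative assumptions:

```python
import numpy as np

def nfxlms(d, f0, fs, sec_path_est=1.0, mu=0.01):
    # Narrowband FXLMS sketch for one tone: two adaptive weights on
    # cosine/sine references at the assumed fundamental f0. sec_path_est
    # is the (here trivially scalar) secondary-path model used to form
    # the filtered-x references. If f0 is wrong (RFM), the weights must
    # track a rotating phase and performance degrades.
    n = len(d)
    t = np.arange(n) / fs
    x = np.stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
    xf = sec_path_est * x           # filtered-x references
    w = np.zeros(2)
    e = np.zeros(n)
    for k in range(n):
        y = w @ x[:, k]             # controller output
        e[k] = d[k] - y             # residual after cancellation
        w += mu * e[k] * xf[:, k]   # LMS update
    return e
```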
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
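The qualitative difference between Monod kinetics and a fixed zero- or first-order rate can be sketched with a dual-Monod update in which degradation is co-limited by the contaminant and the electron acceptor while biomass grows. The parameter values and stoichiometry below are illustrative, not the BIO3D inputs:

```python
def monod_step(C, O, X, dt, mu_max=2.0, Kc=0.5, Ko=0.1, Y=0.5, F=3.0):
    # Dual-Monod sketch: the rate slows when either the contaminant C or
    # the electron acceptor (oxygen) O is scarce, and biomass X grows --
    # exactly the couplings that fixed zero/first-order rates ignore.
    # Y is a biomass yield, F an assumed oxygen-use stoichiometric factor.
    rate = mu_max * X * (C / (Kc + C)) * (O / (Ko + O))
    return (max(C - dt * rate / Y, 0.0),   # contaminant consumed
            max(O - dt * F * rate / Y, 0.0),  # oxygen consumed faster (F > 1)
            X + dt * rate)                 # biomass growth
```

Iterating this update, degradation stalls once oxygen is exhausted even though contaminant remains, which a first-order rate cannot reproduce.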
The effect of directivity in a PSHA framework
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2012-09-01
We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented to the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. One considers the seismic source as an extended source, and is valid when the PSHA seismogenetic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent of the hazard level in the case of dip-slip faults with a uniform distribution of hypocentre location, in terms of spectral acceleration response at 5 s and exceedance probability of 10 per cent in 50 yr. The second strategy concerns the more general problem of seismogenetic areas, where each point is a seismogenetic source having the same chance of nucleating a seismic event. In our proposition the point source is associated with rupture-related parameters, defined using a statistical description. As an example, we consider a source point of an area characterized by a strike-slip faulting style. With the introduction of the directivity correction the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not increase the hazard level uniformly, but acts more like a redistribution of the estimate that is consistent with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge nowadays exists on the style of faulting, dip and orientation of the faults associated with the majority of the seismogenetic zones of present seismic hazard maps. The percentage of variation obtained is strongly dependent on the type of model chosen to represent the directivity effect analytically.
Therefore, our aim is to emphasize the methodology by which all the collected information may easily be converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.
NASA Technical Reports Server (NTRS)
Philip, Sajeev; Johnson, Matthew S.
2018-01-01
Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric fluxes. The processes controlling terrestrial biosphere-atmosphere carbon exchange are currently not fully understood, resulting in terrestrial biospheric models having significant differences in the quantification of biospheric CO2 fluxes. Atmospheric transport models assimilating measured (in situ or space-borne) CO2 concentrations to estimate "top-down" fluxes generally use these biospheric CO2 fluxes as a priori information. Most of the flux inversions result in substantially different spatio-temporal a posteriori estimates of regional and global biospheric CO2 fluxes. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission, dedicated to accurately measuring column CO2 (XCO2), allows for an improved understanding of global biospheric CO2 fluxes. OCO-2 provides much-needed CO2 observations in data-limited regions, facilitating better global and regional estimates of "top-down" CO2 fluxes through inversion model simulations. The specific objectives of our research are to: 1) conduct GEOS-Chem 4D-Var assimilation of OCO-2 observations, using several state-of-the-science biospheric CO2 flux models as a priori information, to better constrain terrestrial CO2 fluxes, and 2) quantify the impact of different biospheric model prior fluxes on OCO-2-assimilated a posteriori CO2 flux estimates. Here we present our assessment of the importance of these a priori fluxes by conducting Observing System Simulation Experiments (OSSE) using simulated OCO-2 observations with known "true" fluxes.
Non-linear motions in reprocessed GPS station position time series
NASA Astrophysics Data System (ADS)
Rudenko, Sergei; Gendt, Gerd
2010-05-01
Global Positioning System (GPS) data from about 400 globally distributed stations spanning 1998 to 2007 were reprocessed using GFZ Potsdam EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, with the purpose of determining weekly precise coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution, GT1, is based on an absolute phase center variation model, ITRF2005 as the a priori reference frame, and other new models. The solution also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.
Polydisperse sphere packing in high dimensions, a search for an upper critical dimension
NASA Astrophysics Data System (ADS)
Morse, Peter; Clusel, Maxime; Corwin, Eric
2012-02-01
The recently introduced granocentric model for polydisperse sphere packings has been shown to be in good agreement with experimental and simulational data in two and three dimensions. This model relies on two effective parameters that have to be estimated from experimental/simulational results. The non-trivial values obtained allow the model to take into account the essential effects of correlations in the packing. Once these parameters are set, the model provides a full statistical description of a sphere packing for a given polydispersity. We investigate the evolution of these effective parameters with the spatial dimension to see if, in analogy with the upper critical dimension in critical phenomena, there exists a dimension above which correlations become irrelevant and the model parameters can be fixed a priori as a function of polydispersity. This would turn the model into a proper theory of polydisperse sphere packings at that upper critical dimension. We perform infinite temperature quench simulations of frictionless polydisperse sphere packings in dimensions 2-8 using a parallel algorithm implemented on a GPGPU. We analyze the resulting packings by implementing an algorithm to calculate the additively weighted Voronoi diagram in arbitrary dimension.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to make it suitable for seismic inversion. The first strategy is to select and update local areas with a bad fit between synthetic and real seismic data. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimates.
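The core of the gradual deformation method is that a trigonometric combination of two independent standard Gaussian realizations is again standard Gaussian, so the deformation parameter t can be varied continuously without leaving the prior model. A minimal sketch:

```python
import numpy as np

def gradual_deformation(z1, z2, t):
    # Gradual deformation: combine two independent standard Gaussian
    # fields z1, z2 so that the result is again standard Gaussian for
    # every t (since cos^2 + sin^2 = 1); optimizing over t deforms the
    # realization continuously toward a better data fit.
    return z1 * np.cos(np.pi * t) + z2 * np.sin(np.pi * t)
```

The "local" variant in the paper applies separate deformation parameters to different regions of the model.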
A priori Estimates for 3D Incompressible Current-Vortex Sheets
NASA Astrophysics Data System (ADS)
Coulombel, J.-F.; Morando, A.; Secchi, P.; Trebeschi, P.
2012-04-01
We consider the free boundary problem for current-vortex sheets in ideal incompressible magneto-hydrodynamics. It is known that current-vortex sheets may be at most weakly (neutrally) stable due to the existence of surface waves solutions to the linearized equations. The existence of such waves may yield a loss of derivatives in the energy estimate of the solution with respect to the source terms. However, under a suitable stability condition satisfied at each point of the initial discontinuity and a flatness condition on the initial front, we prove an a priori estimate in Sobolev spaces for smooth solutions with no loss of derivatives. The result of this paper gives some hope for proving the local existence of smooth current-vortex sheets without resorting to a Nash-Moser iteration. Such result would be a rigorous confirmation of the stabilizing effect of the magnetic field on Kelvin-Helmholtz instabilities, which is well known in astrophysics.
Birznieks, Ingvars; Redmond, Stephen J.
2015-01-01
Dexterous manipulation is not possible without sensory information about object properties and manipulative forces. Fundamental neuroscience has been unable to demonstrate how information about multiple stimulus parameters may be continuously extracted, concurrently, from a population of tactile afferents. This is the first study to demonstrate this, using spike trains recorded from tactile afferents innervating the monkey fingerpad. A multiple-regression model, requiring no a priori knowledge of stimulus-onset times or stimulus combination, was developed to obtain continuous estimates of instantaneous force and torque. The stimuli consisted of a normal-force ramp (to a plateau of 1.8, 2.2, or 2.5 N), on top of which −3.5, −2.0, 0, +2.0, or +3.5 mNm torque was applied about the normal to the skin surface. The model inputs were sliding windows of binned spike counts recorded from each afferent. Models were trained and tested by 15-fold cross-validation to estimate instantaneous normal force and torque over the entire stimulation period. With the use of the spike trains from 58 slow-adapting type I and 25 fast-adapting type I afferents, the instantaneous normal force and torque could be estimated with small error. This study demonstrated that instantaneous force and torque parameters could be reliably extracted from a small number of tactile afferent responses in a real-time fashion with stimulus combinations that the model had not been exposed to during training. Analysis of the model weights may reveal how interactions between stimulus parameters could be disentangled for complex population responses and could be used to test neurophysiologically relevant hypotheses about encoding mechanisms. PMID:25948866
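The decoding scheme described above amounts to linear regression on a sliding window of binned spike counts. A minimal ridge-regression sketch, with an assumed window length and regularization (not the authors' cross-validated model):

```python
import numpy as np

def fit_decoder(spike_counts, target, window=5, lam=1e-3):
    # Sketch of the multiple-regression decoder: each design-matrix row
    # holds a sliding window of binned spike counts from every afferent,
    # and ridge least squares maps it to the instantaneous force/torque
    # signal, with no knowledge of stimulus-onset times.
    n_units, n_bins = spike_counts.shape
    rows = []
    for k in range(window - 1, n_bins):
        rows.append(spike_counts[:, k - window + 1:k + 1].ravel())
    X = np.array(rows)
    y = target[window - 1:]
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return w, X @ w
```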
The Order-Restricted Association Model: Two Estimation Algorithms and Issues in Testing
ERIC Educational Resources Information Center
Galindo-Garre, Francisca; Vermunt, Jeroen K.
2004-01-01
This paper presents a row-column (RC) association model in which the estimated row and column scores are forced to be in agreement with a priori specified ordering. Two efficient algorithms for finding the order-restricted maximum likelihood (ML) estimates are proposed and their reliability under different degrees of association is investigated by…
Catastrophic photometric redshift errors: Weak-lensing survey requirements
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of the spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
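A toy numerical sketch of the solve-for/consider partitioning mentioned above: parameters in x are estimated, parameters in c are merely "considered", and their a priori covariance inflates the estimate's error covariance. This is a minimal batch least-squares case, not the paper's full formalism (which also treats process noise and sequential estimators); all matrices are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 3))   # partials w.r.t. solve-for parameters
B = rng.normal(size=(30, 2))   # partials w.r.t. consider parameters
R = 0.01 * np.eye(30)          # measurement-noise covariance
C = np.diag([0.5, 0.5])        # a priori consider covariance

# The least-squares estimator ignores c; its error covariance then
# has a measurement-noise part plus a consider part.
Ri = np.linalg.inv(R)
N_inv = np.linalg.inv(A.T @ Ri @ A)
P_noise = N_inv                           # noise-only covariance
S = -N_inv @ A.T @ Ri @ B                 # sensitivity to consider params
P_total = P_noise + S @ C @ S.T           # consider-augmented covariance

print(np.diag(P_noise), np.diag(P_total))
```

The consider term S C S^T is positive semidefinite, so the augmented variances can only grow relative to the noise-only analysis.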
Reflectance from images: a model-based approach for human faces.
Fuchs, Martin; Blanz, Volker; Lensch, Hendrik; Seidel, Hans-Peter
2005-01-01
In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.
Guidelines for a priori grouping of species in hierarchical community models
Pacifici, Krishna; Zipkin, Elise; Collazo, Jaime; Irizarry, Julissa I.; DeWan, Amielle A.
2014-01-01
Recent methodological advances permit the estimation of species richness and occurrences for rare species by linking species-level occurrence models at the community level. The value of such methods is underscored by the ability to examine the influence of landscape heterogeneity on species assemblages at large spatial scales. A salient advantage of community-level approaches is that parameter estimates for data-poor species are more precise, as the estimation process borrows from data-rich species. However, this analytical benefit raises a question about the degree to which inferences are dependent on the implicit assumption of relatedness among species. Here, we assess the sensitivity of community/group-level metrics and individual species-level inferences to various classification schemes for grouping species assemblages using multispecies occurrence models. We explore the implications of these groupings on parameter estimates for avian communities in two ecosystems: tropical forests in Puerto Rico and temperate forests in the northeastern United States. We report on the classification performance and the extent of variability in occurrence probabilities and species richness estimates that can be observed depending on the classification scheme used. We found estimates of species richness to be most precise and to have the best predictive performance when all of the data were grouped at a single community level. Community/group-level parameters appear to be heavily influenced by the grouping criteria, but were not driven strictly by the total number of detections for species. We found that different grouping schemes can provide an opportunity to identify unique assemblage responses that would not have been found if all of the species were analyzed together.
We suggest three guidelines: (1) classification schemes should be determined based on study objectives; (2) model selection should be used to quantitatively compare different classification approaches; and (3) sensitivity of results to different classification approaches should be assessed. These guidelines should help researchers apply hierarchical community models in the most effective manner.
An a priori model for the reduction of nutation observations: KSV(1994.3) nutation series
NASA Technical Reports Server (NTRS)
Herring, T. A.
1995-01-01
We discuss the formulation of a new nutation series to be used in the reduction of modern space geodetic data. The motivation is to provide a nutation series that has smaller short-period errors than the IAU 1980 nutation series and that can be used with techniques such as the Global Positioning System (GPS), which have sensitivity to nutations but can directly separate the effects of nutations from errors in the dynamical force models that affect the satellite orbits. A modern nutation series should allow the errors in the force models for GPS to be better understood. The series is constructed by convolving the Kinoshita and Souchay rigid Earth nutation series with an Earth response function whose parameters are partly based on geophysical models of the Earth and partly estimated from a long series (1979-1993) of very long baseline interferometry (VLBI) estimates of nutation angles. Secular rates of change of the nutation angles, representing corrections to the precession constant and a secular change of the obliquity of the ecliptic, are included in the theory. Time-dependent amplitudes of the Free Core Nutation (FCN), which is most likely excited by variations in atmospheric pressure, are included when the geophysical parameters are estimated. The complex components of the prograde annual nutation are estimated simultaneously with the geophysical parameters because of the large contribution to the nutation from the S(sub 1) atmospheric tide. The weighted root mean square (WRMS) scatter of the nutation angle estimates about this new model is 0.32 mas, and the largest correction to the series when the amplitudes of the ten largest nutations are estimated is 0.18 +/- 0.03 mas for the in-phase component of the prograde 18.6-year nutation.
Predicting the size of individual and group differences on speeded cognitive tasks.
Chen, Jing; Hale, Sandra; Myerson, Joel
2007-06-01
An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances and those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without using any free parameters provides the clearest demonstration to date of the power of an analytic approach based on the difference engine.
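The no-free-parameter prediction scheme can be illustrated with a minimal numeric sketch. The proportionality constant and task means below are invented; in the actual model the constructs are estimated from fits to the group SDs.

```python
# Hypothetical group means (ms) for three tasks and an illustrative
# SD-to-processing proportionality constant.
group_means = [400.0, 600.0, 900.0]
prop_const = 0.25

def predict_score(task_mean, z_score, k=prop_const):
    """Predicted performance: group mean plus the person's z-score
    times the model SD, with SD proportional to the processing the
    task requires (proxied here by the task mean)."""
    return task_mean + z_score * k * task_mean

# Subgroups one SD below (fast) and above (slow) the mean:
fast_subgroup = [predict_score(m, -1.0) for m in group_means]
slow_subgroup = [predict_score(m, +1.0) for m in group_means]
print(fast_subgroup, slow_subgroup)
```

The key property the study tests is visible here: once the proportionality constant is fixed from group data, a single z-score generates predictions for every task with no per-task free parameters.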
Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel
2016-01-01
Introduction Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and their robustness with respect to the homogeneity and independence assumptions are, however, not well documented. Methods We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case-information. The scenarios increasingly violated assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results No estimator was unbiased in all scenarios. Performance of the parametric estimators depended on how much of the dependency and heterogeneity were correctly modelled. Model building was limited by parameter estimability, availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20-30% error range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples; overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates of the Belgian incidence for cases aged 50 years and older ranged from 44 to 58/100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8/100,000.
Conclusion We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
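The estimators compared above differ mainly in how they model dependence between sources; the simplest member of the family is the two-source Lincoln-Petersen estimator with Chapman's small-sample correction, sketched here with invented counts.

```python
# Two-source capture-recapture sketch: Chapman-corrected
# Lincoln-Petersen estimator of total population size.
def chapman_estimate(n1, n2, m):
    """n1, n2: cases detected by each surveillance source;
    m: cases detected by both sources."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# e.g. 120 cases in source A, 150 in source B, 60 seen by both:
N_hat = chapman_estimate(120, 150, 60)
print(N_hat)
```

Violations of independence between the two sources bias this estimator (positive dependence pushes it low), which is exactly the robustness issue the simulation study probes with more flexible log-linear and conditional models.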
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthikeyan, R.; Tellier, R. L.; Hebert, A.
2006-07-01
The Coolant Void Reactivity (CVR) is an important safety parameter that needs to be estimated at the design stage of a nuclear reactor. It helps to have an a priori knowledge of the behavior of the system during a transient initiated by the loss of coolant. In the present paper, we have attempted to estimate the CVR for a CANDU New Generation (CANDU-NG) lattice, as proposed at an early stage of the Advanced CANDU Reactor (ACR) development. The CVR was estimated with a development version of the code DRAGON, using the method of characteristics. DRAGON has several advanced self-shielding models incorporated in it, each of them compatible with the method of characteristics. This study will bring to focus the performance of these self-shielding models, especially when there is voiding of such a tight lattice. We have also performed assembly calculations in a 2 x 2 pattern for the CANDU-NG fuel, with special emphasis on checkerboard voiding. The results obtained have been validated against the Monte Carlo codes MCNP5 and TRIPOLI-4.3. (authors)
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling of the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
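The density expansion behind this metamodeling can be sketched to second order. The empirical-parameter values below are typical literature magnitudes used purely for illustration; the paper's metamodel carries the expansion to higher order and treats the parameters as uncertain.

```python
# Second-order sketch of the EOS metamodel: energy per nucleon as a
# Taylor series in x = (n - n_sat) / (3 n_sat), with empirical
# parameters as coefficients (values are illustrative only).
n_sat = 0.16                               # saturation density (fm^-3)
E_sat, K_sat = -15.8, 230.0                # isoscalar parameters (MeV)
E_sym, L_sym, K_sym = 32.0, 60.0, -100.0   # isovector parameters (MeV)

def energy_per_nucleon(n, delta):
    """e(n, delta) = e_IS(n) + e_IV(n) * delta^2, delta = asymmetry."""
    x = (n - n_sat) / (3.0 * n_sat)
    e_is = E_sat + 0.5 * K_sat * x**2            # linear term vanishes at saturation
    e_iv = E_sym + L_sym * x + 0.5 * K_sym * x**2
    return e_is + e_iv * delta**2

# Symmetric matter (delta = 0) at saturation recovers E_sat:
print(energy_per_nucleon(n_sat, 0.0))
```

Because each coefficient is an empirical parameter with its own uncertainty, varying them independently explores density dependences beyond any single existing functional, which is the point of the metamodeling.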
Choosing the Allometric Exponent in Covariate Model Building.
Sinha, Jaydeep; Al-Sallami, Hesham S; Duffull, Stephen B
2018-04-27
Allometric scaling is often used to describe the covariate model linking total body weight (WT) to clearance (CL); however, there is no consensus on how to select its value. The aims of this study were to assess the influence of between-subject variability (BSV) and study design on (1) the power to correctly select the exponent from a priori choices, and (2) the power to obtain unbiased exponent estimates. The influence of WT distribution range (randomly sampled from the Third National Health and Nutrition Examination Survey, 1988-1994 [NHANES III] database), sample size (N = 10, 20, 50, 100, 200, 500, 1000 subjects), and BSV on CL (low 20%, normal 40%, high 60%) was assessed using stochastic simulation estimation. A priori exponent values used for the simulations were 0.67, 0.75, and 1. For normal to high BSV drugs, it is almost impossible to correctly select the exponent from an a priori set of exponents, i.e. 1 vs. 0.75, 1 vs. 0.67, or 0.75 vs. 0.67, in regular studies involving < 200 adult participants. On the other hand, such regular study designs are sufficient to appropriately estimate the exponent. However, regular studies with < 100 patients risk potential bias in estimating the exponent. Study designs with limited sample size and narrow range of WT (e.g. < 100 adult participants) risk either selection of a false value or a biased estimate of the allometric exponent; however, such bias is only relevant when extrapolating the value of CL outside the studied population, e.g. analysis of a study of adults that is used to extrapolate to children.
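The covariate model at issue is the standard allometric power law, CL = CL_std * (WT/WT_std)^theta. A sketch with invented typical values follows; the study itself concerns how reliably theta can be selected or estimated, not its particular value.

```python
# Allometric covariate model for clearance (all numbers illustrative:
# a standard weight of 70 kg and a standard clearance of 10 L/h).
def clearance(wt_kg, cl_std=10.0, wt_std=70.0, exponent=0.75):
    """CL scaled by body weight via the allometric exponent."""
    return cl_std * (wt_kg / wt_std) ** exponent

# At the standard weight the model returns the standard clearance;
# a 35 kg subject gets roughly 59% of it at exponent 0.75.
print(clearance(70.0), clearance(35.0))
```

The exponent choices compared in the study (0.67, 0.75, 1) differ most for subjects far from the standard weight, which is why bias only matters when extrapolating, e.g. from adults to children.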
Restoration of recto-verso colour documents using correlated component analysis
NASA Astrophysics Data System (ADS)
Tonazzini, Anna; Bedini, Luigi
2013-12-01
In this article, we consider the problem of removing see-through interferences from pairs of recto-verso documents acquired either in grayscale or RGB modality. The see-through effect is a typical degradation of historical and archival documents or manuscripts, and is caused by transparency or seeping of ink from the reverse side of the page. We formulate the problem as one of separating two individual texts, overlapped in the recto and verso maps of the colour channels through a linear convolutional mixing operator, where the mixing coefficients are unknown, while the blur kernels are assumed known a priori or estimated off-line. We exploit statistical techniques of blind source separation to estimate both the unknown model parameters and the ideal, uncorrupted images of the two document sides. We show that recently proposed correlated component analysis techniques improve on the already satisfactory performance of independent component analysis techniques and colour decorrelation, even when the two texts are appreciably correlated.
The GRAPE aerosol retrieval algorithm
NASA Astrophysics Data System (ADS)
Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.
2009-11-01
The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.
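The optimal-estimation step that schemes like ORAC build on can be sketched for a linear forward model: the retrieved state is the maximum a posteriori combination of the measurement and the a priori constraint. The matrices below are toy values, not ORAC's actual forward model or error budget.

```python
import numpy as np

K = np.array([[1.0, 0.5],
              [0.2, 1.0]])        # Jacobian of the forward model
S_e = np.diag([0.1, 0.1])         # measurement-noise covariance
x_a = np.array([0.0, 0.0])        # a priori state
S_a = np.diag([1.0, 1.0])         # a priori covariance
y = np.array([1.0, 0.8])          # observed radiances (toy)

# MAP retrieval:
#   x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
Si = np.linalg.inv(S_e)
S_hat = np.linalg.inv(K.T @ Si @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ Si @ (y - K @ x_a)
print(x_hat, np.diag(S_hat))
```

S_hat is the retrieval-error covariance; comparing its diagonal to that of S_a shows how much the measurement constrains each state element, which is the kind of sensitivity-to-measurement-versus-prior analysis the paper carries out.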
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
NASA Astrophysics Data System (ADS)
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels, the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Direct recovery of mean gravity anomalies from satellite to satellite tracking
NASA Technical Reports Server (NTRS)
Hajela, D. P.
1974-01-01
The direct recovery of mean gravity anomalies from summed range rate observations was investigated, the signal path being ground station to a geosynchronous relay satellite to a close satellite significantly perturbed by the short wave features of the earth's gravitational field. To ensure realistic observations, these were simulated with the nominal orbital elements for the relay satellite corresponding to ATS-6, and for two different close satellites (one at about 250 km height, and the other at about 900 km height) corresponding to the nominal values for GEOS-C. The earth's gravitational field was represented by a reference set of potential coefficients up to degree and order 12, considered as known values, and by residual gravity anomalies obtained by subtracting the anomalies, implied by the potential coefficients, from their terrestrial estimates. It was found that gravity anomalies could be recovered from strong signal without using any a priori terrestrial information, i.e. considering their initial values as zero and also assigning them a zero weight matrix. While recovering them from weak signal, it was necessary to use the a priori estimate of the standard deviation of the anomalies to form their a priori diagonal weight matrix.
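The role of the a priori weight matrix described above can be sketched as a regularized normal-equations solve: a zero prior weight reproduces the strong-signal case, while a diagonal weight built from a priori anomaly standard deviations stabilizes the weak-signal case. All matrices are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 4))             # partials of observations w.r.t. anomalies
x_true = np.array([5.0, -3.0, 2.0, 1.0]) # "true" residual anomalies (toy)
y = A @ x_true + rng.normal(scale=0.1, size=20)

def estimate(A, y, prior_sigma=None):
    """Solve (A^T A + W) x = A^T y, with W the a priori weight matrix:
    zero when no terrestrial information is used, diagonal inverse
    variances otherwise."""
    W = np.zeros((A.shape[1], A.shape[1]))
    if prior_sigma is not None:
        W = np.diag(1.0 / np.asarray(prior_sigma) ** 2)
    return np.linalg.solve(A.T @ A + W, A.T @ y)

x_free = estimate(A, y)                                  # zero a priori weight
x_constrained = estimate(A, y, prior_sigma=[10.0] * 4)   # weak diagonal prior
print(x_free, x_constrained)
```

With strong signal the two solutions nearly coincide; as the design matrix loses information (the weak-signal case), the prior weight increasingly determines the solution.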
PEITH(Θ): perfecting experiments with information theory in Python with GPU support.
Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane
2018-04-01
Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that inform us about the information content that a given experiment carries about the question we want to answer become crucial. PEITH(Θ) is a general-purpose Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory to derive which experiments are most informative for estimating all model parameters and/or performing model predictions. https://github.com/MichaelPHStumpf/Peitho. m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.
Galileo Declassified: IOV Spacecraft Metadata and Its Impact on Precise Orbit Determination
NASA Astrophysics Data System (ADS)
Dilssner, Florian; Schönemann, Erik; Springer, Tim; Flohrer, Claudia; Enderle, Werner
2017-04-01
In December 2016, shortly after the declaration of Galileo Initial Services, the European GNSS Agency (GSA) disclosed Galileo spacecraft metadata relevant to precise orbit determination (POD), such as antenna phase center parameters, dimensions of the solar panels and the main body, specularity and reflectivity coefficients for the surface materials, yaw attitude steering law, and signal group delays. The metadata relates to the first four operational Galileo satellites, known as the In-Orbit Validation (IOV) satellites, and is publicly available through the European GNSS Service Center (GSC) web site. One of the dataset's major benefits is that it includes nearly all information about the satellites' surface properties needed to develop a physically meaningful analytical solar radiation pressure (SRP) macro model, or "box-wing" (BW) model. Such a BW model for the IOV spacecraft has now been generated for use in NAPEOS, the European Space Operation Centre's (ESOC's) main geodetic software package for POD. The model represents the satellite as a simple six-sided box with two attached panels, or "wings", and allows for the a priori computation of the direct and indirect (Earth albedo) SRP force. Further valuable parameters of the metadata set are the IOV navigation antenna (NAVANT) phase center offsets (PCOs) and variations (PCVs) inferred from pre-launch anechoic chamber measurements. In this work, we report on the validation of the Galileo IOV metadata and its impact on POD, an activity ESOC has been deeply committed to since the launch of the first Galileo experimental satellite, GIOVE-A, in 2005. We first reanalyze the full history of Galileo tracking data the global International GNSS Service (IGS) network has collected since 2012. We generate orbit and clock solutions based on the widely used Empirical CODE Orbit Model (ECOM) with and without the IOV a priori BW model. 
For the satellite antennas, we apply the new as well as the standard IGS-recommended phase center corrections ("igs08.atx"). Results are evaluated according to several internal and external metrics, such as carrier phase residuals, satellite laser ranging (SLR) data, satellite clock residuals, day-to-day orbit overlap differences, and narrow-lane (NL) double differences as a measure of the quality of the unresolved phase ambiguity estimates. We demonstrate that the use of the new IOV BW and antenna models brings substantial improvements over the standard approach without the a priori model and with igs08.atx. Particularly striking here is the reduction of the SLR residual RMS by a factor of two to three as well as the five-to-ten-times-tighter distribution of the NL residuals - an important aspect for standard NL integer ambiguity resolution. During eclipse season, when the sun's elevation angle is small, the combination of the standard ECOM with the BW model even outperforms the enhanced ECOM ("ECOM2"). Moreover, we elaborate on the Galileo IOV yaw attitude scheme and evaluate noon- and midnight-turn maneuvers by way of reverse point positioning (RPP). The RPP technique takes advantage of the approximately 17 cm horizontal offset of the IOV NAVANT from the spacecraft's yaw axis to estimate the yaw angle. Finally, we estimate the NAVANT's PCO and PCV parameters by utilizing multiple years of IGS tracking data and compare them against the chamber calibration values.
NASA Astrophysics Data System (ADS)
Naderi, E.; Khorasani, K.
2018-02-01
In this work, a data-driven fault detection, isolation, and estimation (FDI&E) methodology is proposed and developed specifically for monitoring the aircraft gas turbine engine actuator and sensors. The proposed FDI&E filters are directly constructed by using only the available system I/O data at each operating point of the engine. The healthy gas turbine engine is stimulated by a sinusoidal input containing a limited number of frequencies. First, the associated system Markov parameters are estimated by using the FFT of the input and output signals to obtain the frequency response of the gas turbine engine. These data are then used for direct design and realization of the fault detection, isolation and estimation filters. Our proposed scheme therefore does not require any a priori knowledge of the system linear model or its number of poles and zeros at each operating point. We have investigated the effects of the size of the frequency response data on the performance of our proposed schemes. We have shown through comprehensive case-study simulations that desirable fault detection, isolation and estimation performance metrics, defined in terms of the confusion matrix criterion, can be achieved by having access to the frequency response of the system at only a limited number of frequencies.
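The first step of the scheme, estimating the frequency response from the FFT of the input and output under multi-sine excitation, can be sketched on an invented first-order plant; the filter design built on top of that response is not shown.

```python
import numpy as np

# Multi-sine excitation at a few exact FFT-bin frequencies.
fs, T = 100.0, 10.0                      # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
freqs = [1.0, 3.0, 7.0]                  # excitation frequencies (Hz)
u = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Simulate a known discrete first-order plant y[k+1] = a y[k] + b u[k].
a, b = 0.9, 0.1
y = np.zeros_like(u)
for k in range(len(u) - 1):
    y[k + 1] = a * y[k] + b * u[k]

# Empirical frequency response at the excited bins: H = FFT(y)/FFT(u).
U, Y = np.fft.rfft(u), np.fft.rfft(y)
bins = [int(f * T) for f in freqs]       # bin index = frequency * record length
H_est = Y[bins] / U[bins]

# True response of the plant, H(z) = b / (z - a), for comparison.
z = np.exp(2j * np.pi * np.array(freqs) / fs)
H_true = b / (z - a)
print(np.abs(H_est - H_true))
```

Because the excitation sits exactly on FFT bins there is no spectral leakage from the steady-state response; the small residual error comes from the start-up transient, which shrinks as the record length grows.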
Compensatory effects of recruitment and survival when amphibian populations are perturbed by disease
Muths, E.; Scherer, R. D.; Pilliod, D.S.
2011-01-01
The need to increase our understanding of factors that regulate animal population dynamics has been catalysed by recent, observed declines in wildlife populations worldwide. Reliable estimates of demographic parameters are critical for addressing basic and applied ecological questions and understanding the response of parameters to perturbations (e.g. disease, habitat loss, climate change). However, to fully assess the impact of perturbation on population dynamics, all parameters contributing to the response of the target population must be estimated. We applied the reverse-time model of Pradel in Program MARK to 6 years of capture-recapture data from two Anaxyrus boreas (boreal toad) populations, one with disease and one without. We then assessed a priori hypotheses about differences in survival and recruitment relative to local environmental conditions and the presence of disease. We further explored the relative contribution of survival probability and recruitment rate to population growth and investigated how shifts in these parameters can alter population dynamics when a population is perturbed. High recruitment rates (0.41) are probably compensating for low survival probability (range 0.51-0.54) in the population challenged by an emerging pathogen, resulting in a relatively slow rate of decline. In contrast, the population with no evidence of disease had high survival probability (range 0.75-0.78) but lower recruitment rates (0.25). Synthesis and applications. We suggest that the relationship between survival and recruitment may be compensatory, providing evidence that populations challenged with disease are not necessarily doomed to extinction. A better understanding of these interactions may help to explain, and be used to predict, population regulation and persistence for wildlife threatened with disease.
Further, reliable estimates of population parameters such as recruitment and survival can guide the formulation and implementation of conservation actions such as repatriations or habitat management aimed to improve recruitment. © 2011 The Authors. Journal of Applied Ecology © 2011 British Ecological Society.
Fienen, Michael N.; Doherty, John E.; Hunt, Randall J.; Reeves, Howard W.
2010-01-01
The importance of monitoring networks for resource-management decisions is becoming more recognized, in both theory and application. Quantitative computer models provide a science-based framework to evaluate the efficacy and efficiency of existing and possible future monitoring networks. In the study described herein, two suites of tools were used to evaluate the worth of new data for specific predictions, which in turn can support efficient use of resources needed to construct a monitoring network. The approach evaluates the uncertainty of a model prediction and, by using linear propagation of uncertainty, estimates how much uncertainty could be reduced if the model were calibrated with additional information (increased a priori knowledge of parameter values or new observations). The theoretical underpinnings of the two suites of tools addressing this technique are compared, and their application to a hypothetical model based on a local model inset into the Great Lakes Water Availability Pilot model is described. Results show that meaningful guidance for monitoring network design can be obtained by using the methods explored. The validity of this guidance depends substantially on the parameterization as well; hence, parameterization must be considered not only when designing the parameter-estimation paradigm but also, importantly, when designing the prediction-uncertainty paradigm.
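The linear propagation of uncertainty described above can be sketched for a toy two-parameter model: compute the prediction variance from the a priori parameter covariance, then recompute it after a Kalman-style covariance update for one candidate observation. All numbers are invented; the cited tool suites implement this far more generally.

```python
import numpy as np

C_p = np.diag([1.0, 4.0])      # a priori parameter covariance
s = np.array([2.0, 1.0])       # sensitivity of the prediction to parameters
var_before = s @ C_p @ s       # prior prediction variance

# Candidate observation with sensitivity h and noise variance r:
h = np.array([1.0, 0.5])
r = 0.25

# Bayesian (Kalman-style) covariance update for one observation.
gain = C_p @ h / (h @ C_p @ h + r)
C_post = C_p - np.outer(gain, h) @ C_p
var_after = s @ C_post @ s     # prediction variance if the data were collected

print(var_before, var_after)
```

The drop from var_before to var_after is the "worth" of the candidate observation for this prediction; ranking candidate observations by this drop is the essence of the data-worth network design described in the abstract.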
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.
NASA Astrophysics Data System (ADS)
Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas F.; Heimann, Martin
2018-03-01
Optimized biogenic carbon fluxes for Europe were estimated from high-resolution regional-scale inversions, utilizing atmospheric CO2 measurements at 16 stations for the year 2007. Additional sensitivity tests with different data-driven error structures were performed. As the atmospheric network is rather sparse and consequently contains large spatial gaps, we use a priori biospheric fluxes to further constrain the inversions. The biospheric fluxes were simulated by the Vegetation Photosynthesis and Respiration Model (VPRM) at a resolution of 0.1° and optimized against eddy covariance data. Overall we estimate an a priori uncertainty of 0.54 GtC yr-1 related to the poor spatial representation between the biospheric model and the ecosystem sites. The sink estimated from the atmospheric inversions for the area of Europe (as represented in the model domain) ranges between 0.23 and 0.38 GtC yr-1 (0.39 and 0.71 GtC yr-1 up-scaled to geographical Europe). This is within the range of posterior flux uncertainty estimates of previous studies using ground-based observations.
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, consisting of four conductivity regions (skeletal muscle, heart, right lung, and left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
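The two estimators can be contrasted in a deliberately simplified scalar form. The study's implementations operate on full matrices from the boundary element model; the sensitivities, measurements, and statistics below are illustrative only.

```python
# Sketch (illustrative numbers) of the two estimators compared in the study:
# an unconstrained least-square-error estimate vs. a statistically constrained
# minimum-mean-square-error estimate of a resistivity from noisy surface data.

def lsee(a, b):
    """Least-square-error estimate of x from measurements b = a*x + noise."""
    num = sum(ai * bi for ai, bi in zip(a, b))
    den = sum(ai * ai for ai in a)
    return num / den

def mimsee(a, b, noise_var, x_mean, x_var):
    """MMSE estimate constrained by a priori signal statistics (x_mean, x_var)
    and a priori noise variance; shrinks toward x_mean when data are noisy."""
    info = 1.0 / x_var + sum(ai * ai for ai in a) / noise_var
    rhs = x_mean / x_var + sum(ai * bi for ai, bi in zip(a, b)) / noise_var
    return rhs / info

a = [1.0, 0.5]               # sensitivity of surface potentials to a resistivity
b = [2.4, 0.9]               # noisy surface potential measurements
print(lsee(a, b))                                            # unconstrained
print(mimsee(a, b, noise_var=1.0, x_mean=2.0, x_var=0.25))   # constrained
```

The constrained estimate is pulled toward the a priori physiological mean, which is what makes the MiMSEE more robust to measurement noise than the LSEE.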
NASA Astrophysics Data System (ADS)
Sege, J.; Li, Y.; Chang, C. F.; Chen, J.; Chen, Z.; Rubin, Y.; Li, X.; Hehua, Z.; Wang, C.; Osorio-Murillo, C. A.
2015-12-01
This study will develop a numerical model to characterize the perturbation of local groundwater systems by underground tunnel construction. Tunnels and other underground spaces act as conduits that remove water from the surrounding aquifer, and may lead to drawdown of the water table. Significant declines in water table elevation can cause environmental impacts by altering root zone soil moisture and changing inflows to surface waters. Currently, it is common to use analytical solutions to estimate groundwater fluxes through tunnel walls. However, these solutions often neglect spatial and temporal heterogeneity in aquifer parameters and system stresses. Some heterogeneous parameters, such as fracture densities, can significantly affect tunnel inflows. This study will focus on numerical approaches that incorporate heterogeneity across a range of scales. Time-dependent simulations will be undertaken to compute drawdown at various stages of excavation, and to model water table recovery after low-conductivity liners are applied to the tunnel walls. This approach will assist planners in anticipating environmental impacts to local surface waters and vegetation, and in computing the amount of tunnel inflow reduction required to meet environmental targets. The authors will also focus on managing uncertainty in model parameters. For greater planning applicability, extremes of a priori parameter ranges will be explored in order to anticipate best- and worst-case scenarios. For calibration and verification purposes, the model will be applied to a completed tunnel project in Mount Mingtang, China, where tunnel inflows were recorded throughout the construction process.
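The abstract contrasts its numerical model with analytical inflow solutions. One widely used analytical estimate, often attributed to Goodman, is sketched below; it is not necessarily the solution used in this study, and all parameter values are illustrative.

```python
import math

# A common analytical estimate for steady groundwater inflow per unit length
# of a tunnel below the water table (often attributed to Goodman):
#   Q = 2*pi*K*h / ln(2h/r)
# K: hydraulic conductivity, h: head above tunnel axis, r: tunnel radius.
# Such solutions assume a homogeneous aquifer, which is exactly the
# simplification the numerical model described above aims to relax.

def tunnel_inflow(K, h, r):
    return 2.0 * math.pi * K * h / math.log(2.0 * h / r)

# Best-/worst-case screening across an a priori conductivity range (m/s),
# mirroring the scenario exploration described in the abstract
for K in (1e-7, 1e-6, 1e-5):
    print(f"K={K:.0e}: Q={tunnel_inflow(K, h=50.0, r=5.0):.2e} m^2/s")
```

Because Q scales linearly with K, a two-order-of-magnitude a priori range in conductivity spans a two-order-of-magnitude range in predicted inflow, motivating the best- and worst-case analysis the authors propose.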
Parameterization of spectral baseline directly from short echo time full spectra in 1 H-MRS.
Lee, Hyeong Hun; Kim, Hyeonjin
2017-09-01
To investigate the feasibility of parameterizing macromolecule (MM) resonances directly from short echo time (TE) spectra rather than pre-acquired, T1-weighted, metabolite-nulled spectra in 1H-MRS. Initial line parameters for metabolites and MMs were set for rat brain spectra acquired at 9.4 Tesla based on a priori knowledge. Then, MM line parameters were optimized over several steps with fixed metabolite line parameters. The proposed method was tested by estimating metabolite T1. The results were compared with those obtained with two existing methods. Furthermore, subject-specific, spin density-weighted, MM model spectra were generated according to the MM line parameters from the proposed method for metabolite quantification. The results were compared with those obtained with subject-specific, T1-weighted, metabolite-nulled spectra. The metabolite T1 values were largely in close agreement among the three methods. The spin density-weighted MM resonances from the proposed method were in good agreement with the T1-weighted, metabolite-nulled spectra except for the MM resonance at ∼3.2 ppm. The metabolite concentrations estimated by incorporating these two different spectral baselines were also in good agreement except for several metabolites with resonances at ∼3.2 ppm. The MM parameterization directly from short-TE spectra is feasible. Further development of the method may allow for better representation of the spectral baseline with negligible T1-weighting. Magn Reson Med 78:836-847, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Informational Entropy and Bridge Scour Estimation under Complex Hydraulic Scenarios
NASA Astrophysics Data System (ADS)
Pizarro, Alonso; Link, Oscar; Fiorentino, Mauro; Samela, Caterina; Manfreda, Salvatore
2017-04-01
Bridges are important for society because they allow social, cultural and economic connectivity. Flood events can compromise the safety of bridge piers, up to complete collapse. The bridge scour phenomenon has been described by empirical formulae deduced from hydraulic laboratory experiments. The range of applicability of such models is restricted by the specific hydraulic conditions or flume geometry used for their derivation (e.g., water depth, mean flow velocity, pier diameter and sediment properties). We seek to identify a general formulation able to capture the main dynamics of the process over a wide range of hydraulic and geometric configurations, allowing us to extend the analysis to different contexts. Therefore, exploiting the Principle of Maximum Entropy (POME) and applying it to the recently proposed dimensionless effective flow work, W*, we derived a simple model characterized by only one parameter. The proposed Bridge Scour Entropic (BRISENT) model shows good performance under complex hydraulic conditions as well as under steady-state flow, capturing the evolution of scour in several hydraulic configurations despite having a single parameter. Furthermore, results show that the model parameter is controlled by the geometric configuration of the experiment, which offers a possible strategy for a priori calibration of the model parameter. The BRISENT model is therefore a good candidate for estimating time-dependent scour depth under complex hydraulic scenarios. The authors are keen to apply this idea to describing scour behavior during a real flood event. Keywords: Informational entropy, Sediment transport, Bridge pier scour, Effective flow work.
Investigation of risk factors for mortality in aged guide dogs: A retrospective cohort study.
Hoummady, S; Hua, J; Muller, C; Pouchelon, J L; Blondot, M; Gilbert, C; Desquilbet, L
2016-09-15
The overall median lifespan of domestic dogs has been estimated at 9-12 years, but little is known about risk factors for mortality in aged and a priori healthy dogs. The objective of this retrospective cohort study was to determine which characteristics are associated with mortality in aged and a priori healthy guide dogs; 116 guide dogs were followed from a systematic geriatric examination at the age of 8-10 years. Clinical data were collected with a geriatric grid, and the usual biological parameters were measured at the time of examination. Univariate (Kaplan-Meier estimates) and multivariable (Cox proportional hazards model) survival analyses were used to assess associations with time to all-cause death. The majority of dogs were Golden Retrievers (n=48) and Labrador Retrievers (n=27). Median age at geriatric examination was 8.9 years. A total of 76 dogs died during follow-up, leading to a median survival time from geriatric examination of 4.4 years. After adjustment for demographic and biological variables, an increased alanine aminotransferase level (adjusted hazard ratio (adjusted HR), 6.2; 95% confidence interval [95%CI], 2.0-19.0; P<0.01), presenting skin nodules (adjusted HR, 1.9; 95%CI, 1.0-3.4; P=0.04), and not being a Labrador Retriever (adjusted HR, 3.3; 95%CI, 1.4-10; P<0.01) were independently associated with a shorter time to death. This study documents independent associations of alanine aminotransferase level, skin nodules and breed with mortality in aged guide dogs. These results may be useful for preventive medical care when conducting a geriatric examination in working dogs. Copyright © 2016 Elsevier B.V. All rights reserved.
Quantitative Rheological Model Selection
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2014-11-01
The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax, a viscoelastic liquid. We also quantify the merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax and for gluten.
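The fit-versus-complexity trade-off above can be illustrated with a crude stand-in. The paper's Bayesian evidence also penalizes the a priori *range* of each parameter; the BIC used here, on synthetic data, captures only the parameter-count part of that penalty.

```python
import math

# Crude stand-in for the model-selection idea: reward fit quality but
# penalize parameter count. Data are synthetic, nearly y = 2x.

def bic(rss, n_data, n_params):
    """Bayesian information criterion for Gaussian residuals (lower is better)."""
    return n_data * math.log(rss / n_data) + n_params * math.log(n_data)

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.9, 4.1, 5.9, 8.1]

# model 1: constant (1 parameter)
ybar = sum(y) / len(y)
rss_const = sum((yi - ybar) ** 2 for yi in y)

# model 2: straight line (2 parameters), closed-form least squares
xbar = sum(x) / len(x)
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
rss_line = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

bic_const = bic(rss_const, len(y), 1)
bic_line = bic(rss_line, len(y), 2)
print(bic_const > bic_line)   # True: here the extra parameter is justified
```

A full evidence calculation would additionally shrink the "prize" for the second parameter in proportion to how wide its prior range is, which is the physically-grounded-versus-empirical distinction the abstract draws.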
An a priori solar radiation pressure model for the QZSS Michibiki satellite
NASA Astrophysics Data System (ADS)
Zhao, Qile; Chen, Guo; Guo, Jing; Liu, Jingnan; Liu, Xianglin
2018-02-01
It has been noted that the satellite laser ranging (SLR) residuals of the Quasi-Zenith Satellite System (QZSS) Michibiki satellite orbits show very marked dependence on the elevation angle of the Sun above the orbital plane (i.e., the β angle). It is well recognized that this systematic error is caused by mismodeling of the solar radiation pressure (SRP). Although the error can be reduced by the updated ECOM SRP model, the orbit error is still very large when the satellite switches to orbit-normal (ON) orientation. In this study, an a priori SRP model was established for the QZSS Michibiki satellite to enhance the ECOM model. This model is expressed in ECOM's D, Y, and B axes (DYB) using seven parameters for the yaw-steering (YS) mode, and three additional parameters are used to compensate for the remaining modeling deficiencies, particularly the perturbations in the Y axis, based on a redefined DYB for the ON mode. With the proposed a priori model, QZSS Michibiki's precise orbits over 21 months were determined. SLR validation indicated that the systematic β-angle-dependent error was reduced when the satellite was in the YS mode, with a root mean square (RMS) better than 8 cm. More importantly, the orbit quality was also improved significantly when the satellite was in the ON mode. Relative to the ECOM and adjustable box-wing models, the proposed SRP model showed the best performance in the ON mode, with an RMS of the SLR residuals better than 15 cm: a twofold improvement over ECOM without the a priori model, though still twice as large as in the YS mode.
Troeller, A; Soehn, M; Yan, D
2012-06-01
Introducing an extended, phenomenological, generalized equivalent uniform dose (eEUD) that incorporates multiple volume-effect parameters for different dose ranges. The generalized EUD (gEUD) was introduced as an estimate of the EUD that incorporates a single, tissue-specific parameter, the volume-effect parameter (VEP) 'a'. As a purely phenomenological concept, its radio-biological equivalency to a given inhomogeneous dose distribution is not a priori clear, and mechanistic models based on radio-biological parameters are assumed to better resemble the underlying biology. However, for normal organs mechanistic models are hard to derive, since the structural organization of the tissue plays a significant role. Consequently, phenomenological approaches might be especially useful for describing dose-response in normal tissues. However, the single parameter used to estimate the gEUD may not suffice to accurately represent more complex biological effects that have been discussed in the literature. For instance, radio-biological parameters, and hence the effects of fractionation, are known to be dose-range dependent. Therefore, we propose an extended phenomenological eEUD formula that incorporates multiple VEPs accounting for dose-range dependency. The eEUD introduced is a piecewise polynomial expansion of the gEUD formula. In general, it allows for an arbitrary number of VEPs, each valid for a certain dose range. We proved that the formula fulfills required mathematical and physical criteria such as invertibility of the underlying dose-effect relationship and continuity in dose. Furthermore, it contains the gEUD as a special case if all VEPs are equal to 'a' from the gEUD model. The eEUD is a concept that expands the gEUD such that it can theoretically represent dose-range dependent effects. Its practicality, however, remains to be shown. 
As a next step, this will be done by estimating the eEUD from patient data using maximum-likelihood based NTCP modelling in the same way it is commonly done for the gEUD. © 2012 American Association of Physicists in Medicine.
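The gEUD that the eEUD generalizes has the standard closed form gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a), with vᵢ the fractional volume receiving dose dᵢ. A minimal sketch follows; the dose-volume histogram values are invented, and the paper's piecewise eEUD expansion is not reproduced here.

```python
# Standard gEUD: gEUD = ( sum_i v_i * d_i^a )^(1/a)
# v_i: fractional volume of DVH bin i, d_i: dose in bin i (Gy),
# 'a': the single volume-effect parameter that the eEUD makes dose-range
# dependent. DVH numbers below are illustrative.

def geud(doses, volumes, a):
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

doses = [10.0, 30.0, 60.0]        # Gy, per DVH bin
volumes = [0.5, 0.3, 0.2]         # fractional volumes, summing to 1

print(round(geud(doses, volumes, a=1.0), 2))   # a=1 recovers the mean dose: 26.0
print(round(geud(doses, volumes, a=8.0), 2))   # large a -> dominated by max dose
```

Small 'a' describes parallel-like organs (mean dose matters); large 'a' describes serial-like organs (hot spots matter), which is exactly the behavior a single VEP may fail to capture across different dose ranges.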
Earth's Outer Core Properties Estimated Using Bayesian Inversion of Normal Mode Eigenfrequencies
NASA Astrophysics Data System (ADS)
Irving, J. C. E.; Cottaar, S.; Lekic, V.
2016-12-01
The outer core is arguably Earth's most dynamic region, and consists of an iron-nickel liquid with an unknown combination of lighter alloying elements. Frequencies of Earth's normal modes provide the strongest constraints on the radial profiles of compressional wavespeed, VΦ, and density, ρ, in the outer core. Recent great earthquakes have yielded new normal mode measurements; however, mineral physics experiments and calculations are often compared to the Preliminary reference Earth model (PREM), which is 35 years old and does not provide uncertainties. Here we investigate the thermo-elastic properties of the outer core using Earth's free oscillations and a Bayesian framework. To estimate radial structure of the outer core and its uncertainties, we choose to exploit recent datasets of normal mode centre frequencies. Under the self-coupling approximation, centre frequencies are unaffected by lateral heterogeneities in the Earth, for example in the mantle. Normal modes are sensitive to both VΦ and ρ in the outer core, with each mode's specific sensitivity depending on its eigenfunctions. We include a priori bounds on outer core models that ensure compatibility with measurements of mass and moment of inertia. We use Bayesian Markov chain Monte Carlo techniques to explore different choices in parameterizing the outer core, each of which represents different a priori constraints. We test how results vary (1) assuming a smooth polynomial parametrization, (2) allowing for structure close to the outer core's boundaries, and (3) assuming an equation of state and adiabaticity and inverting directly for thermo-elastic parameters. In the second approach we recognize that the outer core may have distinct regions close to the core-mantle and inner core boundaries and investigate models which parameterize the well-mixed outer core separately from these two layers. 
In the last approach we seek to map the uncertainties directly into thermo-elastic parameters including the bulk modulus, its pressure derivative, and molar mass and volume, with particular attention paid to the (inherent) trade-offs between the different coefficients. We discuss our results in terms of added uncertainty to the light element composition of the outer core and the potential existence of anomalous structure near the outer core's boundaries.
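The sampling machinery behind such an inversion can be sketched with a toy Metropolis sampler: a single "wavespeed-like" parameter with a hard a priori bound (standing in for the mass and moment-of-inertia constraints) and a Gaussian data misfit. The target, bounds, and noise level are all invented for illustration.

```python
import math
import random

# Toy Metropolis sampler in the spirit of the Bayesian MCMC approach above.
# Proposals outside the a priori bound are always rejected.

random.seed(0)

def log_posterior(v):
    if not (8.0 <= v <= 11.0):               # hard a priori bound
        return float("-inf")
    return -0.5 * ((v - 9.5) / 0.3) ** 2      # Gaussian misfit around "data"

samples, v = [], 9.0                          # start inside the bound
for _ in range(20000):
    prop = v + random.gauss(0.0, 0.2)         # random-walk proposal
    delta = log_posterior(prop) - log_posterior(v)
    if random.random() < math.exp(min(0.0, delta)):
        v = prop                              # Metropolis accept
    samples.append(v)

burned = samples[5000:]                       # discard burn-in
mean = sum(burned) / len(burned)
print(f"posterior mean ~ {mean:.2f}")
```

The posterior histogram of `burned`, not just its mean, is what supplies the uncertainty estimates that PREM lacks.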
First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.
2013-08-01
We present an orbit determination method based on genetic algorithms. Contrary to usual estimation methods, which are mainly based on least squares, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. They can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained for an SLR satellite, for which tracking data acquired by the ILRS network make it possible to build accurate orbital arcs at the few-centimeter level, which can be used as a reference orbit; in this case, the basic observations are time series of ranges obtained from various tracking stations. We also show the results obtained from observations of the Telecom-2D satellite, operated by CNES, acquired by the two TAROT telescopes; in that case, the observations are time series of azimuths and elevations seen from the two TAROT telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, and (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial Keplerian element, is searched within an interval chosen beforehand. The algorithm is expected to converge towards an optimum within a reasonable computational time.
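The initialization-and-evolution loop described above can be sketched for a single parameter. This is a generic genetic-algorithm skeleton on synthetic data, not the authors' estimation kernel: the "orbital element," search interval, and noise model are all invented for illustration.

```python
import random

# Minimal genetic-algorithm sketch: recover one "orbital element" from noisy
# synthetic observations, searching within a preliminarily chosen interval
# and requiring no a priori estimate of the value itself.

random.seed(1)
TRUE_VALUE = 7000.0                       # e.g. a semi-major axis, km
observations = [TRUE_VALUE + random.gauss(0.0, 1.0) for _ in range(50)]

def fitness(candidate):                   # negative sum of squared residuals
    return -sum((o - candidate) ** 2 for o in observations)

# initialization: population drawn from the a priori search interval
population = [random.uniform(6500.0, 7500.0) for _ in range(40)]

for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # selection of the fittest
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        # crossover (averaging) plus mutation that shrinks over generations
        children.append(0.5 * (a + b) + random.gauss(0.0, 5.0 / (generation + 1)))
    population = parents + children

best = max(population, key=fitness)
print(f"estimated element: {best:.1f} km")
```

In the real problem the fitness would be computed by analytically propagating each candidate state vector and comparing predicted ranges (or azimuth/elevation series) to the tracking data.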
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "downstream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix, which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
NASA Astrophysics Data System (ADS)
Hansen, S. K.; Haslauer, C. P.; Cirpka, O. A.; Vesselinov, V. V.
2016-12-01
It is desirable to predict the shape of breakthrough curves downgradient of a solute source from subsurface structural parameters (as in the small-perturbation macrodispersion theory) both for realistically heterogeneous fields, and at early time, before any sort of Fickian model is applicable. Using a combination of a priori knowledge, large-scale Monte Carlo simulation, and regression techniques, we have developed closed-form predictive expressions for pre- and post-Fickian flux-weighted solute breakthrough curves as a function of distance from the source (in integral scales) and variance of the log hydraulic conductivity field. Using the ensemble of Monte Carlo realizations, we have simultaneously computed error envelopes for the estimated flux-weighted breakthrough, and for the divergence of point breakthrough curves from the flux-weighted average, as functions of the predictive parameters. We have also obtained implied late-time macrodispersion coefficients for highly heterogeneous environments from the breakthrough statistics. This analysis is relevant for the modelling of reactive as well as conservative transport, since for many kinetic sorption and decay reactions, Laplace-domain modification of the breakthrough curve for conservative solute produces the correct curve for the reactive system.
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6 h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage-specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm, 6.33 h/day, gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
Estimation and Application of Ecological Memory Functions in Time and Space
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Dawson, A.
2017-12-01
A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological processes.
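The idea of a memory function built from basis functions can be sketched directly. In the paper the basis coefficients are estimated hierarchically from data; here they are fixed, illustrative values, as are the chosen exponential basis and the water-availability series.

```python
import math

# Sketch of an ecological memory function: weights over past years built from
# a small set of basis functions, then applied to lagged covariates.

def memory_weights(coeffs, n_lags):
    """Weights w(lag) = sum_k c_k * B_k(lag), normalized to sum to 1.
    Basis: exponential decays with different time scales (illustrative)."""
    scales = [1.0, 3.0, 8.0]
    raw = [sum(c * math.exp(-lag / s) for c, s in zip(coeffs, scales))
           for lag in range(n_lags)]
    total = sum(raw)
    return [r / total for r in raw]

weights = memory_weights([0.2, 1.0, 0.3], n_lags=10)
past_water = [0.8, 1.2, 0.9, 1.1, 1.0, 0.7, 1.3, 1.0, 0.9, 1.1]  # lag 0..9

# weighted "memory" of past water availability driving this year's growth
memory_term = sum(w * x for w, x in zip(weights, past_water))
print(round(sum(weights), 6), round(memory_term, 3))
```

With only three coefficients the weight profile over ten lags is flexible yet low-dimensional, which is what keeps the hierarchical estimation tractable; posterior draws of the coefficients would propagate directly into uncertainty on the weights.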
Rational Design of Molecular Gelator - Solvent Systems Guided by Solubility Parameters
NASA Astrophysics Data System (ADS)
Lan, Yaqi
Self-assembled architectures, such as molecular gels, have attracted wide interest among chemists, physicists and engineers during the past decade. However, the mechanism behind self-assembly remains largely unknown, and no capability exists to predict a priori whether a small molecule will gelate a specific solvent or not. The process of self-assembly in molecular gels is intricate and must balance parameters influencing solubility against the contrasting forces that govern epitaxial growth into axially symmetric elongated aggregates. Although the gelator-gelator interactions are of paramount importance in understanding gelation, the solvent-gelator specific (i.e., H-bonding) and nonspecific (dipole-dipole, dipole-induced and instantaneous dipole induced forces) intermolecular interactions are equally important. Solvent properties mediate the self-assembly of molecular gelators into their self-assembled fibrillar networks. Herein, solubility parameters of solvents, ranging from partition coefficients (logP), to Henry's law constants (HLC), to solvatochromic ET(30) parameters, to Kamlet-Taft parameters (β, α and π), to Hansen solubility parameters (δp, δd, δh), etc., are correlated with the gelation ability of numerous classes of molecular gelators. Advanced solvent clustering techniques have led to the development of a priori tools that can identify the solvents that will be gelled and not gelled by molecular gelators. These tools will greatly aid in the development of novel gelators without solely relying on serendipitous discoveries.
W-phase estimation of first-order rupture distribution for megathrust earthquakes
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern for large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. 
The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Also, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could obtain solutions in less than one hour following the origin time.
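The regularization strategy described above, Tikhonov damping with the parameter chosen by a discrepancy-principle grid search, can be sketched generically. The toy forward operator, noise level, and "slip patch" below are illustrative stand-ins, not the paper's W-phase Green's functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear inverse problem d = G m + noise (G and d stand in for
# the W-phase Green's functions and waveforms; all values are illustrative).
n_data, n_model = 80, 40
G = rng.standard_normal((n_data, n_model))
m_true = np.zeros(n_model)
m_true[10:15] = 1.0                      # compact "slip patch"
sigma = 0.1                              # assumed known noise level
d = G @ m_true + sigma * rng.standard_normal(n_data)

def tikhonov(G, d, lam):
    """Damped least squares: minimize ||G m - d||^2 + lam^2 ||m||^2."""
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(G.shape[1]), G.T @ d)

# Discrepancy principle by grid search: pick the lambda whose residual norm
# best matches the expected noise level of the data.
target = sigma * np.sqrt(n_data)
lams = np.logspace(-3, 2, 60)
residuals = np.array([np.linalg.norm(G @ tikhonov(G, d, lam) - d)
                      for lam in lams])
lam_star = lams[np.argmin(np.abs(residuals - target))]
m_est = tikhonov(G, d, lam_star)
print(f"chosen lambda = {lam_star:.3g}")
```

The grid search trades resolution against stability: too small a lambda fits noise, too large a lambda oversmooths; the discrepancy principle stops at a residual consistent with the noise.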
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned launch in 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficients necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. From the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water-surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing water-surface error above 10 cm leads to a correspondingly sharper increase of errors in bathymetry and roughness.
Increasing slope error above 1.5 cm/km leads to a significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. Above two experiments are performed based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
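The idea of inferring unobserved bathymetry and roughness from height, slope, and width measurements can be sketched with a random-walk Metropolis sampler around Manning's equation. The synthetic reach values, error levels, and priors below are all invented for illustration; this is not the study's algorithm or data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Manning's equation for a wide rectangular channel:
# Q = (1/n) A^(5/3) W^(-2/3) S^(1/2), with A = A0 + W*dh, where the
# unobserved baseline area A0 and roughness n are the unknowns.
W, S = 120.0, 2e-4                       # width (m), slope (-), illustrative
dh = np.array([0.5, 1.0, 1.5, 2.0])      # observed water-surface anomalies (m)
A0_true, n_true = 400.0, 0.03

def manning_q(A0, n):
    A = A0 + W * dh
    return (1.0 / n) * A**(5/3) * W**(-2/3) * np.sqrt(S)

Q_obs = manning_q(A0_true, n_true) * (1 + 0.02 * rng.standard_normal(dh.size))

def log_post(theta):
    A0, n = theta
    if A0 <= 0 or not (0.01 < n < 0.1):
        return -np.inf                    # flat priors on physical ranges
    resid = (Q_obs - manning_q(A0, n)) / (0.05 * Q_obs)
    return -0.5 * np.sum(resid**2)

# Random-walk Metropolis over (A0, n)
theta = np.array([300.0, 0.05])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.standard_normal(2) * np.array([10.0, 0.002])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples)[5000:]        # discard burn-in
print("posterior mean A0, n:", samples.mean(axis=0))
```

Because A0 and n trade off against each other in Manning's equation, the posterior is typically elongated along a ridge, which is exactly why measurement errors in height, slope, and width propagate so strongly into the discharge estimate.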
Geometric estimation of intestinal contraction for motion tracking of video capsule endoscope
NASA Astrophysics Data System (ADS)
Mi, Liang; Bao, Guanqun; Pahlavan, Kaveh
2014-03-01
Wireless video capsule endoscope (VCE) provides a noninvasive method to examine the entire gastrointestinal (GI) tract, especially the small intestine, where other endoscopic instruments can barely reach. VCE is able to continuously provide clear pictures at short fixed intervals, and as such researchers have attempted to use image processing methods to track the video capsule in order to locate abnormalities inside the GI tract. To correctly estimate the speed of the motion of the endoscope capsule, the radius of the intestinal tract must be known a priori. Physiological factors such as intestinal contraction, however, dynamically change the radius of the small intestine, which can introduce large errors in speed estimation. In this paper, we aim to estimate the radius of the contracted intestinal tract. First a geometric model is presented for estimating the radius of the small intestine based on the black hole in endoscopic images. To validate the proposed model, a 3-dimensional virtual testbed that emulates intestinal contraction is then introduced in detail. After measuring the size of the black holes in the test images, we used our model to estimate the radius of the contracted intestinal tract. Comparison between the analytical results and the emulation model parameters verified that the proposed method can precisely estimate the radius of the contracted small intestine based on endoscopic images.
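A minimal pinhole-camera sketch shows how a lumen radius might be recovered from the apparent size of the dark "black hole" in an endoscopic image. The focal length, pixel size, depth, and measured radius below are illustrative assumptions, not the paper's geometric model or calibration values:

```python
# Pinhole projection: an object of radius r at distance Z images with radius
# r_img = f * r / Z, so r = r_img * Z / f. All numbers are hypothetical.
def lumen_radius_mm(r_img_px, px_size_mm, focal_mm, depth_mm):
    """Invert the pinhole projection to recover the physical lumen radius."""
    r_img_mm = r_img_px * px_size_mm      # measured image radius in mm
    return r_img_mm * depth_mm / focal_mm

r = lumen_radius_mm(r_img_px=40, px_size_mm=0.005, focal_mm=2.0, depth_mm=30.0)
print(f"estimated lumen radius: {r:.1f} mm")   # 40*0.005*30/2 = 3.0 mm
```

The dependence on the (unknown) depth Z is what makes contraction troublesome: if the lumen contracts while Z is assumed fixed, the recovered radius, and hence any speed estimate derived from it, is biased.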
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse-correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive-timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
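The LN cascade itself is simple to state in code: convolve the input with a linear temporal filter, then pass the result through a static nonlinearity. The exponential filter and threshold-linear nonlinearity below are generic illustrations, not the parameter-free forms derived analytically in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear-nonlinear cascade: r(t) = F[(k * I)(t)]
dt = 0.001                                   # time step (s)
t = np.arange(0, 0.2, dt)
k = np.exp(-t / 0.02)
k /= k.sum() * dt                            # normalize filter to unit area

I = rng.standard_normal(2000)                # input current (arbitrary units)
drive = np.convolve(I, k, mode="full")[:I.size] * dt   # linear stage
rate = 50.0 * np.maximum(drive, 0.0)         # static nonlinearity (rectifier)
print("mean firing rate:", rate.mean())
```

In the paper the filter and nonlinearity are not fitted but derived from the spiking model's analytics; the code above only shows the structure of the cascade that those derived functions plug into.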
Comparative dynamics of avian communities across edges and interiors of North American ecoregions
Karanth, K.K.; Nichols, J.D.; Sauer, J.R.; Hines, J.E.
2006-01-01
Aim Based on a priori hypotheses, we developed predictions about how avian communities might differ at the edges vs. interiors of ecoregions. Specifically, we predicted lower species richness and greater local turnover and extinction probabilities for regional edges. We tested these predictions using North American Breeding Bird Survey (BBS) data across nine ecoregions over a 20-year time period. Location Data from 2238 BBS routes within nine ecoregions of the United States were used. Methods The estimation methods used accounted for species detection probabilities < 1. Parameter estimates for species richness, local turnover and extinction probabilities were obtained using the program COMDYN. We examined the difference in community-level parameters estimated from within exterior edges (the habitat interface between ecoregions), interior edges (the habitat interface between two bird conservation regions within the same ecoregion) and interior (habitat excluding interfaces). General linear models were constructed to examine sources of variation in community parameters for five ecoregions (containing all three habitat types) and all nine ecoregions (containing two habitat types). Results Analyses provided evidence that interior habitats and interior edges had on average higher bird species richness than exterior edges, providing some evidence of reduced species richness near habitat edges. Lower average extinction probabilities and turnover rates in interior habitats (five-region analysis) provided some support for our predictions about these quantities. However, analyses directed at all three response variables, i.e. species richness, local turnover, and local extinction probability, provided evidence of an interaction between habitat and region, indicating that the relationships did not hold in all regions. 
Main conclusions The overall predictions of lower species richness, higher local turnover and extinction probabilities in regional edge habitats, as opposed to interior habitats, were generally supported. However, these predicted tendencies did not hold in all regions.
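Richness estimation with detection probabilities below 1 relies on capture-recapture-style estimators; COMDYN is built on estimators of this family. A first-order jackknife sketch illustrates the idea (the toy detection history below is invented, not BBS data):

```python
# First-order jackknife estimator of species richness under imperfect
# detection: species seen on only one route ("singletons") signal how many
# species were likely missed entirely.
def jackknife_richness(detections_per_species, n_routes):
    """detections_per_species: number of routes each observed species was seen on."""
    s_obs = len(detections_per_species)
    f1 = sum(1 for d in detections_per_species if d == 1)   # singletons
    return s_obs + f1 * (n_routes - 1) / n_routes

est = jackknife_richness([1, 1, 2, 3, 5, 1, 4], n_routes=10)
print(f"estimated richness: {est:.1f}")   # 7 observed + 3*0.9 = 9.7
```

The estimate always exceeds the observed count when singletons are present, which is why raw species counts systematically understate richness when detection is imperfect.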
NASA Astrophysics Data System (ADS)
Netusil, Noelwah R.; Kincaid, Michael; Chang, Heejun
2014-05-01
This study uses the hedonic price method to investigate the effect of five water quality parameters on the sale price of single-family residential properties in two urbanized watersheds in the Portland, Oregon-Vancouver, Washington metropolitan area. Water quality parameters include E. coli or fecal coliform, which can affect human health, decrease water clarity and generate foul odors; pH, dissolved oxygen, and stream temperature, which can impact fish and wildlife populations; and total suspended solids, which can affect water clarity, aquatic life, and aesthetics. Properties within ¼ mile, ½ mile, one mile, or more than one mile from Johnson Creek are estimated to experience an increase in sale price of 13.71%, 7.05%, 8.18%, and 3.12%, respectively, from a one mg/L increase in dissolved oxygen levels during the dry season (May-October). Estimates for a 100 count per 100 mL increase in E. coli during the dry season are -2.81% for properties within ¼ mile of Johnson Creek, -0.86% (½ mile), -1.19% (one mile), and -0.71% (greater than one mile). Results for properties in Burnt Bridge Creek include a significantly positive effect for a one mg/L increase in dissolved oxygen levels during the dry season for properties within ½ mile (4.49%), one mile (2.95%), or greater than one mile from the creek (3.17%). Results for other water quality parameters in Burnt Bridge Creek are generally consistent with a priori expectations. Restoration efforts underway in both study areas might be cost-justified based on their estimated effect on property sale prices.
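The hedonic approach amounts to regressing log sale price on property characteristics plus water quality variables interacted with distance bands. The sketch below uses entirely synthetic data with assumed effect sizes; it only illustrates the regression structure, not the study's specification or results:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hedonic data: log price depends on square footage and on
# dissolved oxygen (DO), with a DO effect that shrinks with distance band.
# The 5%/3%/2%/1% per-mg/L effects are assumptions used to generate the data.
n = 2000
sqft = rng.uniform(800, 3500, n)
do_mgL = rng.uniform(4, 10, n)               # dissolved oxygen (mg/L)
band = rng.integers(0, 4, n)                 # 0: <1/4 mi ... 3: >1 mi
true_effect = np.array([0.05, 0.03, 0.02, 0.01])
log_price = (11.0 + 0.0004 * sqft + true_effect[band] * do_mgL
             + 0.05 * rng.standard_normal(n))

# Design matrix: intercept, sqft, and DO interacted with each distance band
X = np.column_stack([np.ones(n), sqft]
                    + [do_mgL * (band == b) for b in range(4)])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print("estimated DO effects by band:", beta[2:])
```

Because the dependent variable is in logs, each recovered coefficient is approximately the percentage change in sale price per one mg/L of dissolved oxygen within that distance band, mirroring how the paper reports its estimates.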
NASA Astrophysics Data System (ADS)
Heinze, Thomas; Möhring, Simon; Budler, Jasmin; Weigand, Maximilian; Kemna, Andreas
2017-04-01
Rainfall-triggered landslides are a latent danger in almost any part of the world. Due to climate change, heavy rainfall might occur more often, increasing the risk of landslides. With pore pressure as the mechanical trigger, knowledge of the water content distribution in the ground is essential for hazard analysis during monitoring of potentially dangerous rainfall events. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil-physical relationships between bulk electrical resistivity and water content. However, more dominant electrical contrasts due to lithological structures often overshadow these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change gradually in space. This applies in many scenarios, for example during infiltration of water without a clear saturation front. Sharp lithological layers with strongly divergent hydrological parameters, as often found in landslide-prone hillslopes, on the other hand, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach to improve water content estimation in landslide-prone hills by including a-priori information about lithological layers. Here the standard smoothness constraint is reduced along layer boundaries identified using seismic data or other additional sources. This approach significantly improves water content estimates, because in landslide-prone hills a layer of rather high hydraulic conductivity is often followed by a hydraulic barrier like clay-rich soil, causing higher pore pressures.
One saturated layer and one almost drained layer typically result also in a sharp contrast in electrical resistivity, assuming that surface conductivity of the soil does not change in similar order. Using synthetic data, we study the influence of uncertainties in the a-priori information on the inverted resistivity and estimated water content distribution. Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally-constrained inversion approach.
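Relaxing the smoothness constraint along a known interface can be sketched in one dimension: down-weight the first-difference regularization row that straddles the a-priori boundary, so the inversion may place a sharp jump there. The toy blurring forward operator below is not an ERT kernel; it only mimics the loss of resolution that regularized inversion must compensate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-layer "log-resistivity" model with a sharp interface at index 30
n, iface = 60, 30
m_true = np.where(np.arange(n) < iface, 2.0, 4.0)

# Toy smoothing forward operator (Gaussian blur), standing in for ERT physics
G = np.zeros((n, n))
for i in range(n):
    w = np.exp(-0.5 * ((np.arange(n) - i) / 3.0) ** 2)
    G[i] = w / w.sum()
d = G @ m_true + 0.01 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)               # first-difference smoothness
wts = np.ones(n - 1)
wts[iface - 1] = 0.01                        # relax smoothness at the interface
Dw = D * wts[:, None]

lam = 1.0
m_con = np.linalg.solve(G.T @ G + lam * Dw.T @ Dw, G.T @ d)   # constrained
m_std = np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ d)     # standard Occam
jump_con = m_con[iface] - m_con[iface - 1]
jump_std = m_std[iface] - m_std[iface - 1]
print(f"interface jump: constrained {jump_con:.2f}, standard {jump_std:.2f}")
```

With the uniform smoothness constraint the step is smeared over many cells, while the structurally constrained operator recovers a jump close to the true contrast of 2, which is the behavior the paper exploits for layered landslide-prone slopes.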
Stress distribution and topography of Tellus Regio, Venus
NASA Technical Reports Server (NTRS)
Williams, David R.; Greeley, Ronald
1989-01-01
The Tellus Regio area of Venus represents a subset of a narrow latitude band where Pioneer Venus Orbiter (PVO) altimetry data, line-of-sight (LOS) gravity data, and Venera 15/16 radar images have all been obtained with good resolution. Tellus Regio also has a wide variety of surface morphologic features, elevations ranging up to 2.5 km, and a relatively low LOS gravity anomaly. This area was therefore chosen in order to examine the theoretical stress distributions resulting from various models of compensation of the observed topography. These surface stress distributions are then compared with the surface morphology revealed in the Venera 15/16 radar images. Conclusions drawn from these comparisons will enable constraints to be put on various tectonic parameters relevant to Tellus Regio. The stress distribution is calculated as a function of the topography, the equipotential anomaly, and the assumed model parameters. The topography data is obtained from the PVO altimetry. The equipotential anomaly is estimated from the PVO LOS gravity data. The PVO LOS gravity represents the spacecraft accelerations due to mass anomalies within the planet. These accelerations are measured at various altitudes and angles to the local vertical and therefore do not lend themselves to a straightforward conversion. A minimum variance estimator of the LOS gravity data is calculated, taking into account the various spacecraft altitudes and LOS angles and using the measured PVO topography as an a priori constraint. This results in an estimated equivalent surface mass distribution, from which the equipotential anomaly is determined.
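A minimum-variance estimator with an a priori constraint has a standard Bayesian least-squares form. The sketch below uses synthetic matrices as stand-ins for the LOS observation geometry and the topography-based prior; it shows the estimator structure, not the PVO data reduction:

```python
import numpy as np

rng = np.random.default_rng(5)

# Estimate x (surface mass distribution) from y = A x + noise, combining the
# data with an a priori value x0 (topography-based) of covariance P:
#   x_hat = (A^T R^-1 A + P^-1)^-1 (A^T R^-1 y + P^-1 x0)
n_obs, n_par = 50, 20
A = rng.standard_normal((n_obs, n_par))      # LOS geometry (illustrative)
x_true = rng.standard_normal(n_par)
R = 0.2**2 * np.eye(n_obs)                   # measurement noise covariance
y = A @ x_true + 0.2 * rng.standard_normal(n_obs)

x0 = x_true + 0.5 * rng.standard_normal(n_par)   # a priori constraint
P = 0.5**2 * np.eye(n_par)                   # prior covariance

Ri, Pi = np.linalg.inv(R), np.linalg.inv(P)
cov = np.linalg.inv(A.T @ Ri @ A + Pi)       # posterior covariance
x_hat = cov @ (A.T @ Ri @ y + Pi @ x0)
print("posterior rms error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```

The prior term regularizes the otherwise awkward geometry (varying altitudes and LOS angles), and the combined estimate is guaranteed to have variance no larger than either the data-only or prior-only solution.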
Ruder, Avima M; Hein, Misty J; Hopf, Nancy B; Waters, Martha A
2014-03-01
The objective of this analysis was to evaluate mortality among a cohort of 24,865 capacitor-manufacturing workers exposed to polychlorinated biphenyls (PCBs) at plants in Indiana, Massachusetts, and New York and followed for mortality through 2008. Cumulative PCB exposure was estimated using plant-specific job-exposure matrices. External comparisons to US and state-specific populations used standardized mortality ratios, adjusted for gender, race, age and calendar year. Among long-term workers employed 3 months or longer, within-cohort comparisons used standardized rate ratios and multivariable Poisson regression modeling. Through 2008, more than one million person-years at risk and 8749 deaths were accrued. Among long-term employees, all-cause and all-cancer mortality were not elevated; of the a priori outcomes assessed only melanoma mortality was elevated. Mortality was elevated for some outcomes of a priori interest among subgroups of long-term workers: all cancer, intestinal cancer and amyotrophic lateral sclerosis (women); melanoma (men); melanoma and brain and nervous system cancer (Indiana plant); and melanoma and multiple myeloma (New York plant). Standardized rates of stomach and uterine cancer and multiple myeloma mortality increased with estimated cumulative PCB exposure. Poisson regression modeling showed significant associations with estimated cumulative PCB exposure for prostate and stomach cancer mortality. For other outcomes of a priori interest--rectal, liver, ovarian, breast, and thyroid cancer, non-Hodgkin lymphoma, Alzheimer disease, and Parkinson disease--neither elevated mortality nor positive associations with PCB exposure were observed. Associations between estimated cumulative PCB exposure and stomach, uterine, and prostate cancer and myeloma mortality confirmed our previous positive findings. Published by Elsevier GmbH.
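The standardized mortality ratio used for the external comparisons is observed deaths divided by the deaths expected from stratum-specific reference rates weighted by cohort person-years. The strata, rates, and counts below are invented for illustration, not cohort data:

```python
# Minimal SMR sketch: expected deaths = sum over strata of
# (person-years * reference rate), then SMR = observed / expected.
person_years = {"40-49": 120000.0, "50-59": 90000.0, "60-69": 40000.0}
ref_rate_per_100k = {"40-49": 8.0, "50-59": 25.0, "60-69": 70.0}
observed_deaths = 75

expected = sum(person_years[s] * ref_rate_per_100k[s] / 1e5
               for s in person_years)
smr = observed_deaths / expected
print(f"expected = {expected:.1f}, SMR = {smr:.2f}")
```

In the actual analysis the strata additionally cover gender, race, and calendar year; an SMR above 1 indicates more deaths than the reference population would predict for the cohort's person-time distribution.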
Garrard, Georgia E; McCarthy, Michael A; Vesk, Peter A; Radford, James Q; Bennett, Andrew F
2012-01-01
1. Informative Bayesian priors can improve the precision of estimates in ecological studies or estimate parameters for which little or no information is available. While Bayesian analyses are becoming more popular in ecology, the use of strongly informative priors remains rare, perhaps because examples of informative priors are not readily available in the published literature. 2. Dispersal distance is an important ecological parameter, but is difficult to measure and estimates are scarce. General models that provide informative prior estimates of dispersal distances will therefore be valuable. 3. Using a world-wide data set on birds, we develop a predictive model of median natal dispersal distance that includes body mass, wingspan, sex and feeding guild. This model predicts median dispersal distance well when using the fitted data and an independent test data set, explaining up to 53% of the variation. 4. Using this model, we predict a priori estimates of median dispersal distance for 57 woodland-dependent bird species in northern Victoria, Australia. These estimates are then used to investigate the relationship between dispersal ability and vulnerability to landscape-scale changes in habitat cover and fragmentation. 5. We find evidence that woodland bird species with poor predicted dispersal ability are more vulnerable to habitat fragmentation than those species with longer predicted dispersal distances, thus improving the understanding of this important phenomenon. 6. The value of constructing informative priors from existing information is also demonstrated. When used as informative priors for four example species, predicted dispersal distances reduced the 95% credible intervals of posterior estimates of dispersal distance by 8-19%. 
Further, should we have wished to collect information on avian dispersal distances and relate it to species' responses to habitat loss and fragmentation, data from 221 individuals across 57 species would have been required to obtain estimates with the same precision as those provided by the general model. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.
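The way an informative prior narrows a posterior credible interval can be shown with a conjugate normal-normal model for log median dispersal distance. The prior, observations, and error levels below are illustrative assumptions, not values from the bird data set:

```python
import numpy as np

# Conjugate normal update: precision of the posterior is the sum of prior
# precision and data precision, so an informative prior always tightens the CI.
prior_mu, prior_sd = np.log(1.5), 0.4        # from a predictive model (assumed)
obs = np.log(np.array([1.2, 2.1, 1.7]))      # a few field estimates (assumed)
obs_sd = 0.5

def posterior(mu0, sd0, x, sd):
    prec = 1 / sd0**2 + x.size / sd**2
    mu = (mu0 / sd0**2 + x.sum() / sd**2) / prec
    return mu, np.sqrt(1 / prec)

mu_inf, sd_inf = posterior(prior_mu, prior_sd, obs, obs_sd)
mu_vag, sd_vag = posterior(0.0, 10.0, obs, obs_sd)   # near-flat prior
width_inf = 2 * 1.96 * sd_inf
width_vag = 2 * 1.96 * sd_vag
print(f"95% CI width: informative {width_inf:.2f} vs vague {width_vag:.2f}")
```

The same arithmetic underlies the paper's point that, without the general model as a prior, many more individuals would have to be measured to reach the same posterior precision.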
NASA Astrophysics Data System (ADS)
Zhou, Shuai; Huang, Danian
2015-11-01
We have developed a new method for the interpretation of gravity tensor data based on the generalized Tilt-depth method. Cooper (2011, 2012) extended the magnetic Tilt-depth method to gravity data. We take the gradient-ratio method of Cooper (2011, 2012) and modify it so that the source type does not need to be specified a priori. We develop the new method by generalizing the Tilt-depth method for depth estimation for different types of source bodies. The new technique uses only the three vertical tensor components of the full gravity tensor data, observed or calculated at different height planes, to estimate the depth of the buried bodies without a priori specification of their structural index. For severely noise-corrupted data, our method utilizes data from different upward-continuation heights, which can effectively reduce the influence of noise. Theoretical simulations of the gravity source model with and without noise illustrate the ability of the method to provide source depth information. Additionally, the simulations demonstrate that the new method is simple, computationally fast and accurate. Finally, we apply the method to the gravity data acquired over the Humble Salt Dome in the USA as an example. The results show a good correspondence to previous drilling and seismic interpretation results.
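The classical tilt angle underlying Tilt-depth methods is the arctangent of the vertical derivative over the total horizontal derivative of the anomaly. The sketch below computes it for a synthetic point-mass gravity field; it shows only this generic quantity, not the paper's generalization to full tensor data and arbitrary structural index:

```python
import numpy as np

# Vertical gravity of a unit point mass at depth z0 (constants dropped), on a
# 201 x 201 grid; the geometry and depth are illustrative.
x = np.linspace(-500.0, 500.0, 201)
X, Y = np.meshgrid(x, x)
z0 = 100.0

def gz(X, Y, z):
    return z / (X**2 + Y**2 + z**2) ** 1.5

field = gz(X, Y, z0)
dx = x[1] - x[0]
gy, gx = np.gradient(field, dx, dx)          # horizontal derivatives
gzv = (gz(X, Y, z0 + 1.0) - gz(X, Y, z0 - 1.0)) / 2.0   # vertical derivative

# Tilt = arctan(vertical derivative / total horizontal derivative)
tilt = np.arctan2(gzv, np.hypot(gx, gy))
print("tilt directly above source (deg):", np.degrees(tilt[100, 100]))
```

Directly above the source the horizontal gradient vanishes, so |tilt| reaches 90 degrees and decays outward; Tilt-depth methods convert the horizontal distance between characteristic tilt contours into a source-depth estimate.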
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors such as their multiplicity, measurement precision and distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
Performance evaluation of an importance sampling technique in a Jackson network
NASA Astrophysics Data System (ADS)
brahim Mahdipour, E.; Masoud Rahmani, Amir; Setayeshi, Saeed
2014-03-01
Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. This article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of customers missing their deadlines for different loads and deadlines. We finally show that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
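The change-of-measure idea behind importance sampling can be shown on the simplest possible rare event: estimating P(Z > 4) for a standard normal by sampling from a tilted proposal and reweighting. This illustrates the mechanism discussed for queueing networks, not the Jackson-network estimator itself:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)

# Estimate p = P(Z > 4), Z ~ N(0,1), by sampling from the tilted proposal
# N(4,1) and weighting by the likelihood ratio
#   phi(z; 0,1) / phi(z; 4,1) = exp(-mu*z + mu^2/2) with mu = 4.
n = 100_000
mu = 4.0
z = rng.normal(mu, 1.0, n)
w = np.exp(-mu * z + 0.5 * mu**2)            # likelihood ratio weights
est = np.mean((z > 4.0) * w)

p_exact = 0.5 * (1 - erf(4.0 / sqrt(2)))     # exact tail probability
print(f"IS estimate {est:.3e} vs exact {p_exact:.3e}")
```

Naive Monte Carlo with the same budget would see only a handful of exceedances (p is about 3e-5), whereas the tilted sampler hits the rare region on roughly half its draws; in networks, the difficulty the abstract points to is that a single a priori fixed tilt like this can be badly suboptimal.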
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms.
In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
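The core construction, mapping model errors through their marginal CDF to standard-normal scores and measuring dependence there, can be sketched on a synthetic autocorrelated error series. The AR(1) errors and rank-based (empirical) marginal below are illustrative, not the paper's calibration data:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)

# Synthetic AR(1) "model errors" with lag-1 coefficient 0.7 (assumed)
e = np.zeros(500)
for t in range(1, 500):
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()

# Probability integral transform via ranks (empirical marginal CDF), then map
# to standard-normal scores: the latent space of the Gaussian copula.
u = (np.argsort(np.argsort(e)) + 0.5) / e.size
nd = NormalDist()
z = np.array([nd.inv_cdf(ui) for ui in u])

# Copula correlation between consecutive errors captures the autocorrelation
# regardless of the (possibly skewed, heteroscedastic) marginal.
rho = np.corrcoef(z[:-1], z[1:])[0, 1]
print(f"lag-1 copula correlation: {rho:.2f}")
```

Because the marginal is handled separately from the dependence structure, the copula can represent autocorrelated, non-Gaussian errors without a Box-Cox-style transform, which is the property the abstract highlights.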
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. 
The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
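The late-time behavior controlled by the time-nonlocal parameters can be illustrated with the tempered power-law memory kernel phi(t) proportional to t^(-1-alpha) exp(-lambda*t): the time-scale index alpha sets the early power-law slope and the truncation parameter lambda sets where the exponential cutoff takes over. The values of alpha and lambda below are illustrative, not estimates for any aquifer:

```python
import numpy as np

# Tempered power-law kernel: power-law regime for t << 1/lam, exponential
# cutoff for t >> 1/lam (here 1/lam = 1000 time units).
alpha, lam = 0.6, 1e-3

def phi(t):
    return t ** (-1 - alpha) * np.exp(-lam * t)

# Log-log slopes per decade in the two regimes
slope_early = np.log(phi(10) / phi(1)) / np.log(10)       # ~ -(1 + alpha)
slope_late = np.log(phi(1e5) / phi(1e4)) / np.log(10)     # cutoff-dominated
print(f"early log-log slope {slope_early:.2f}, late slope {slope_late:.1f}")
```

This is the signature the a priori parameter estimates must reproduce: breakthrough-curve tails decay as a power law with exponent set by alpha until roughly 1/lambda, after which the tempering truncates the tail.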
Lin, Y; Ghijsen, M T; Gao, H; Liu, N; Nalcioglu, O; Gulsen, G
2014-01-01
Fluorescence tomography (FT) is a promising molecular imaging technique that can spatially resolve both fluorophore concentration and lifetime parameters. However, recovered fluorophore parameters highly depend on the size and depth of the object due to the ill-posedness of the FT inverse problem. Structural a priori information from another high spatial resolution imaging modality has been demonstrated to significantly improve FT reconstruction accuracy. In this study, we have constructed a combined magnetic resonance imaging (MRI) and FT system for small animal imaging. A photo-multiplier tube (PMT) is used as the detector to acquire frequency domain FT measurements. This is the first MR-compatible time-resolved FT system that can reconstruct both fluorescence concentration and lifetime maps simultaneously. The performance of the hybrid system is evaluated with phantom studies. Two different fluorophores, Indocyanine Green (ICG) and 3-3′ Diethylthiatricarbocyanine Iodide (DTTCI), which have similar excitation and emission spectra but different lifetimes, are utilized. The fluorescence concentration and lifetime maps are both reconstructed with and without the structural a priori information obtained from MRI for comparison. We show that the hybrid system can accurately recover both fluorescence intensity and lifetime within 10% error for two 4.2 mm-diameter cylindrical objects embedded in a 38 mm-diameter cylindrical phantom when MRI structural a priori information is utilized. PMID:21753235
IGS preparations for the next reprocessing and ITRF
NASA Astrophysics Data System (ADS)
Griffiths, J.; Rebischung, P.; Garayt, B.; Ray, J.
2012-04-01
The International GNSS Service (IGS) is preparing for a second reanalysis of the full history of data collected by the global network using the latest models and methodologies. This effort is designed to obtain improved, consistent satellite orbits, station and satellite clocks, Earth orientation parameters (EOPs) and terrestrial frame products using the current IGS framework, IGS08/igs08.atx. It follows a successful first reprocessing campaign, which provided the IGS input to ITRF2008. Likewise, this second campaign (repro2) should provide the IGS contribution to the next ITRF. We will discuss the analysis standards adopted for repro2, including treatment of and mitigation against non-tidal loading effects, and improvements expected with respect to the first reprocessing campaign. The International Earth Rotation and Reference Systems Service (IERS) Conventions of 2010 are expected to be implemented. However, no improvements to the diurnal and semidiurnal EOP tide models will be made, so the associated errors will remain. Adoption of new orbital force models and consistent handling of satellite attitude changes are expected to improve IGS clock and orbit products. A priori Earth-reflected radiation pressure models should nearly eliminate the ~2.5 cm orbit radial bias previously observed using laser ranging methods. A priori modeling of radiation forces exerted in signal transmission should also improve the orbit products, and use of consistent satellite attitude models should help with satellite clock estimation during Earth and Moon eclipses. Improvements of the terrestrial frame products are expected from, for example, the inclusion of second-order ionospheric corrections and the a priori modeling of Earth-reflected radiation pressure. Because of remaining unmodeled orbital forces, however, systematic errors will likely continue to affect the origin of the repro2 frames and prevent a contribution of GNSS to the origin of the next ITRF. 
On the other hand, the planned inclusion of satellite phase center offsets in the long-term stacking of the repro2 frames could help in defining the scale rate of the next ITRF.
Toolbox of countermeasures for rural two-lane curves.
DOT National Transportation Integrated Search
2013-10-01
The Federal Highway Administration (FHWA) estimates that 58 percent of roadway fatalities are lane departures, while 40 percent of fatalities are single-vehicle run-off-road (SVROR) crashes. Addressing lane-departure crashes is therefore a priori...
Error analysis of finite element method for Poisson–Nernst–Planck equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin
A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
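The convergence rates quoted above can be checked numerically on a simpler model problem. The sketch below is a toy that assumes a 1D Poisson equation rather than the full Poisson-Nernst-Planck system; it verifies second-order L2 convergence of linear elements by halving the mesh size:

```python
import numpy as np

def solve_poisson_fem(n):
    """Linear-element FEM for -u'' = pi^2 sin(pi x) on [0,1], u(0)=u(1)=0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Tridiagonal stiffness matrix over the interior nodes.
    A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    f = np.pi**2 * np.sin(np.pi * x[1:-1]) * h   # lumped load vector
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

def l2_error(n):
    x, u = solve_poisson_fem(n)
    xf = np.linspace(0.0, 1.0, 4001)
    uf = np.interp(xf, x, u)                     # piecewise-linear FEM solution
    return np.sqrt(np.mean((uf - np.sin(np.pi * xf)) ** 2))

e1, e2 = l2_error(16), l2_error(32)
print(e1, e2, e1 / e2)   # ratio near 4: O(h^2) convergence in the L2 norm
```

Halving h divides the L2 error by roughly four, consistent with the optimal rate for linear elements stated in the abstract.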
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yin, Zeyang; Wei, Xing; Yuan, Jianping
2018-03-01
In this paper, a robust inertia-free attitude takeover control scheme with guaranteed prescribed performance is investigated for postcapture combined spacecraft with consideration of unmeasurable states, unknown inertial properties and external disturbance torque. Firstly, to accurately estimate the unavailable angular velocity of the combination, a novel finite-time-convergent tracking differentiator is developed with a computationally tractable structure that is free from the unknown nonlinear dynamics of the combined spacecraft. Then, a robust inertia-free prescribed performance control scheme is proposed, wherein the transient and steady-state performance of the combined spacecraft is first quantitatively studied by stabilizing the filtered attitude tracking errors. Compared with existing works, the prominent advantage is that no parameter identification and no neural or fuzzy nonlinear approximation are needed, which dramatically decreases the complexity of the robust controller design. Moreover, the prescribed performance of the combined spacecraft is guaranteed a priori without resorting to repeated regulation of the controller parameters. Finally, four illustrative examples are employed to validate the effectiveness of the proposed control scheme and tracking differentiator. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Analysis of thin fractures with GPR: from theory to practice
NASA Astrophysics Data System (ADS)
Arosio, Diego; Zanzi, Luigi; Longoni, Laura; Papini, Monica
2017-04-01
Whenever we perform a GPR survey to investigate a rocky medium, whether the ultimate purpose of the survey is to study the stability of a rock slope or to determine the soundness of a quarried rock block, we mainly want to detect any fracture within the investigated medium and, possibly, to estimate the parameters of the fractures, namely thickness and filling material. In most practical cases, rock fracture thicknesses are very small compared to the wavelength of the electromagnetic radiation generated by GPR systems. In such cases, fractures are to be considered as thin beds, i.e., two interfaces whose distance is smaller than the GPR resolving capability, and the reflected signal is the sum of the electromagnetic reverberation within the bed. Accordingly, fracture parameters are encoded in the thin bed complex response, and in this work we propose a methodology based on deterministic deconvolution to process amplitude and phase information in the frequency domain to estimate fracture parameters. We first present some theoretical aspects related to the thin bed response and a sensitivity analysis concerning fracture thickness and filling. Secondly, we deal with GPR datasets collected both during laboratory experiments and in the facilities of quarrying activities. In the lab tests, fractures were simulated by placing materials with known electromagnetic parameters and controlled thickness between two small marble blocks, whereas field GPR surveys were performed on bigger quarried ornamental stone blocks before they were submitted to the cutting process. We show that, with basic pre-processing and the choice of a proper deconvolving signal, results are encouraging, although an ambiguity between thickness and filling estimates exists when no a priori information is available. 
Results can be improved by performing CMP radar surveys, which are able to provide additional information (i.e., variation of the thin bed response versus offset) at the expense of greater acquisition effort and a more complex pre-processing sequence.
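The deterministic deconvolution step described above can be illustrated with a minimal frequency-domain sketch. All signal parameters below (a 1 GHz Ricker wavelet, the spike positions, the water-level stabilizer) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ricker(f0, dt, n):
    """Zero-phase Ricker wavelet centred in an n-sample window."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt, n = 1e-10, 512                    # 0.1 ns sampling, GPR time scale
w = ricker(1e9, dt, n)                # assumed 1 GHz source wavelet
r = np.zeros(n)                       # thin bed: two close opposite-polarity spikes
r[200], r[206] = 1.0, -0.8

W = np.fft.rfft(w)
trace = np.fft.irfft(np.fft.rfft(r) * W, n)          # synthetic thin-bed trace

# Deterministic deconvolution with a water-level stabiliser on the division.
eps = 1e-2 * np.max(np.abs(W)) ** 2
R_est = np.fft.rfft(trace) * np.conj(W) / (np.abs(W) ** 2 + eps)
r_est = np.fft.irfft(R_est, n)

print(np.argmax(r_est), np.argmin(r_est))   # spikes recovered near 200 and 206
```

The water level trades resolution for stability: a larger eps suppresses noise amplification outside the wavelet band but further smears the two interfaces of the thin bed.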
Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone
NASA Astrophysics Data System (ADS)
Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.
2017-12-01
The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high-resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high-resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show that (1) estimated transition probabilities agree with simulated values and (2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
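For context, the conventional parameterization that the paper seeks to avoid (building the velocity-class transition matrix from a Lagrangian velocity series) can be sketched as follows, with a synthetic correlated lognormal series standing in for high-resolution simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Lagrangian velocity series: correlated lognormal (stand-in data).
n, rho = 20000, 0.8
z = np.empty(n)
z[0] = rng.normal()
for i in range(1, n):
    z[i] = rho * z[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
v = np.exp(z)

# Discretise into equiprobable velocity classes and count class transitions.
n_class = 5
edges = np.quantile(v, np.linspace(0, 1, n_class + 1))
cls = np.clip(np.searchsorted(edges, v, side="right") - 1, 0, n_class - 1)
T = np.zeros((n_class, n_class))
for a, b in zip(cls[:-1], cls[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)    # row-stochastic transition matrix

print(np.round(T, 2))  # diagonal-heavy: successive velocities are correlated
```

A diagonal-heavy matrix encodes the velocity correlation that the SMM then uses to step particles; the paper's contribution is recovering this correlation from two BTCs when such a velocity series is unavailable.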
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or if an additional scale-independent spectral smoothness prior can be adopted.
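The flavor of approach (i), a spectrum measurement followed by Wiener filtering with the assumed spectrum, can be illustrated with a minimal 1D sketch; the power-law spectrum, noise level, and crude periodogram-based spectrum estimate below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_n = 4096, 1.0

# Draw a Gaussian signal with a known power-law spectrum P(k) ~ (1+k)^-3.
k = np.fft.rfftfreq(n, d=1.0) * n
P = 1000.0 / (1.0 + k) ** 3
s_k = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(P * n / 2)
s = np.fft.irfft(s_k, n)
d = s + rng.normal(0, sigma_n, n)        # data = signal + white noise

# Step 1: crude spectrum estimate from the data (periodogram minus noise power).
P_hat = np.maximum(np.abs(np.fft.rfft(d)) ** 2 / n - sigma_n**2, 1e-12)
# Step 2: Wiener filter with the assumed spectrum: W = P / (P + P_noise).
W = P_hat / (P_hat + sigma_n**2)
s_rec = np.fft.irfft(W * np.fft.rfft(d), n)

print(np.std(d - s), np.std(s_rec - s))  # reconstruction error below noise level
```

Because the spectrum is estimated from the same noisy data, low-variance modes get filter weights near zero, which is the kind of mode suppression behind the perception threshold discussed in the abstract.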
A Method for Improving Hotspot Directional Signatures in BRDF Models Used for MODIS
NASA Technical Reports Server (NTRS)
Jiao, Ziti; Schaaf, Crystal B.; Dong, Yadong; Roman, Miguel; Hill, Michael J.; Chen, Jing M.; Wang, Zhuosen; Zhang, Hu; Saenz, Edward; Poudyal, Rajesh;
2016-01-01
The semi-empirical, kernel-driven, linear RossThick-LiSparseReciprocal (RTLSR) Bidirectional Reflectance Distribution Function (BRDF) model is used to generate the routine MODIS BRDF/Albedo product due to its global applicability and underlying physics. A challenge for this model in capturing surface reflectance anisotropy effects is its underestimation of the directional reflectance signatures near the Sun illumination direction, also known as the hotspot effect. In this study, a method has been developed to improve the ability of the RTLSR model to simulate the magnitude and width of the hotspot effect. The method corrects the volumetric scattering component of the RTLSR model using an exponential approximation of a physical hotspot kernel, which recreates the hotspot magnitude and width using two free parameters (C(sub 1) and C(sub 2), respectively). The approach allows one to reconstruct, with reasonable accuracy, the hotspot effect by adjusting or using prior values of these two hotspot variables. Our results demonstrate that: (1) the method significantly improves the capture of the hotspot effect when the inverted hotspot parameters are used; (2) its reciprocal nature allows the method to be more adaptive in simulating the hotspot height and width with high accuracy, especially in cases where hotspot signatures are available; and (3) while the new approach is consistent with the heritage RTLSR model inversion used to estimate intrinsic narrowband and broadband albedos, it presents some differences for vegetation clumping index (CI) retrievals. With the hotspot-related model parameters determined a priori, this method offers improved performance for various ecological remote sensing applications, including the estimation of canopy structure parameters.
Dynamic dual-tracer PET reconstruction.
Gao, Fei; Liu, Huafeng; Jian, Yiqiang; Shi, Pengcheng
2009-01-01
Although it has important medical implications, simultaneous dual-tracer positron emission tomography reconstruction remains a challenging problem, primarily because the photon measurements from the two tracers overlap. In this paper, we propose a simultaneous dynamic dual-tracer reconstruction of tissue activity maps based on guidance from tracer kinetics. The dual-tracer reconstruction problem is formulated in a state-space representation, where parallel compartment models serve as the continuous-time system equation describing the tracer kinetic processes of the two tracers, and the imaging data are expressed as discrete sampling of the system states in the measurement equation. The image reconstruction problem thereby becomes a state estimation problem in a continuous-discrete hybrid paradigm, and H-infinity filtering is adopted as the estimation strategy. As H-infinity filtering makes no assumptions about the system and measurement statistics, robust reconstruction results can be obtained for the dual-tracer PET imaging system, where the statistical properties of the measurement data and system uncertainty are not available a priori, even when there are disturbances in the kinetic parameters. Experimental results on digital phantoms, Monte Carlo simulations, and physical phantoms demonstrate the superior performance of the proposed method.
Refined discrete and empirical horizontal gradients in VLBI analysis
NASA Astrophysics Data System (ADS)
Landskron, Daniel; Böhm, Johannes
2018-02-01
Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, the gradients can also be determined beforehand from data sources other than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values, referred to as GRAD, for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489-20502, 1997. https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017. https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, with which they are fully consistent. From VLBI analyses with the Vienna VLBI and Satellite Software (VieVS), it becomes evident that baseline length repeatabilities (BLRs) improve on average by 5% when using the a priori gradients GRAD instead of estimating the gradients. The reason for this improvement is that gradient estimation yields poor results for VLBI sessions with a small number of observations, while the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. 
Although they describe only the systematic component of azimuthal asymmetry and no short-term variations at all, even these empirical a priori gradients slightly improve the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are more important for VLBI analysis than previously assumed, as both the discrete model GRAD and the empirical model GPT3 are able to refine and improve the results.
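The standard gradient model by Chen and Herring (1997) referenced above adds an azimuth-dependent term to the slant delay via a dedicated gradient mapping function. A minimal sketch, using the commonly quoted C constants and purely illustrative gradient values (assumptions, not numbers from the paper):

```python
import numpy as np

def gradient_delay(el, az, g_n, g_e, c=0.0032):
    """Chen & Herring (1997) gradient contribution to the slant delay [m].
    el, az in radians; g_n, g_e are north/east gradients [m]; c = 0.0032
    (hydrostatic) or 0.0007 (wet) are the commonly quoted constants."""
    m_g = 1.0 / (np.sin(el) * np.tan(el) + c)
    return m_g * (g_n * np.cos(az) + g_e * np.sin(az))

# Example: an assumed 1 mm north gradient at 5 deg elevation.
el = np.radians(5.0)
v_north = gradient_delay(el, np.radians(0.0), 1e-3, 0.0)
v_east = gradient_delay(el, np.radians(90.0), 1e-3, 0.0)
print(v_north, v_east)   # roughly 0.09 m toward north, ~0 toward east
```

The steep growth of the gradient mapping function at low elevation is why the abstract stresses that gradients matter most for low-elevation observations.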
NASA Technical Reports Server (NTRS)
Wolfe, R. H., Jr.; Juday, R. D.
1982-01-01
Interimage matching is the process of determining the geometric transformation required to conform spatially one image to another. In principle, the parameters of that transformation are varied until some measure of the difference between the two images is minimized or some measure of sameness (e.g., cross-correlation) is maximized. The number of such parameters to vary is fairly large (six for merely an affine transformation), and it is customary either to attempt an a priori transformation reducing the complexity of the residual transformation or to subdivide the image into match zones (control points or patches) that are small enough that a simple transformation (e.g., pure translation) is applicable, yet large enough to facilitate matching. In the latter case, a complex mapping function is fit to the results (e.g., translation offsets) in all the patches. The methods reviewed have all chosen one or both of the above options, ranging from a priori along-line correction for line-dependent effects (the high-frequency correction) to a full sensor-to-geobase transformation with subsequent subdivision into a grid of match points.
Carrillo, José Antonio; Colombi, Annachiara; Scianna, Marco
2018-05-14
The description of the cell spatial pattern and characteristic distances is fundamental in a wide range of physio-pathological biological phenomena, from morphogenesis to cancer growth. Discrete particle models are widely used in this field, since they are focused on the cell level of abstraction and are able to preserve the identity of single individuals while reproducing their behavior. In particular, a fundamental role in determining the usefulness and the realism of a particle mathematical approach is played by the choice of the intercellular pairwise interaction kernel and by the estimate of its parameters. The aim of the paper is to demonstrate how the concept of H-stability, deriving from statistical mechanics, can have important implications in this respect. For any given interaction kernel, it in fact allows one to predict a priori the regions of the free parameter space that result in stable configurations of the system characterized by a finite and strictly positive minimal interparticle distance, which is fundamental when dealing with biological phenomena. The proposed analytical arguments are indeed able to restrict the range of possible variations of selected model coefficients, whose exact estimate however requires further investigations (e.g., fitting with empirical data), as illustrated in this paper by a series of representative simulations dealing with cell colony reorganization, sorting phenomena and zebrafish embryonic development. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
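The practical meaning of H-stability can be illustrated with a toy relaxation of a particle system under a Morse-like kernel; the kernel, its parameter values, and the quoted 2D stability condition below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Morse-like kernel U(r) = C_r exp(-r/l_r) - C_a exp(-r/l_a). In 2D such a
# system is commonly stated to be H-stable when (C_r/C_a)(l_r/l_a)^2 > 1
# (short-range repulsion wins), yielding a strictly positive minimal spacing.
C_r, l_r, C_a, l_a = 10.0, 0.5, 1.0, 1.0          # (10)(0.25) = 2.5 > 1

def relax(x, steps=3000, dt=0.002):
    """Overdamped gradient descent on the total pairwise interaction energy."""
    for _ in range(steps):
        d = x[:, None, :] - x[None, :, :]          # pairwise displacements
        r = np.linalg.norm(d, axis=-1)
        np.fill_diagonal(r, np.inf)
        # dU/dr for each pair, turned into a force along the pair direction.
        dU = -C_r / l_r * np.exp(-r / l_r) + C_a / l_a * np.exp(-r / l_a)
        f = -(dU / r)[:, :, None] * d
        x = x + dt * f.sum(axis=1)
    return x

x = relax(rng.uniform(0.0, 2.0, size=(30, 2)))
dist = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)
print(dist.min())    # bounded away from zero for H-stable parameters
```

Repeating the experiment with parameters in the catastrophic regime would instead let the minimal spacing collapse toward zero as particles pile up, which is exactly the behavior the a priori H-stability analysis rules out.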
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D
2016-05-01
An efficient inverse problem approach for parameter estimation, state and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is shown to handle noisy datasets and to provide computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from S-system power laws. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for ground-truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
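The central trade-off behind those optimal resolutions, random-error propagation scaling like 1/dt while truncation error scales like dt^2, can be reproduced on a toy signal; the sine velocity, noise level, and dt values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Velocity u(t) = sin(2*pi*f*t) with additive measurement noise; the time
# derivative is estimated by central differences at several sampling steps.
f, sigma = 2.0, 0.01

def accel_rms_error(dt, n=2000):
    t = np.arange(n) * dt
    u = np.sin(2 * np.pi * f * t) + rng.normal(0, sigma, n)
    a_est = (u[2:] - u[:-2]) / (2 * dt)               # central difference
    a_true = 2 * np.pi * f * np.cos(2 * np.pi * f * t[1:-1])
    return np.sqrt(np.mean((a_est - a_true) ** 2))

dts = [0.002, 0.008, 0.032, 0.128]
errs = [accel_rms_error(dt) for dt in dts]
for dt, e in zip(dts, errs):
    print(dt, e)
# Error first falls (noise ~ sigma/dt dominates) and then rises again
# (truncation ~ dt^2): an interior optimum, as the error model predicts.
```

The same reasoning applied to the full material acceleration estimator is what yields the a priori sampling recommendations mentioned in the abstract.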
Bearings Only Air-to-Air Ranging
1988-07-25
directly in front of the observer when first detected, more time will be needed for a good estimate. A sound approach then is for the observer, having...altitude angle to provide an estimate of the z component. Moving targets commonly require some 60 seconds for good estimates of target location and...fixed target case, where a good strategy for the observer can be determined a priori, highly effective maneuvers for the observer in the case of a moving
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron
2017-05-01
This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
NASA Astrophysics Data System (ADS)
Penna, Pedro A. A.; Mascarenhas, Nelson D. A.
2018-02-01
The development of new methods to denoise images still attracts researchers, who seek to combat the noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms have the goal of removing additive white Gaussian noise (AWGN). However, it is not the only type of noise which interferes in the analysis and interpretation of images. Therefore, it is extremely important to expand the filters' capacity to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper aims to develop two approaches using the non-local means (NLM) algorithm, originally developed for AWGN, extending its capacity to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution without transforming the data to the logarithm domain, as in the homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in NLM. The second method uses a priori NLM denoising with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that are then used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.
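The baseline NLM algorithm that both approaches build on can be sketched on a 1D signal with Gaussian patch weights; the stochastic-distance and homomorphic extensions for speckle are not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def nlm_1d(y, half_patch=3, h=0.3):
    """Baseline non-local means on a 1D signal with Gaussian patch weights."""
    n = len(y)
    pad = np.pad(y, half_patch, mode="reflect")
    # Patch matrix: row i holds the patch centred on sample i.
    P = np.stack([pad[i:i + 2 * half_patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((P - P[i]) ** 2, axis=1)       # patchwise distances
        w = np.exp(-d2 / h**2)
        out[i] = np.sum(w * y) / np.sum(w)          # weighted mean of samples
    return out

x = np.linspace(0.0, 1.0, 400)
clean = (x > 0.3).astype(float) + 0.5 * (x > 0.7)   # piecewise-constant signal
noisy = clean + rng.normal(0, 0.2, x.size)
denoised = nlm_1d(noisy)
print(np.std(noisy - clean), np.std(denoised - clean))  # error clearly reduced
```

The squared-difference patch distance here is precisely the ingredient the paper replaces: for multiplicative speckle it is a poor similarity measure, motivating stochastic distances derived from the G0 model.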
Detection of multiple damages employing best achievable eigenvectors under Bayesian inference
NASA Astrophysics Data System (ADS)
Prajapat, Kanta; Ray-Chaudhuri, Samit
2018-05-01
A novel approach is presented in this work to simultaneously localize multiple damaged elements in a structure along with the estimation of damage severity for each of the damaged elements. For detection of damaged elements, a best achievable eigenvector based formulation has been derived. To deal with noisy data, Bayesian inference is employed in the formulation, wherein the likelihood of the Bayesian algorithm is formed on the basis of errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of various possible damaged elements. Once damage locations are identified, damage severities are estimated using a Bayesian inference Markov chain Monte Carlo simulation. The efficiency of the proposed approach has been demonstrated by carrying out a numerical study involving a 12-story shear building. It has been found from this study that damage scenarios involving as low as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when 2% noise-contaminated modal data are utilized. Further, this study introduces a term, parameter impact (evaluated from the sensitivity of modal parameters to structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It has been demonstrated here that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory-scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.
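The severity-quantification step can be illustrated with a one-parameter Metropolis-Hastings toy on a 2-story shear model; the model, stiffness values, noise level, and prior below are illustrative assumptions and do not reproduce the paper's best-achievable-eigenvector formulation:

```python
import numpy as np

rng = np.random.default_rng(6)

def freqs(alpha, k1=1000.0, k2=1000.0):
    """Natural frequencies (rad/s) of a 2-storey shear model with unit floor
    masses; the first-storey stiffness is scaled by the damage factor alpha."""
    K = np.array([[alpha * k1 + k2, -k2], [-k2, k2]])
    return np.sqrt(np.linalg.eigvalsh(K))          # ascending order

alpha_true, sigma = 0.90, 0.02                     # 10% stiffness loss, 2% noise
obs = freqs(alpha_true) * (1.0 + rng.normal(0, sigma, 2))

def log_like(alpha):
    r = (obs - freqs(alpha)) / (sigma * obs)
    return -0.5 * np.sum(r ** 2)

# Metropolis-Hastings with a uniform prior on [0.5, 1.0].
samples, alpha, ll = [], 0.75, log_like(0.75)
for _ in range(20000):
    prop = alpha + rng.normal(0, 0.02)
    if 0.5 <= prop <= 1.0:
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            alpha, ll = prop, ll_prop
    samples.append(alpha)
post = np.array(samples[5000:])
print(post.mean(), post.std())   # posterior concentrated near the true factor
```

Fixing the damage location a priori, as here, collapses the sampling to a low-dimensional severity parameter, which is the efficiency gain the abstract attributes to performing localization before quantification.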
NASA Technical Reports Server (NTRS)
Kirschner, S. M.; Samii, M. V.; Broaddus, S. R.; Doll, C. E.
1988-01-01
The Preliminary Orbit Determination System (PODS) provides early orbit determination capability in the Trajectory Computation and Orbital Products System (TCOPS) for a Tracking and Data Relay Satellite System (TDRSS)-tracked spacecraft. PODS computes a set of orbit states from an a priori estimate and six tracking measurements, consisting of any combination of TDRSS range and Doppler tracking measurements. PODS uses the homotopy continuation method to solve a set of nonlinear equations, and it is particularly effective for the case when the a priori estimate is not well known. Since range and Doppler measurements produce multiple states in PODS, a screening technique selects the desired state. PODS is executed in the TCOPS environment and can directly access all operational data sets. At the completion of the preliminary orbit determination, the PODS-generated state, along with additional tracking measurements, can be directly input to the differential correction (DC) process to generate an improved state. To validate the computational and operational capabilities of PODS, tests were performed using simulated TDRSS tracking measurements for the Cosmic Background Explorer (COBE) satellite and using real TDRSS measurements for the Earth Radiation Budget Satellite (ERBS) and the Solar Mesosphere Explorer (SME) spacecraft. The effects of various measurement combinations, varying arc lengths, and levels of degradation of the a priori state vector on the PODS solutions were considered.
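The homotopy continuation idea, deforming an easy problem into the hard one while tracking the root, can be shown on a scalar equation for which plain Newton iteration from the same starting point famously cycles; this toy does not reproduce the PODS formulation:

```python
import numpy as np

def F(x):  return x**3 - 2.0 * x + 2.0
def dF(x): return 3.0 * x**2 - 2.0

# Newton's method from x0 = 0 cycles (0 -> 1 -> 0) on this cubic, but the
# convex homotopy H(x, t) = t*F(x) + (1 - t)*(x - x0) starts from the trivial
# root of H(., 0) and tracks it continuously to a root of F as t -> 1.
x, x0 = 0.0, 0.0
for t in np.linspace(0.01, 1.0, 100):
    for _ in range(20):                     # Newton correction at fixed t
        H = t * F(x) + (1.0 - t) * (x - x0)
        dH = t * dF(x) + (1.0 - t)
        step = H / dH
        x -= step
        if abs(step) < 1e-12:
            break
print(x, F(x))   # real root near -1.76929, residual ~ 0
```

The robustness to a poor starting point mirrors the abstract's observation that the method is particularly effective when the a priori estimate is not well known.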
Troposphere gradients from the ECMWF in VLBI analysis
NASA Astrophysics Data System (ADS)
Boehm, Johannes; Schuh, Harald
2007-06-01
Modeling path delays in the neutral atmosphere for the analysis of Very Long Baseline Interferometry (VLBI) observations has been improved significantly in recent years by the use of elevation-dependent mapping functions based on data from numerical weather models. In this paper, we present a fast way of extracting both hydrostatic and wet linear horizontal gradients for the troposphere from data of the European Centre for Medium-range Weather Forecasts (ECMWF) model, as realized at the Vienna University of Technology on a routine basis for all stations of the International GNSS (Global Navigation Satellite Systems) Service (IGS) and the International VLBI Service for Geodesy and Astrometry (IVS). This approach uses only information about the refractivity gradients at the site vertical, and no information from the line of sight. VLBI analysis of the CONT02 and CONT05 campaigns, as well as of all IVS-R1 and IVS-R4 sessions in the first half of 2006, shows that fixing these a priori gradients improves the repeatability for 74% (40 out of 54) of the VLBI baseline lengths compared to fixing zero or constant a priori gradients, and improves the repeatability for the majority of baselines compared to estimating 24-h offsets for the gradients. Only if 6-h offsets are estimated do the baseline length repeatabilities improve significantly, no matter which a priori gradients are used.
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and an exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and a freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
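A plausible sketch of the Arrhenius-to-exponential conversion mentioned above: matching the two models' log-rate slopes at a reference temperature gives c = Ea/(R·Tref²). This slope-matching relation is an assumption here; the paper's exact formula (and its Wolfram Demonstration) should be consulted for the authoritative version.

```python
R = 8.314  # J/(mol*K), universal gas constant

def arrhenius_to_c(Ea_kJ_per_mol, T_ref_C):
    """Convert the Arrhenius energy of activation Ea to the exponential
    model's c parameter by matching slopes at T_ref: c = Ea / (R * Tref_K^2)."""
    T_ref_K = T_ref_C + 273.15
    return Ea_kJ_per_mol * 1000.0 / (R * T_ref_K**2)

def c_to_arrhenius(c, T_ref_C):
    """Inverse conversion, returning Ea in kJ/mol."""
    T_ref_K = T_ref_C + 273.15
    return c * R * T_ref_K**2 / 1000.0

# Example: Ea = 100 kJ/mol at a 100 degC reference temperature.
c = arrhenius_to_c(100.0, 100.0)
print(f"c = {c:.4f} per degC")
print(f"round trip Ea = {c_to_arrhenius(c, 100.0):.1f} kJ/mol")
```

The round trip is exact by construction, which mirrors the "or vice versa" claim in the abstract.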
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
Extracting remanent magnetization from magnetic data inversion
NASA Astrophysics Data System (ADS)
Liu, S.; Fedi, M.; Baniamerian, J.; Hu, X.
2017-12-01
Remanent magnetization is an important vector property of rock and ore magnetism; it records the intensity and direction of the primary geomagnetic field in past geological periods and hence provides critical evidence of tectonic movement and sedimentary evolution. We extract the remanence information from the distributions of the inverted magnetization vector. First, the direction of the total magnetization vector is estimated from the reduced-to-pole anomaly (max-min algorithm) and from its correlations with other magnitude magnetic transforms, such as the magnitude magnetic anomaly and the normalized source strength. We then invert the data for the magnetization intensity, and finally the intensity and direction of the remanent magnetization are separated from the total magnetization vector with a generalized formula of the apparent susceptibility, based on a priori information on the Koenigsberger ratio. Our approach is used to investigate targeted resources and geologic processes in mining areas of China.
Self-constrained inversion of potential fields
NASA Astrophysics Data System (ADS)
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate, through the analysis of the gravity or magnetic field, some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges, and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential-field-based constraints, for example, the structural index, source boundaries, and others, are usually enough to obtain substantial improvement in the density and magnetization models.
An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter
NASA Astrophysics Data System (ADS)
Chang, M.; Kang, Z.
2017-09-01
Based on the framework of ORB-SLAM, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which the a priori information matrix and information vector are calculated; the motion update of the multi-feature extended information filter is then realized. For the point cloud formed from the depth image, the ICP algorithm is used to extract point features in the scene and build an observation model, while the a posteriori information matrix and information vector are calculated, weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize autonomous positioning in real time in unknown indoor environments. Finally, lidar was used to acquire scene data in order to assess the positioning accuracy of the method put forward in this paper.
In-Flight System Identification
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1998-01-01
A method is proposed and studied whereby the system identification cycle consisting of experiment design and data analysis can be repeatedly implemented aboard a test aircraft in real time. This adaptive in-flight system identification scheme has many advantages, including increased flight test efficiency, adaptability to dynamic characteristics that are imperfectly known a priori, in-flight improvement of data quality through iterative input design, and immediate feedback of the quality of flight test results. The technique uses equation error in the frequency domain with a recursive Fourier transform for the real time data analysis, and simple design methods employing square wave input forms to design the test inputs in flight. Simulation examples are used to demonstrate that the technique produces increasingly accurate model parameter estimates resulting from sequentially designed and implemented flight test maneuvers. The method has reasonable computational requirements, and could be implemented aboard an aircraft in real time.
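The recursive Fourier transform at the heart of the real-time analysis can be sketched as a running sum updated one sample at a time, so spectra are available as flight data arrive. The sampling rate, analysis band, and test signal below are assumed for illustration and are not from the paper.

```python
import numpy as np

# Running (recursive) Fourier transform: the finite Fourier integral at each
# analysis frequency is accumulated sample by sample.
dt = 0.02                                   # 50 Hz sampling (assumed)
freqs = np.linspace(0.1, 2.0, 20)           # analysis band, Hz (assumed)
omega = 2 * np.pi * freqs
X = np.zeros(len(freqs), dtype=complex)     # running Fourier sums

def update(X, sample, t):
    """Add one new time-domain sample to every running Fourier sum."""
    return X + sample * np.exp(-1j * omega * t) * dt

# Simulated signal: a 0.5 Hz sine should dominate the accumulated spectrum.
for i in range(2500):                        # 50 s of data
    t = i * dt
    X = update(X, np.sin(2 * np.pi * 0.5 * t), t)

peak = freqs[np.argmax(np.abs(X))]
print(f"spectral peak at {peak:.1f} Hz")
```

Each update costs one complex multiply-add per frequency, which is what makes equation-error estimation in the frequency domain feasible in real time onboard.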
Retrieval of the Nitrous Oxide Profiles using the AIRS Data in China
NASA Astrophysics Data System (ADS)
Chen, L.; Ma, P.; Tao, J.; Li, X.; Zhang, Y.; Wang, Z.; Li, S.; Xiong, X.
2014-12-01
As an important greenhouse gas and ozone-depleting substance, nitrous oxide (N2O) has a 100-year global warming potential almost 300 times that of carbon dioxide. However, there are still large uncertainties about quantitative N2O emissions and their feedback to climate change, due to the coarse ground-based network. This study attempts to retrieve N2O profiles from Atmospheric InfraRed Sounder (AIRS) data. First, the sensitivities to atmospheric temperature and humidity profiles and to surface parameters in two spectral absorption bands were simulated using a radiative transfer model. Second, an eigenvector regression algorithm is used to construct the a priori state. Third, an optimal estimation method was developed based on the band selection of N2O. Finally, we compared our retrieved AIRS profiles with HIPPO data and analyzed the seasonal and annual N2O distribution in China from 2004 to 2013.
Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun
2018-03-29
This study proposes two models for precise time transfer using BeiDou Navigation Satellite System triple-frequency signals: an ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and an ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time-frequency reference) and a long baseline is used for performance assessment. The results show that the IF-PPP1 and IF-PPP2 models can both be used for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals, and the accuracy and stability of the time transfer are the same in both cases, except for a constant systematic bias caused by the hardware delays of the different frequencies, which can be removed by parameter estimation and prediction with long-duration datasets or by a priori calibration.
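The dual-frequency ionosphere-free combination underlying models such as IF-PPP1 can be sketched as follows: the first-order ionospheric delay scales with 1/f², so a frequency-weighted difference of two pseudoranges cancels it. The BDS-2 B1/B2/B3 carrier frequencies below are quoted from public interface documentation, and the synthetic ranges are illustrative only.

```python
# BDS-2 carrier frequencies (Hz): B1I, B2I, B3I.
F_B1, F_B2, F_B3 = 1561.098e6, 1207.140e6, 1268.520e6

def iono_free(p1, p2, f1, f2):
    """Ionosphere-free combination of two pseudoranges (metres)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Synthetic check: a geometric range plus a 1/f^2 ionospheric delay.
rho, iono_at_B1 = 22.0e6, 5.0                 # metres (illustrative)
p_b1 = rho + iono_at_B1
p_b2 = rho + iono_at_B1 * (F_B1 / F_B2) ** 2
p_b3 = rho + iono_at_B1 * (F_B1 / F_B3) ** 2

print(iono_free(p_b1, p_b2, F_B1, F_B2) - rho)   # first-order iono removed
print(iono_free(p_b1, p_b3, F_B1, F_B3) - rho)
```

The residual constant inter-frequency hardware delay mentioned in the abstract would survive this combination, which is why the paper removes it by estimation or a priori calibration.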
Probabilistic and deterministic aspects of linear estimation in geodesy. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Dermanis, A.
1976-01-01
Recent advances in observational techniques related to geodetic work (VLBI, laser ranging) make it imperative that more consideration be given to modeling problems. Uncertainties in the effects of atmospheric refraction, polar motion, and precession-nutation parameters cannot be dispensed with in the context of centimeter-level geodesy. Even physical processes that have previously been altogether neglected (station motions) must now be taken into consideration. The problem of modeling functions of time or space, or at least their values at observation points (epochs), is explored. When the nature of the function to be modeled is unknown, the need to include only a limited number of terms and to decide a priori upon a specific form may result in a representation which fails to sufficiently approximate the unknown function. An alternative approach of increasing application is the modeling of unknown functions as stochastic processes.
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for the MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set within the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
Because of the incompleteness and imperfection of every practically available data set, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
Added-value joint source modelling of seismic and geodetic data
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank
2013-04-01
In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only on a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed, or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at larger depths. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited number of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise, and we also weight the seismic data based on their signal-to-noise ratio.
The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic, and joint source models have already been reported, mostly without any model parameter uncertainty estimates. We show here that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g. even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
NASA Astrophysics Data System (ADS)
Booker, David; Clarke, Peter J.; Lavallée, David A.
2014-09-01
The changing distribution of surface mass (oceans, atmospheric pressure, continental water storage, groundwater, lakes, snow and ice) causes detectable changes in the shape of the solid Earth, on time scales ranging from hours to millennia. Transient changes in the Earth's shape can, regardless of cause, be readily separated from steady secular variation in surface mass loading, but other secular changes due to plate tectonics and glacial isostatic adjustment (GIA) cannot. We estimate secular station velocities from almost 11 years of high-quality combined GPS position solutions (GPS weeks 1,000-1,570) submitted as part of the first International GNSS Service reprocessing campaign. Individual station velocities are estimated as a linear fit, paying careful attention to outliers and offsets. We remove a suite of a priori GIA models, each with an associated set of plate tectonic Euler vectors estimated by us; the latter are shown to be insensitive to the a priori GIA model. From the coordinate time series residuals after removing the GIA models and corresponding plate tectonic velocities, we use mass-conserving continental basis functions to estimate surface mass loading, including the secular term. The different GIA models lead to significant differences in the estimates of loading in selected regions. Although our loading estimates are broadly comparable with independent estimates from other satellite missions, their range highlights the need for better, more robust GIA models that incorporate 3D Earth structure and accurately represent 3D surface displacements.
NASA Astrophysics Data System (ADS)
Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya
2011-06-01
A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for the aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether the parameters of this model can be well identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov chain Monte Carlo methods to gauge parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information, joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained a priori, if possible.
Second, through analysis of field data consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer, we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights that still produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that: (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of the K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, the K values obtained do not appear to correlate well with porosity data from previous studies.
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
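Scoring an input's parameter identifiability with the Fisher information, as in this dissertation, can be sketched for a stand-in model. The first-order RC circuit below is illustrative, not the dissertation's battery model; under i.i.d. Gaussian measurement noise, the Fisher information matrix is (1/σ²)·Σ sᵢsᵢᵀ, where sᵢ is the output sensitivity ∂y/∂θ at sample i, approximated here by finite differences.

```python
import numpy as np

def simulate(theta, u, dt=1.0):
    """Explicit-Euler voltage of a parallel RC circuit driven by current u."""
    R, C = theta
    v = np.zeros(len(u))
    for i in range(1, len(u)):
        v[i] = v[i - 1] + dt * (u[i - 1] - v[i - 1] / R) / C
    return v

def fisher_information(theta, u, sigma=0.01, eps=1e-6):
    """FIM from finite-difference output sensitivities."""
    y0 = simulate(theta, u)
    S = np.empty((len(u), len(theta)))
    for k in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tp[k] += eps
        S[:, k] = (simulate(tp, u) - y0) / eps
    return S.T @ S / sigma**2

theta = [10.0, 5.0]                           # illustrative R, C values
t = np.arange(200.0)
inputs = {"dc": np.ones(200),                 # constant current
          "square": np.sign(np.sin(2 * np.pi * t / 25))}  # richer excitation
for name, u in inputs.items():
    F = fisher_information(theta, u)
    print(name, np.linalg.det(F))             # larger det(F) suggests better identifiability
```

Input shaping then amounts to searching over candidate excitations for the one maximizing a scalarization of the FIM (determinant, trace, or smallest eigenvalue), which is the optimization the dissertation formalizes.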
Joint inversion of apparent resistivity and seismic surface and body wave data
NASA Astrophysics Data System (ADS)
Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle
2013-04-01
A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface wave dispersion curves, and P-wave travel times. The algorithm works in the case of laterally varying layered sites. Surface wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset, and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are the thickness h, S-wave velocity Vs, P-wave velocity Vp, and resistivity R of each layer. The 1D models are linked to surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to the refracted P-wave hodograms. A priori information can be included in the inversion, and spatial regularization is introduced as a set of constraints between the model parameters of adjacent models and layers. Both the a priori information and the regularization are weighted by covariance matrices. We show a comparison of individual inversions and joint inversion for a synthetic dataset that presents smooth lateral variations. Performing individual inversions, the poor sensitivity to some model parameters leads to estimation errors of up to 62.5%, whereas for joint inversion the cooperation of the different techniques reduces most of the model estimation errors to below 5%, with a few exceptions of up to 39%, an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, analysis of the results reveals unacceptable values of the Vp/Vs ratio for some layers, yielding negative Poisson's ratio values. To further improve the inversion performance, an additional constraint is added imposing Poisson's ratio in the range 0-0.5.
The final results are globally improved by the introduction of this constraint, which further reduces the maximum error to 30%. The same test was performed on field data acquired in a landslide-prone area close to the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m long profiles in roll-along mode using a 5-kg sledgehammer as source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot location and surface wave dispersion curves were extracted at 8 locations on each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same locations as the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to individual inversions. Although the models from both individual and joint inversions are consistent, the estimation error is smaller for the joint inversion, especially for the first-arrival travel times. The joint inversion exploits the different sensitivities of the methods to the model parameters and therefore mitigates solution non-uniqueness and the effects of intrinsic limitations of the individual techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.
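The Poisson's-ratio constraint discussed above follows from the standard elastic relation between ν and the Vp/Vs ratio; a minimal admissibility check for an inverted layer might look like this (the velocity values are illustrative, not from the paper's dataset):

```python
def poissons_ratio(vp, vs):
    """Standard elastic relation: nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))."""
    return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

def satisfies_constraint(vp, vs):
    """Physically admissible layer: 0 <= nu <= 0.5, i.e. Vp/Vs >= sqrt(2)."""
    nu = poissons_ratio(vp, vs)
    return 0.0 <= nu <= 0.5

print(poissons_ratio(1800.0, 400.0))       # soft sediments: nu close to 0.5
print(satisfies_constraint(500.0, 400.0))  # Vp/Vs < sqrt(2) gives negative nu
```

Rejecting (or penalizing) layers with Vp/Vs below √2 is exactly what keeps the joint inversion from producing the negative Poisson's ratios noted in the synthetic test.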
Precise orbit determination based on raw GPS measurements
NASA Astrophysics Data System (ADS)
Zehentner, Norbert; Mayer-Gürr, Torsten
2016-03-01
Precise orbit determination is an essential part of most scientific satellite missions. Highly accurate knowledge of the satellite position is used to geolocate measurements of the onboard sensors. For applications in the field of gravity field research, the positions themselves can be used as observations. In this context, kinematic orbits of low Earth orbiters (LEO) are widely used, because they do not include a priori information about the gravity field. The limiting factor for the accuracy of a gravity field derived from LEO positions is the orbit accuracy. We make use of raw Global Positioning System (GPS) observations to estimate the kinematic satellite positions. The method is based on the principles of precise point positioning. Systematic influences are reduced by modeling and correcting for all known error sources. Remaining effects, such as the ionospheric influence on the signal propagation, are either unknown or not known to a sufficient level of accuracy. These effects are modeled as unknown parameters in the estimation process. Although this reduces the redundancy in the adjustment, the resulting improvement in orbit accuracy leads to a better gravity field estimation. This paper describes our orbit determination approach and its mathematical background. Some examples of real data applications highlight the feasibility of the orbit determination method based on raw GPS measurements. Its suitability for gravity field estimation is demonstrated in a second step.
NASA Astrophysics Data System (ADS)
Luo, X.; Heck, B.; Awange, J. L.
2013-12-01
Global Navigation Satellite Systems (GNSS) are emerging as possible tools for remotely sensing high-resolution atmospheric water vapour, which improves weather forecasting through numerical weather prediction models. Nowadays, the GNSS-derived tropospheric zenith total delay (ZTD), comprising zenith dry delay (ZDD) and zenith wet delay (ZWD), is achievable with sub-centimetre accuracy. However, if no representative near-site meteorological information is available, the quality of the ZDD derived from tropospheric models is degraded, leading to inaccurate estimation of the water vapour component ZWD as the difference between ZTD and ZDD. On the basis of freely accessible regional surface meteorological data, this paper proposes a height-dependent linear correction model for a priori ZDD. By applying the ordinary least-squares estimation (OLSE), bootstrapping (BOOT), and leave-one-out cross-validation (CROS) methods, the model parameters are estimated and analysed with respect to outlier detection. The model validation is carried out using GNSS stations with near-site meteorological measurements. The results verify the efficiency of the proposed ZDD correction model, showing a significant reduction in the mean bias from several centimetres to about 5 mm. The OLSE method enables fast computation, while the CROS procedure allows for outlier detection. All three methods produce consistent results after outlier elimination, which improves the regression quality by about 20% and the model accuracy by up to 30%.
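A height-dependent linear correction fitted by OLS, with bootstrapped coefficient spreads, can be sketched in a few lines. The function name and the noiseless test data are illustrative assumptions; the study's actual regression and outlier handling may differ.

```python
import numpy as np

def fit_zdd_correction(heights, zdd_bias, n_boot=1000, seed=0):
    """Fit a height-dependent linear correction dZDD = a + b*h by ordinary
    least squares, and bootstrap the pairs (resampling stations with
    replacement) to gauge the spread of the coefficients."""
    heights = np.asarray(heights, float)
    zdd_bias = np.asarray(zdd_bias, float)
    A = np.column_stack([np.ones_like(heights), heights])
    coef, *_ = np.linalg.lstsq(A, zdd_bias, rcond=None)

    rng = np.random.default_rng(seed)
    boot = np.empty((n_boot, 2))
    for i in range(n_boot):
        idx = rng.integers(0, len(heights), len(heights))
        boot[i], *_ = np.linalg.lstsq(A[idx], zdd_bias[idx], rcond=None)
    return coef, boot.std(axis=0)       # (a, b) and bootstrap std. errors
```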
Fechter, Dominik; Storch, Ilse
2014-01-01
Due to legislative protection, many species, including large carnivores, are currently recolonizing Europe. To address the impending human-wildlife conflicts in advance, predictive habitat models can be used to determine potentially suitable habitat and areas likely to be recolonized. As field data are often limited, quantitative rule-based models or the extrapolation of results from other studies are often the techniques of choice. Using the wolf (Canis lupus) in Germany as a model for habitat generalists, we developed a habitat model based on the location and extent of twelve existing wolf home ranges in Eastern Germany, current knowledge of wolf biology, different habitat modeling techniques, and various input data to analyze ten different input parameter sets and address the following questions: (1) How do a priori assumptions and different input data or habitat modeling techniques affect the abundance and distribution of potentially suitable wolf habitat and the number of wolf packs in Germany? (2) In a synthesis across input parameter sets, which areas are predicted to be most suitable? (3) Are existing wolf pack home ranges in Eastern Germany consistent with current knowledge of wolf biology and habitat relationships? Our results indicate that the amount of potentially suitable habitat estimated varies greatly depending on which assumptions about habitat relationships are applied in the model and which modeling techniques are chosen. Depending on a priori assumptions, Germany could accommodate between 154 and 1769 wolf packs. The locations of the existing wolf pack home ranges in Eastern Germany indicate that wolves are able to adapt to areas densely populated by humans but are limited to areas with low road densities. Our analysis suggests that predictive habitat maps in general should be interpreted with caution, and illustrates the risk for habitat modelers of concentrating on only one selection of habitat factors or one modeling technique. PMID:25029506
NASA Astrophysics Data System (ADS)
Blossfeld, M.; Schmidt, M.; Erdogan, E.
2016-12-01
The thermospheric neutral density plays a crucial role in the equation of motion of Earth-orbiting objects, since drag, lift, and side forces are among the largest non-gravitational perturbations acting on a satellite. Precise Orbit Determination (POD) methods can be used to estimate thermospheric density variations from precisely determined orbits. One method which provides highly accurate measurements of the satellite position is Satellite Laser Ranging (SLR). Within the POD process, scaling factors are estimated frequently. These scaling factors can be applied either to the so-called satellite-specific drag (ballistic) coefficients or to the integrated thermospheric neutral density. We present a method for analytically modeling the drag coefficient based on a few physical assumptions and key parameters. In this paper, we investigate the possibility of using SLR observations of the very low Earth-orbiting satellite ANDE-Pollux (at approximately 350 km altitude) to determine scaling factors for different a priori thermospheric density models. We perform a POD for ANDE-Pollux covering 49 days between August 2009 and September 2009, the time span containing the largest number of observations during the short lifetime of the satellite. Finally, we compare the obtained scaled thermospheric densities with respect to each other.
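The scaling-factor estimation can be illustrated with a minimal one-parameter least-squares fit: given drag accelerations predicted from an a priori density model and accelerations inferred from the SLR-based POD, the factor minimizing the residual sum of squares has a closed form. This is a sketch under strong simplifications (a single global factor, a linear relation), not the authors' POD setup.

```python
import numpy as np

def density_scale_factor(a_model, a_obs):
    """Least-squares scale factor f minimizing ||a_obs - f * a_model||^2,
    i.e. f = <a_model, a_obs> / <a_model, a_model>."""
    a_model = np.asarray(a_model, float)
    a_obs = np.asarray(a_obs, float)
    return float(a_model @ a_obs / (a_model @ a_model))
```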
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2015-06-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a genuinely challenging task, even if a probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
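The entropy-based meta-characteristic can be sketched directly: for an a posteriori location distribution evaluated on a grid, Shannon's entropy is computed from the normalized cell probabilities. This minimal illustration assumes a gridded posterior and does not reproduce the paper's probabilistic location machinery.

```python
import numpy as np

def posterior_entropy(posterior):
    """Shannon entropy H = -sum(p * ln p) of a gridded a posteriori location
    distribution; a lower H indicates a better-constrained location."""
    p = np.asarray(posterior, float).ravel()
    p = p / p.sum()                     # normalize to a probability mass
    nz = p[p > 0]                       # 0 * log 0 is taken as 0
    return float(-(nz * np.log(nz)).sum())
```

A uniform posterior over N cells gives the maximum entropy ln N, while a posterior concentrated in one cell gives 0.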
NASA Astrophysics Data System (ADS)
Corbin, A. E.; Timmermans, J.; Hauser, L.; Bodegom, P. V.; Soudzilovskaia, N. A.
2017-12-01
There is a growing demand for accurate land surface parameterization from remote sensing (RS) observations. This demand has not been satisfied, because most estimation schemes 1) apply a single-sensor, single-scale approach, and 2) require specific key variables to be `guessed', owing to the observational information required to accurately retrieve the parameters of interest. Consequently, many schemes assume specific variables to be constant or absent, which introduces additional uncertainty. In this context, the MULTIscale SENTINEL land surface information retrieval Platform (MULTIPLY) was created. MULTIPLY couples a variety of RS sources with Radiative Transfer Models (RTM) over varying spectral ranges, using data assimilation to estimate geophysical parameters. In addition, MULTIPLY uses prior information about the land surface to constrain the retrieval problem. This research aims to improve the retrieval of plant biophysical parameters through the use of priors on biophysical parameters/plant traits. Of particular interest are traits (physical, morphological, or chemical) affecting the individual performance and fitness of species. Plant traits that can be retrieved via RS and RTMs include leaf pigments, leaf water, LAI, phenols, C/N, etc. In-situ data for plant traits retrievable via RS techniques were collected for a meta-analysis from databases such as TRY, Ecosis, and individual collaborators. Of particular interest are the following traits: chlorophyll, carotenoids, anthocyanins, phenols, leaf water, and LAI. ANOVA statistics were generated for each trait according to species, plant functional group (such as evergreens, grasses, etc.), and the trait itself. Afterwards, traits were also compared using covariance matrices. Using these as priors, MULTIPLY was used to retrieve several plant traits at two validation sites, in the Netherlands (Speulderbos) and in Finland (Sodankylä).
Initial comparisons show significantly improved results over retrievals that do not use a priori information.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali
2015-04-01
Fast-growing urbanization, industrialization, and military developments increase the risks to the human environment and ecology. This is evident from several past fatal incidents, for instance, the Chernobyl nuclear accident (Ukraine), the Bhopal gas leak (India), and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat of and exposure to hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. It requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to a lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability, and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and the dissipative nature of the concentration field. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within a Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements.
It is based on the adjoint representation of the source-receptor relationship and the use of a weight function that encodes a priori information about the unknown releases as apparent to the monitoring network. The properties of the weight function provide optimal data resolution and model resolution for the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994), and the Fusion Field Trials (2007). In the point-release experiments, the source parameters are mostly retrieved close to the true source parameters with minimal error. Primarily, the proposed technique overcomes two major difficulties in source reconstruction: (i) the initialization of the source parameters, as required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge of measurement and background errors, as required by Bayesian inference based techniques, which must be hypothetically assumed in the absence of prior knowledge.
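A minimal sketch of an adjoint-based retrieval with a weight function is the weighted minimum-norm solution below, in which a diagonal weight encodes how visible each source cell is to the monitoring network. The paper's exact weight construction is not reproduced; the function name and the diagonal form are illustrative assumptions.

```python
import numpy as np

def weighted_minimum_norm_source(A, mu, w):
    """Weighted minimum-norm source estimate s = W A^T (A W A^T)^{-1} mu,
    where A maps sources to measurements (adjoint sensitivities), mu are
    the measured concentrations, and the weight vector w expresses a priori
    visibility of each source cell to the network."""
    W = np.diag(np.asarray(w, float))
    G = A @ W @ A.T
    return W @ A.T @ np.linalg.solve(G, np.asarray(mu, float))
```

By construction the estimate reproduces the measurements exactly (A s = mu) while distributing the source according to the weights.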
Incremental cost effectiveness evaluation in clinical research.
Krummenauer, Frank; Landwehr, I
2005-01-28
The health economic evaluation of therapeutic and diagnostic strategies is of increasing importance in clinical research. Clinical trialists therefore have to address health economic aspects more frequently. However, whereas they are quite familiar with classical effect measures in clinical trials, the corresponding parameters in the health economic evaluation of therapeutic and diagnostic procedures are still less familiar. The concepts of incremental cost effectiveness ratios (ICERs) and incremental net health benefit (INHB) are illustrated and contrasted using the cost effectiveness evaluation of cataract surgery with monofocal and multifocal intraocular lenses. ICERs relate the costs of a treatment to its clinical benefit in terms of a ratio (indexed as Euro per clinical benefit unit). ICERs can therefore be directly compared to a pre-specified willingness-to-pay (WTP) benchmark, which represents the maximum cost health insurers would invest to achieve one clinical benefit unit. INHBs estimate a treatment's net clinical benefit after accounting for its cost increase versus an established therapeutic standard. Resource allocation rules can be formulated by means of both effect measures. Both the ICER and the INHB approach enable the definition of directional resource allocation rules. The allocation decisions arising from these rules are identical as long as the willingness-to-pay benchmark is fixed in advance. Both strategies therefore crucially call for a priori determination of both the underlying clinical benefit endpoint (such as gain in vision lines after cataract surgery or gain in quality-adjusted life years) and the corresponding willingness-to-pay benchmark. The use of incremental cost effectiveness and net health benefit estimates provides a rationale for health economic allocation discussions and funding decisions.
It implies the same requirements on trial protocols as already established for clinical trials, that is, the a priori definition of the primary hypothesis (formulated as an allocation rule involving a pre-specified willingness-to-pay benchmark) and the primary clinical benefit endpoint (as a rationale for effectiveness evaluation).
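The two effect measures and the equivalence of their allocation rules can be made concrete in a few lines; the numbers in the usage note are hypothetical.

```python
def icer(delta_cost, delta_effect):
    """Incremental cost effectiveness ratio: extra cost per extra clinical
    benefit unit (e.g. Euro per vision line gained)."""
    return delta_cost / delta_effect

def inhb(delta_cost, delta_effect, wtp):
    """Incremental net health benefit at willingness-to-pay benchmark wtp:
    adopt the new treatment if INHB > 0, equivalently if ICER < wtp."""
    return delta_effect - delta_cost / wtp
```

For example, with an incremental cost of 600 Euro and an incremental effect of 0.5 vision lines at a WTP of 2000 Euro per line, ICER = 1200 < 2000 and INHB = 0.2 > 0: both rules give the same adoption decision, as stated above.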
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radiotracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling, and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including the requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height, and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves is used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of the evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. Alongside the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetic analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs.
The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region-of-interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
Estimating Allee dynamics before they can be observed: polar bears as a case study.
Molnár, Péter K; Lewis, Mark A; Derocher, Andrew E
2014-01-01
Allee effects are an important component in the population dynamics of numerous species. Accounting for these Allee effects in population viability analyses generally requires estimates of low-density population growth rates, but such data are unavailable for most species and particularly difficult to obtain for large mammals. Here, we present a mechanistic modeling framework that allows estimating the expected low-density growth rates under a mate-finding Allee effect before the Allee effect occurs or can be observed. The approach relies on representing the mechanisms causing the Allee effect in a process-based model, which can be parameterized and validated from data on the mechanisms rather than data on population growth. We illustrate the approach using polar bears (Ursus maritimus), and estimate their expected low-density growth by linking a mating dynamics model to a matrix projection model. The Allee threshold, defined as the population density below which growth becomes negative, is shown to depend on age-structure, sex ratio, and the life history parameters determining reproduction and survival. The Allee threshold is thus both density- and frequency-dependent. Sensitivity analyses of the Allee threshold show that different combinations of the parameters determining reproduction and survival can lead to differing Allee thresholds, even if these differing combinations imply the same stable-stage population growth rate. The approach further shows how mate-limitation can induce long transient dynamics, even in populations that eventually grow to carrying capacity. Applying the models to the overharvested low-density polar bear population of Viscount Melville Sound, Canada, shows that a mate-finding Allee effect is a plausible mechanism for slow recovery of this population. 
Our approach is generalizable to any mating system and life cycle, and could aid proactive management and conservation strategies, for example, by providing a priori estimates of minimum conservation targets for rare species or minimum eradication targets for pests and invasive species. PMID:24427306
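The density- and frequency-dependence of the Allee threshold can be sketched with a toy two-stage projection matrix whose fertility is discounted by a mate-finding probability; the threshold is then the density at which the dominant eigenvalue crosses 1. The saturating form p(N) = N/(N + mate_scale) and all parameter values are illustrative assumptions, not the polar bear parameterization of the paper.

```python
import numpy as np

def growth_rate(density, fertility, survival, mate_scale):
    """Dominant eigenvalue of a 2-stage projection matrix whose fertility
    term is discounted by a mate-finding probability
    p(N) = N / (N + mate_scale) (hypothetical functional form)."""
    p_mate = density / (density + mate_scale)
    M = np.array([[0.0, fertility * p_mate],
                  [survival[0], survival[1]]])
    return max(abs(np.linalg.eigvals(M)))

def allee_threshold(fertility, survival, mate_scale, lo=1e-6, hi=1e6):
    """Bisection for the density at which growth switches sign (lambda = 1);
    growth_rate is monotone increasing in density here."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if growth_rate(mid, fertility, survival, mate_scale) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For fertility 1.5 and survivals (0.5, 0.8), lambda = 1 requires p(N) = 4/15, which the characteristic polynomial gives in closed form, so the bisection result can be checked analytically.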
Aquifer Hydrogeologic Layer Zonation at the Hanford Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savelieva-Trofimova, Elena A.; Kanevski, Mikhail; Timonin, V.
2003-09-10
Sedimentary aquifer layers are characterized by spatial variability of hydraulic properties. Nevertheless, zones with similar values of hydraulic parameters (parameter zones) can be distinguished. This parameter zonation approach is an alternative to the analysis of spatial variation of the continuous hydraulic parameters. The parameter zonation approach is primarily motivated by the lack of measurements that would be needed for direct spatial modeling of the hydraulic properties. The current work is devoted to the problem of zonation of the Hanford formation, the uppermost sedimentary aquifer unit (U1) included in hydrogeologic models at the Hanford site. U1 is characterized by 5 zones with different hydraulic properties. Each sampled location is ascribed to a parameter zone by an expert. This initial classification is accompanied by a measure of quality (also indicated by an expert) that reflects the level of classification confidence. In the current study, the conceptual zonation map developed by an expert geologist was used as an a priori model. The parameter zonation problem was formulated as a multiclass classification task. Different geostatistical and machine learning algorithms were adapted and applied to solve this problem, including indicator kriging, conditional simulations, neural networks of different architectures, and support vector machines. All methods were trained using additional soft information based on expert estimates. Regularization methods were used to overcome possible overfitting. The zonation problem was complicated by the scarcity of samples for some zones (classes) and by the spatial non-stationarity of the data. Special approaches were developed to overcome these complications. The comparison of different methods was performed using qualitative and quantitative statistical methods and image analysis.
We examined the correspondence of the results with the geologically based interpretation, including the reproduction of the spatial orientation of the different classes and the spatial correlation structure of the classes. The uncertainty of the classification task was examined using both a probabilistic interpretation of the estimators and the results of a set of stochastic realizations. Characterization of the classification uncertainty is the main advantage of the proposed methods.
Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P
2015-01-01
The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data may produce inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore, we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but distinctly different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would not have been detected without using the BBMM.
Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution. We present two case studies that demonstrate two fundamentally different ways of applying the algorithm to estimate the spatial distribution of average relative movement speed and to interpret it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Movement parameters derived from the BBMM can therefore provide a powerful method for movement ecology research.
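The core quantity in any BBMM-based computation is the position distribution between consecutive fixes: a Gaussian whose mean interpolates the two locations and whose variance combines the Brownian motion variance with the telemetry error. A minimal sketch following the standard BBMM formulation (variable names and the single shared error variance are our simplifications):

```python
import numpy as np

def brownian_bridge(p1, t1, p2, t2, t, sigma2_m, delta2=0.0):
    """Mean and variance of the BBMM position distribution at time t between
    fixes p1 (at t1) and p2 (at t2). sigma2_m is the Brownian motion
    (diffusion) variance parameter; delta2 is the telemetry error variance,
    here assumed equal at both fixes."""
    a = (t - t1) / (t2 - t1)                       # fractional time along the bridge
    mean = (1 - a) * np.asarray(p1, float) + a * np.asarray(p2, float)
    var = (t2 - t1) * a * (1 - a) * sigma2_m \
        + (1 - a) ** 2 * delta2 + a ** 2 * delta2  # bridge + location error
    return mean, var
```

The variance vanishes at the fixes (when delta2 = 0) and peaks midway, which is what concentrates the utilization distribution along well-sampled parts of the path.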
An Evidence-Based Systematic Review on Cognitive Interventions for Individuals with Dementia
ERIC Educational Resources Information Center
Hopper, Tammy; Bourgeois, Michelle; Pimentel, Jane; Qualls, Constance Dean; Hickey, Ellen; Frymark, Tobi; Schooling, Tracy
2013-01-01
Purpose: To evaluate the current state of research evidence related to cognitive interventions for individuals with Alzheimer's disease or related dementias. Method: A systematic search of the literature was conducted across 27 electronic databases based on a set of a priori questions, inclusion/exclusion criteria, and search parameters. Studies…
Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises-- the resulting set of parameters such as affinity constants (or the distribution of them) is found to depend on pH, ionic stren...
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1977-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well-known regression equations that provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares Stokes'-function solution when the conventional solution uses properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
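The regression-equation view of collocation can be sketched as follows: with cross-covariance C_st between signal and data, data covariance C_tt, and noise covariance R, the conditional mean of the anomalies given the data is a single matrix expression. This is a generic sketch of the estimator, not the paper's specific gravity-model setup.

```python
import numpy as np

def collocation_estimate(C_st, C_tt, R, data):
    """Least squares collocation / conditional-mean estimate of the signal
    given correlated, noisy observations:
        s_hat = C_st (C_tt + R)^{-1} t.
    (The posterior covariance C_ss - C_st (C_tt + R)^{-1} C_ts would also
    need the signal covariance C_ss; only the mean is returned here.)"""
    K = np.linalg.solve((C_tt + R).T, C_st.T).T   # K = C_st (C_tt + R)^{-1}
    return K @ np.asarray(data, float)
```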
Double absorbing boundaries for finite-difference time-domain electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu
We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.
Time-lapse Inversion of Electrical Resistivity Data
NASA Astrophysics Data System (ADS)
Nguyen, F.; Kemna, A.
2005-12-01
Time-lapse geophysical measurements (also known as monitoring, repeat, or multi-frame surveys) now play a critical role in monitoring, non-destructively, changes induced by human activity, such as reservoir compaction, and in studying natural processes, such as flow and transport in porous media. To invert such data sets into time-varying subsurface properties, several strategies are found in different engineering and scientific fields (e.g., in biomedical, process tomography, or geophysical applications). For time-lapse surveys, the data sets and the models at each time frame are closely related to their "neighbors", provided the process does not induce chaotic or very strong variations. Therefore, the information contained in the different frames can be used to constrain the inversion of the others. A first strategy consists of imposing constraints on the model based on a prior estimate, on a priori spatiotemporal or temporal behavior (arbitrary or based on a law describing the monitored process), on the restriction of changes to certain areas, or on data-change reproducibility. A second strategy aims to invert the model changes directly, with an objective function that penalizes models whose spatial, temporal, or spatiotemporal behavior differs from a prior assumption or from an a priori computed model. The incorporation of time-lapse a priori information, determined from the data sets or assumed, has been shown to significantly improve the resolving capability of the inversion, mainly by removing artifacts. However, these methods have not been systematically compared. In this paper, we focus on Tikhonov-like inversion approaches for electrical tomography imaging to evaluate the capability of the different existing strategies and to propose new ones. To evaluate the bias inevitably introduced by time-lapse regularization, we quantified the relative contribution of the different approaches to the resolving power of the method.
Furthermore, we incorporated different noise levels and types (random and/or systematic) to determine the strategies' ability to cope with real data. Introducing additional regularization terms also yields more regularization parameters to compute. Since this is a difficult and computationally costly task, we propose that the temporal regularization should be proportional to the velocity of the monitored process. To achieve these objectives, we tested the different methods using synthetic models and experimental data, taking noise and error propagation into account. Our study shows that the choice of the inversion strategy depends strongly on the nature and magnitude of the noise, whereas the choice of the regularization term strongly influences the resulting image according to the a priori assumption. This study was developed under the scope of the European project ALERT (GOCE-CT-2004-505329).
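The second strategy above, penalizing deviations from the previous frame within a Tikhonov scheme, can be sketched for a linear problem: the frame-k objective ||d - Gm||^2 + alpha ||Lm||^2 + beta ||m - m_prev||^2 leads to a modified normal system. Variable names and the linear forward operator are simplifying assumptions; ERT is of course nonlinear and would be solved iteratively.

```python
import numpy as np

def timelapse_step(G, d, m_prev, alpha, beta, L=None):
    """One time-lapse frame of a Tikhonov-style inversion: spatial smoothing
    (weight alpha, operator L) plus a temporal penalty beta*||m - m_prev||^2
    pulling the solution toward the previous frame. Solves
    (G^T G + alpha L^T L + beta I) m = G^T d + beta m_prev."""
    n = G.shape[1]
    if L is None:
        L = np.eye(n)
    A = G.T @ G + alpha * (L.T @ L) + beta * np.eye(n)
    b = G.T @ np.asarray(d, float) + beta * np.asarray(m_prev, float)
    return np.linalg.solve(A, b)
```

Setting beta = 0 recovers an independent frame-by-frame inversion; increasing beta suppresses frame-to-frame artifacts at the cost of the bias discussed above.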
Dynamical behavior for the three-dimensional generalized Hasegawa-Mima equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Ruifeng; Guo Boling; Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088
2007-01-15
The long-time behavior of solutions of the three-dimensional generalized Hasegawa-Mima [Phys. Fluids 21, 87 (1978)] equations with a dissipation term is considered. The global attractor problem for the three-dimensional generalized Hasegawa-Mima equations with periodic boundary conditions is studied. Applying the method of uniform a priori estimates, the existence of a global attractor for this problem is proven, and the dimensions of the global attractor are estimated.
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs while removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs than in RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Uncertainty analysis for fluorescence tomography with Monte Carlo method
NASA Astrophysics Data System (ADS)
Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann
2011-07-01
Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object, such as a small animal, by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g., the conversion efficiency or the fluorescence lifetime) of certain fluorophores depend on physiologically interesting quantities such as the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the lifetime from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in the case of iterative algorithms with a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise, and a priori errors. Thus, a Markov chain Monte Carlo (MCMC) method was used to account for all these uncertainty factors, exploiting the Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and a constant lifetime within a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is lower than that of the concentration by a factor of approximately 10. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging alone. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case, and a more detailed analysis remains to be done to clarify whether the findings can be generalized.
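The MCMC machinery can be illustrated on a deliberately tiny problem. The sketch below samples the posterior of a single scalar parameter with a random-walk Metropolis chain and reads the parameter uncertainty off the spread of the samples. The linear forward model f(theta) = 2*theta is an invented stand-in for the (much more expensive) diffusion forward solver used in the paper; all numbers are illustrative.

```python
import numpy as np

def metropolis(data, sigma_noise, prior_mean, prior_std, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)

    def log_post(t):
        # Gaussian likelihood around the toy forward model f(t) = 2 t,
        # plus a Gaussian prior on t
        misfit = np.sum((data - 2.0 * t) ** 2) / (2.0 * sigma_noise ** 2)
        prior = (t - prior_mean) ** 2 / (2.0 * prior_std ** 2)
        return -(misfit + prior)

    theta, lp = prior_mean, log_post(prior_mean)
    samples = []
    for _ in range(n_steps):
        prop = theta + 0.1 * rng.standard_normal()   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[n_steps // 2:])          # discard burn-in half

rng = np.random.default_rng(1)
true_theta = 1.5
data = 2.0 * true_theta + 0.1 * rng.standard_normal(50)   # noisy measurements
samples = metropolis(data, sigma_noise=0.1, prior_mean=0.0, prior_std=10.0)
est, unc = samples.mean(), samples.std()   # posterior mean and uncertainty
```

The attraction of this approach, as in the paper, is that `unc` comes directly from the posterior samples without linearizing the forward model.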
On the orthogonalised reverse path method for nonlinear system identification
NASA Astrophysics Data System (ADS)
Muhamad, P.; Sims, N. D.; Worden, K.
2012-09-01
The problem of obtaining the underlying linear dynamic compliance matrix in the presence of nonlinearities in a general multi-degree-of-freedom (MDOF) system can be solved using the conditioned reverse path (CRP) method introduced by Richards and Singh (1998 Journal of Sound and Vibration, 213(4): pp. 673-708). The CRP method also provides a means of identifying the coefficients of any nonlinear terms which can be specified a priori in the candidate equations of motion. Although the CRP has proved extremely useful in the context of nonlinear system identification, it has a number of small issues associated with it. One of these issues is the fact that the nonlinear coefficients are actually returned in the form of spectra which need to be averaged over frequency in order to generate parameter estimates. The parameter spectra are typically polluted by artefacts from the identification of the underlying linear system which manifest themselves at the resonance and anti-resonance frequencies. A further problem is associated with the fact that the parameter estimates are extracted in a recursive fashion which leads to an accumulation of errors. The first minor objective of this paper is to suggest ways to alleviate these problems without major modification to the algorithm. The results are demonstrated on numerically-simulated responses from MDOF systems. In the second part of the paper, a more radical suggestion is made, to replace the conditioned spectral analysis (which is the basis of the CRP method) with an alternative time domain decorrelation method. The suggested approach - the orthogonalised reverse path (ORP) method - is illustrated here using data from simulated single-degree-of-freedom (SDOF) and MDOF systems.
NASA Astrophysics Data System (ADS)
Xu, Zhuocan; Mace, Jay; Avalone, Linnea; Wang, Zhien
2015-04-01
The extreme variability of ice particle habits in precipitating clouds affects our understanding of these cloud systems in every aspect (e.g., radiative transfer, dynamics, precipitation rate) and contributes substantially to the uncertainties in model representations of the related processes. Ice particle mass-dimensional power law relationships, M = a*D^b, are commonly assumed in models and retrieval algorithms, while very little is known about the uncertainties of these M-D parameters in real-world situations. In this study, we apply Optimal Estimation (OE) methodology to infer the ice particle mass-dimensional relationship from ice particle size distributions and bulk water contents independently measured on board the University of Wyoming King Air during the Colorado Airborne Multi-Phase Cloud Study (CAMPS). We also utilize W-band radar reflectivity obtained on the same platform (King Air), offering a further constraint on this ill-posed problem (Heymsfield et al. 2010). In addition to the values of the retrieved M-D parameters, the associated uncertainties are conveniently acquired in the OE framework, within the limitations of the assumed Gaussian statistics. We find, given the constraints provided by the bulk water measurement and in situ radar reflectivity, that the relative uncertainty of the mass-dimensional power law prefactor (a) is approximately 80% and the relative uncertainty of the exponent (b) is 10-15%. With this level of uncertainty, the forward-model uncertainty in radar reflectivity would be on the order of 4 dB, or a factor of approximately 2.5 in ice water content. The implication of this finding is that inferences of bulk water from either remote or in situ measurements of particle spectra cannot be more certain than this when the mass-dimensional relationship is not known a priori, which is almost never the case.
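The core parameter-estimation step can be illustrated by linearizing the power law in log space, log M = log a + b log D, and fitting a straight line. This is only the simplest possible version of the retrieval: the paper's OE framework adds the bulk-water and reflectivity constraints and full Gaussian error propagation. The particle data below are synthetic, and a_true and b_true are invented values.

```python
import numpy as np

# Synthetic particles following M = a * D**b with lognormal scatter
rng = np.random.default_rng(0)
a_true, b_true = 0.005, 2.1                  # invented prefactor and exponent
D = np.logspace(-2, 0, 200)                  # particle maximum dimensions
M = a_true * D ** b_true * np.exp(0.05 * rng.standard_normal(D.size))

# Fit log M = log a + b log D; slope is b, intercept recovers a
b_est, log_a_est = np.polyfit(np.log(D), np.log(M), 1)
a_est = np.exp(log_a_est)
```

Note the asymmetry the paper reports is visible even here: the exponent (slope) is constrained far more tightly than the prefactor (intercept), because intercept errors are amplified by exponentiation.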
Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P
2015-10-01
Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAbs exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated from categorical immunohistochemistry scores, with target assumed to be localized within the interstitial space of each organ. Kinetics of target-mAb binding and of target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAbs that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of the dose-dependencies in plasma clearance, the areas under the plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.
NASA Astrophysics Data System (ADS)
Newman, A. J.; Sampson, K. M.; Wood, A. W.; Hopson, T. M.; Brekke, L. D.; Arnold, J.; Raff, D. A.; Clark, M. P.
2013-12-01
Skill in model-based hydrologic forecasting depends on the ability to estimate a watershed's initial moisture and energy conditions, on the ability to forecast future weather and climate inputs, and on the quality of the hydrologic model's representation of watershed processes. The impact of these factors on prediction skill varies regionally, seasonally, and by model. We are investigating these influences using a watershed simulation platform that spans the continental US (CONUS), encompassing a broad range of hydroclimatic variation, and that uses the current simulation models of National Weather Service streamflow forecasting operations. The first phase of this effort centered on the implementation and calibration of the SNOW-17 and Sacramento soil moisture accounting (SAC-SMA) based hydrologic modeling system for a range of watersheds. The base configuration includes 630 basins in the United States Geological Survey's Hydro-Climatic Data Network 2009 (HCDN-2009, Lins 2012) conterminous U.S. basin subset. Retrospective model forcings were derived from Daymet (http://daymet.ornl.gov/), and, where available, a priori parameter estimates were based on or compared with the operational NWS model parameters. Model calibration was accomplished with several objective, automated strategies, including the shuffled complex evolution (SCE) optimization approach developed within the NWS in the early 1990s (Duan et al. 1993). This presentation describes outcomes from this effort, including insights about measuring simulation skill and about relationships between simulation skill and model parameters, basin characteristics (climate, topography, vegetation, soils), and the quality of forcing inputs. References: Thornton, P.; Thornton, M.; Mayer, B.; Wilhelmi, N.; Wei, Y.; Devarakonda, R.; Cook, R. Daymet: Daily Surface Weather on a 1 km Grid for North America, 1980-2008; Oak Ridge National Laboratory Distributed Active Archive Center: Oak Ridge, TN, USA, 2012; Volume 10.
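Automated calibration of the kind described above needs a scalar objective to optimize. A common choice for streamflow (though the abstract does not name the specific metric used) is the Nash-Sutcliffe efficiency (NSE): 1 is a perfect simulation, 0 means the simulation is no better than the observed mean. An optimizer such as SCE would search parameter space to maximize this; the metric itself is a one-liner.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs observed streamflow."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# illustrative flows (arbitrary units)
obs = np.array([1.0, 2.0, 4.0, 3.0, 2.0])
sim = obs + 0.5            # a simulation with a constant positive bias
score = nse(sim, obs)
```

An NSE near 1 for calibrated parameters, compared against the NSE of a priori parameters, is one way to express the simulation-skill comparisons the presentation discusses.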
A priori analysis: an application to the estimate of the uncertainty in course grades
NASA Astrophysics Data System (ADS)
Lippi, G. L.
2014-07-01
A priori analysis (APA) is discussed as a tool to assess the reliability of grades in standard curricular courses. This unusual but striking application is presented when teaching the section on data treatment in a laboratory course, to illustrate the characteristics of the APA and its potential for widespread use beyond the traditional physics curriculum. The conditions necessary for this kind of analysis are discussed, the general framework is set out, and a specific example is given to illustrate its various aspects. Students are often struck by this unusual application and are more apt to remember the APA. Instructors may also benefit from some of the gathered information, as discussed in the paper.
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problems emerging in various areas of business and economics. To evaluate and compare these procedures, the modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price-forecasting accuracy.
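The basic data-perturbation idea can be sketched in a few lines: perturb the response, re-run the selection procedure, and record how often the selected model changes, giving an estimate of the uncertainty contributed by the selection step. The paper's contribution is deriving the optimal perturbation size; here `tau` is simply fixed, and the one-predictor selection rule is an invented toy procedure.

```python
import numpy as np

def select_model(X, y):
    """Toy selection procedure: pick the single best predictor by RSS."""
    rss = []
    for j in range(X.shape[1]):
        beta = np.linalg.lstsq(X[:, [j]], y, rcond=None)[0]
        rss.append(np.sum((y - X[:, [j]] @ beta) ** 2))
    return int(np.argmin(rss))

def selection_instability(X, y, tau=0.5, n_perturb=200, seed=0):
    """Fraction of perturbed datasets on which the selected model changes."""
    rng = np.random.default_rng(seed)
    base = select_model(X, y)
    flips = sum(
        select_model(X, y + tau * rng.standard_normal(y.size)) != base
        for _ in range(n_perturb)
    )
    return flips / n_perturb

rng = np.random.default_rng(42)
n = 100
X = rng.standard_normal((n, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(n)   # predictor 0 is truly best
instab = selection_instability(X, y)
```

With a strong signal the selection is stable (instability near zero); shrinking the signal or growing `tau` raises the instability, which is the quantity the adaptive method calibrates.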
TES/MLS Aura L2 Carbon Monoxide (CO) Nadir (TML2CO)
Atmospheric Science Data Center
2018-05-06
TES/MLS Aura L2 Carbon Monoxide (CO) Nadir (TML2CO): atmospheric profile estimates and associated errors derived using TES and MLS spectral radiance measurements taken at the nearest times and locations, together with a priori constraint vectors.
Application of Bayesian a Priori Distributions for Vehicles' Video Tracking Systems
NASA Astrophysics Data System (ADS)
Mazurek, Przemysław; Okarma, Krzysztof
Intelligent Transportation Systems (ITS) help to improve the quality and quantity of many car traffic parameters. Use of ITS is possible when adequate measuring infrastructure is available. Video systems allow implementation at relatively low cost, since several lanes of the road can be recorded simultaneously at a considerable distance from the camera. The tracking process can be realized through different algorithms; the most attractive are Bayesian, because they use a priori information derived from previous observations or from known limitations. Use of this information is crucial for improving the quality of tracking, especially under the difficult observability conditions that occur in video systems under the influence of smog, fog, rain, snow, and poor lighting.
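The role of a priori information in Bayesian tracking can be shown with a minimal 1-D Kalman filter: the predict step carries the prior (the previous state propagated through a constant-velocity motion model), and the update step fuses it with a noisy position measurement, weighting each by its uncertainty. Real vehicle trackers use richer state and measurement models; all numbers below are illustrative.

```python
import numpy as np

def kalman_track(measurements, q=0.01, r=1.0):
    """Track a 1-D position from noisy measurements with a CV motion model."""
    x = np.array([measurements[0], 0.0])        # state: [position, velocity]
    P = np.eye(2)                               # state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])     # constant-velocity transition
    H = np.array([[1.0, 0.0]])                  # we only measure position
    track = []
    for z in measurements:
        # predict: propagate the a priori state and grow its uncertainty
        x = F @ x
        P = F @ P @ F.T + q * np.eye(2)
        # update: blend prior and measurement by their relative uncertainties
        S = H @ P @ H.T + r
        K = P @ H.T / S                         # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

true_pos = np.arange(50, dtype=float)           # vehicle at 1 unit per step
rng = np.random.default_rng(0)
noisy = true_pos + rng.standard_normal(50)      # noisy video detections
filtered = kalman_track(noisy)
```

After a short burn-in the filtered track has a lower error than the raw detections, which is the payoff of carrying a priori information between frames, exactly the point the abstract makes for poor-visibility conditions.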
DeMars, Craig A; Auger-Méthé, Marie; Schlägel, Ulrike E; Boutin, Stan
2013-01-01
Analyses of animal movement data have primarily focused on understanding patterns of space use and the behavioural processes driving them. Here, we analyzed animal movement data to infer components of individual fitness, specifically parturition and neonate survival. We predicted that parturition and neonate loss events could be identified by sudden and marked changes in female movement patterns. Using GPS radio-telemetry data from female woodland caribou (Rangifer tarandus caribou), we developed and tested two novel movement-based methods for inferring parturition and neonate survival. The first method estimated movement thresholds indicative of parturition and neonate loss from population-level data then applied these thresholds in a moving-window analysis on individual time-series data. The second method used an individual-based approach that discriminated among three a priori models representing the movement patterns of non-parturient females, females with surviving offspring, and females losing offspring. The models assumed that step lengths (the distance between successive GPS locations) were exponentially distributed and that abrupt changes in the scale parameter of the exponential distribution were indicative of parturition and offspring loss. Both methods predicted parturition with near certainty (>97% accuracy) and produced appropriate predictions of parturition dates. Prediction of neonate survival was affected by data quality for both methods; however, when using high quality data (i.e., with few missing GPS locations), the individual-based method performed better, predicting neonate survival status with an accuracy rate of 87%. Understanding ungulate population dynamics often requires estimates of parturition and neonate survival rates. With GPS radio-collars increasingly being used in research and management of ungulates, our movement-based methods represent a viable approach for estimating rates of both parameters. PMID:24324866
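The individual-based idea, that step lengths are exponentially distributed and a calving event appears as an abrupt drop in the scale parameter, can be sketched with a single change-point search that maximizes the two-segment exponential log-likelihood. The paper's method goes further, discriminating among three a priori models; the step-length scales below are invented for illustration.

```python
import numpy as np

def exp_loglik(x):
    """Maximized log-likelihood of an exponential sample (MLE scale = mean)."""
    lam = 1.0 / np.mean(x)
    return x.size * np.log(lam) - lam * np.sum(x)

def best_changepoint(steps, min_seg=5):
    """Index k that best splits steps into two exponential regimes."""
    ks = list(range(min_seg, len(steps) - min_seg))
    scores = [exp_loglik(steps[:k]) + exp_loglik(steps[k:]) for k in ks]
    return ks[int(np.argmax(scores))]

rng = np.random.default_rng(3)
pre = rng.exponential(scale=2000.0, size=40)   # pre-calving: long steps (m)
post = rng.exponential(scale=200.0, size=40)   # post-calving: short steps
steps = np.concatenate([pre, post])
k = best_changepoint(steps)                    # estimated parturition index
```

When the scale drop is as large as the roughly tenfold movement reduction caribou show at calving, the change point is recovered almost exactly, consistent with the near-certain parturition detection reported above.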
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
NASA Astrophysics Data System (ADS)
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Unlike external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. We therefore attempt to use a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result for the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their own directions, and this characteristic is also present in their probability tomography results. We therefore use a set of rules to combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y, and ∂ΔΤ/∂z into a new result from which the a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples with and without a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied, with field-measured ΔΤ data, to an iron mine in China, where it performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for the Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).
NASA Astrophysics Data System (ADS)
Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.
2010-12-01
Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based rain gauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques used for this purpose are not optimized. Geostatistical techniques, on the other hand, ideally require at least modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be done soundly. Here, we propose a new procedure, the concurrent multiplicative-additive objective analysis scheme (CMA-OAS), for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, the within-storm variability of rainfall and the fractional coverage of rainfall are taken into account. Thus both the spatially nonuniform radar bias, given that rainfall is detected, and the bias in radar detection of rainfall are handled. The interpolation procedure of the CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive corrections of the residuals obtained from a Gaussian kernel smoother applied to the spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, the local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at ground level.
The approach treats the radar estimates as background a priori information (first guess), so that nudging toward the observations (gauges) may be relaxed smoothly to the first guess, with the relaxation shape obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms were analyzed at hourly steps over 10,663 km2. Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In principle, the approach is equally applicable to gauge-satellite estimates and to other hydrometeorological variables.
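A bare-bones successive-correction OAS, the core that CMA-OAS builds on, can be sketched in one dimension: starting from a background field (the radar first guess), residuals at the gauges are spread onto the grid with a Gaussian kernel, and the analysis is corrected iteratively with a shrinking kernel radius. The multiplicative-additive bias decomposition that distinguishes CMA-OAS is omitted, and all fields and radii below are invented.

```python
import numpy as np

def oas(grid_x, background, gauge_x, gauge_val, radii=(30.0, 15.0, 8.0)):
    """1-D objective analysis by successive correction with Gaussian kernels."""
    analysis = background.copy()
    for R in radii:                              # successive correction passes
        # residuals of the current analysis at the gauge locations
        at_gauges = np.interp(gauge_x, grid_x, analysis)
        resid = gauge_val - at_gauges
        # Gaussian-kernel weights from every gauge to every grid point
        w = np.exp(-((grid_x[:, None] - gauge_x[None, :]) ** 2) / R ** 2)
        analysis = analysis + (w @ resid) / (w.sum(axis=1) + 1e-12)
    return analysis

grid_x = np.linspace(0.0, 100.0, 101)           # analysis grid (km)
background = np.full(101, 5.0)                  # radar first guess (mm/h)
gauge_x = np.array([20.0, 50.0, 80.0])          # gauge locations
gauge_val = np.array([8.0, 12.0, 6.0])          # gauge rainfall (mm/h)
field = oas(grid_x, background, gauge_x, gauge_val)
```

Successive passes with shrinking radii reproduce the gauge values closely at the gauges while relaxing toward the first guess away from them, which is the nudging behavior described above.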
NASA Astrophysics Data System (ADS)
Belda, Santiago; Heinkelmann, Robert; Ferrándiz, José M.; Karbon, Maria; Nilsson, Tobias; Schuh, Harald
2017-10-01
Very Long Baseline Interferometry (VLBI) is the only space geodetic technique capable of measuring all the Earth orientation parameters (EOP) accurately and simultaneously. Modeling the Earth's rotational motion in space within the stringent consistency goals of the Global Geodetic Observing System (GGOS) makes VLBI observations essential for constraining the rotation theories. However, the inaccuracy of early VLBI data and the outdated products could cause non-compliance with these goals. In this paper, we perform a global VLBI analysis of sessions with different processing settings to determine a new set of empirical corrections to the precession offsets and rates, and to the amplitudes of a wide set of terms included in the IAU 2006/2000A precession-nutation theory. We discuss the results in terms of consistency, systematic errors, and physics of the Earth. We find that the largest improvements w.r.t. the values from IAU 2006/2000A precession-nutation theory are associated with the longest periods (e.g., 18.6-yr nutation). A statistical analysis of the residuals shows that the provided corrections attain an error reduction at the level of 15 μas. Additionally, including a Free Core Nutation (FCN) model into a priori Celestial Pole Offsets (CPOs) provides the lowest Weighted Root Mean Square (WRMS) of residuals. We show that the CPO estimates are quite insensitive to TRF choice, but slightly sensitive to the a priori EOP and the inclusion of different VLBI sessions. Finally, the remaining residuals reveal two apparent retrograde signals with periods of nearly 2069 and 1034 days.
Quantifying probabilities of eruptions at Mount Etna (Sicily, Italy).
NASA Astrophysics Data System (ADS)
Brancato, Alfonso
2010-05-01
One of the major goals of modern volcanology is to establish sound risk-based decision-making in land-use planning and emergency management. Volcanic hazard must be managed with reliable quantitative estimates of long- and short-term eruption forecasting, and the large number of observables involved in a volcanic process suggests that a probabilistic approach is a suitable forecasting tool. The aim of this work is to quantify probabilistic estimates of the vent location for a suitable lava flow hazard assessment at Mt. Etna volcano, through the application of the code named BET (Marzocchi et al., 2004, 2008). The BET_EF model is based on the event tree philosophy assessed by Newhall and Hoblitt (2002), further developing the concepts of vent location, epistemic uncertainties, and a fuzzy approach for monitoring measurements. A Bayesian event tree is a specialized branching graphical representation of events in which individual branches are alternative steps from a general prior event, evolving into increasingly specific subsequent states. The event tree thus attempts to display graphically all relevant possible outcomes of volcanic unrest in progressively higher levels of detail. The procedure is set up to estimate an a priori probability distribution based on theoretical knowledge, to accommodate it using past data, and to modify it further using current monitoring data. For long-term forecasting, an a priori model dealing with the present tectonic and volcanic structure of Mt. Etna is considered. The model is mainly based on past vent locations and fracture location datasets (the 20th century of the volcano's eruptive history). Considering the variation of this information through time and its relationship with the structural setting of the volcano, we are also able to define an a posteriori probability map for the next vent opening.
For short-term vent opening hazard assessment, monitoring has the leading role, primarily based on seismological and volcanological data integrated with strain, geochemical, gravimetric, and magnetic parameters. In the code, it is necessary to fix an appropriate forecasting time window. On open-conduit volcanoes such as Mt. Etna, a forecast time window of a month (as fixed in other applications worldwide) seems unduly long, because variations of the state of the volcano are expected on shorter time scales (hour, day, or week): a significant variation of a specific monitoring parameter can occur on a time scale shorter than the forecasting window. This leads us to set a week as the forecasting time window, consistent with the number of weeks during which unrest has been experienced. The short-term vent opening hazard will be estimated during an unrest phase; the test case (July 2001 eruption) includes all the monitoring parameters collected at Mt. Etna during the six months preceding the eruption. The monitoring role has been assessed by eliciting more than 50 parameters, covering seismic activity, ground deformation, geochemistry, gravity, and magnetism, distributed among the first three nodes of the procedure. The parameter values describe the activity of Mt. Etna and become more detailed through the code, particularly in the time units. The methodology allows all assumptions and thresholds to be clearly identified and provides a rational means for their revision if new data or information become available. References: Newhall, C.G. and Hoblitt, R.P., 2002: Constructing event trees for volcanic crises, Bull. Volcanol., 64, 3-20, doi:10.1007/s004450100173. Marzocchi, W., Sandri, L., Gasparini, P., Newhall, C. and Boschi, E., 2004: Quantifying probabilities of volcanic events: The example of volcanic hazard at Mount Vesuvius, J. Geophys. Res., 109, B11201, doi:10.1029/2004JB003155. Marzocchi, W., Sandri, L. and Selva, J., 2008: BET_EF: a probabilistic tool for long- and short-term eruption forecasting, Bull. Volcanol., 70, 623-632, doi:10.1007/s00445-007-0157-y.
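The "theoretical prior, then accommodate with past data" step at an event-tree node can be illustrated with a conjugate Beta-binomial update: the prior probability is encoded as a Beta distribution whose weight is an equivalent number of observations, and historical counts are then folded in. All numbers below are invented, and BET's actual formulation (Marzocchi et al., 2004, 2008) additionally propagates epistemic uncertainty and monitoring data through the tree.

```python
from math import sqrt

# Hypothetical node: P(eruption | unrest), encoded as a Beta prior
prior_mean = 0.3          # theoretical a priori probability at this node
equiv_obs = 10            # "equivalent observations" = weight given to theory
a0 = prior_mean * equiv_obs
b0 = (1.0 - prior_mean) * equiv_obs

# Accommodate the prior with a (hypothetical) historical record
eruptions, episodes = 12, 25
a = a0 + eruptions
b = b0 + (episodes - eruptions)

# Posterior mean and standard deviation of the node probability
post_mean = a / (a + b)
post_std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
```

The posterior mean lands between the theoretical prior (0.3) and the historical frequency (12/25), pulled toward the data as the record grows, which is the behavior the event-tree accommodation step is designed to produce.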
NASA Astrophysics Data System (ADS)
Humair, F.; Matasci, B.; Carrea, D.; Pedrazzini, A.; Loye, A.; Pedrozzi, G.; Nicolet, P.; Jaboyedoff, M.
2012-04-01
In numerical rockfall simulation, the runout of rockfall is highly dependent of the restitution coefficients which are one of the key parameters to estimate the energy and simulate the rebounds of the blocks during their travel. Restitution coefficients values derived from literature may however not be adapted to every rockfall area as they do not integrate some of the influencing parameters as, among others, block shape rock size, soil cover… The aim is to illustrate how real size rockfall experiment can improve the reliability of computational trajectory simulations of rockfall propagation by calibrating these latter with experiment extracted results. Experimental rockfall tests were performed in the slopes of Monte Generoso area (lat 720850/ long 84830) which is located in the canton of Ticino in south Switzerland above a highway. The field site is a forested area with a thin soil cover on a bedrock characterized by massive carbonates. The elevation ranges between 894m and 322m above see level with a slope of 35 to 40° in the upper part, 60 to 89° in the medium part and 28 to 38° in the lower part. 22 blocks with different size and shape were manually released down, imparting little or no initial velocity. The failing blocks were coloured to make the impacts easier to recognize. The paths of the failing blocks are recorded using two high speed cameras and the impacts of the blocks were sampled using dGNSS. The rockfall trajectories were analysed based on the movies. As the movies have to be referenced in x and y direction, the distance between two known point in the terrain as well as the position of the cameras were measured prior to the blocks throws. Measurements of bounce height, angular and translational velocity (as well as energy) and restitution coefficients (normal kn and tangential kt) were attempt to be deduced from the movies. First, a-priori simulations are compared with the real size experiment throw. 
Then a posteriori simulations, taking into account the results of the experimental testing, are performed and compared with the a priori simulations. 3D simulations were performed using a software package that takes into account the effect of forest cover on the block trajectories (RockyFor 3D) and another that neglects this aspect (Rotomap; geo&soft international). 2D simulation profiles (RocFall; Rocscience) were located along the block paths deduced from the 3D simulations. The preliminary results show that: (1) the high-speed movies are promising and allow us to track the blocks using video software; (2) the a priori simulations tend to overestimate the runout distance, most likely because of an underestimation of obstacles as well as the breakup of the falling rocks, which is not taken into account in the models; (3) the trajectories deduced from both the a priori simulations and the real-size experiment highlight the major influence of the channelized slope morphology on rock paths, as blocks tend to follow the flow direction. This indicates that the 2D simulations have to be performed along the line of the flow direction.
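The normal and tangential restitution coefficients central to the abstract above relate pre- and post-impact velocities at a rebound. A minimal sketch of how kn and kt could be derived from camera-tracked velocity vectors; the decomposition below is one common convention, and the experiment's exact definitions may differ:

```python
import numpy as np

def restitution_coefficients(v_in, v_out, slope_normal):
    """Estimate normal (kn) and tangential (kt) restitution coefficients
    from the block velocity just before (v_in) and after (v_out) an impact.
    slope_normal: vector normal to the local slope surface."""
    n = np.asarray(slope_normal, dtype=float)
    n /= np.linalg.norm(n)
    v_in, v_out = np.asarray(v_in, float), np.asarray(v_out, float)
    # Decompose velocities into slope-normal and tangential components
    v_in_n = np.dot(v_in, n)
    v_out_n = np.dot(v_out, n)
    v_in_t = np.linalg.norm(v_in - v_in_n * n)
    v_out_t = np.linalg.norm(v_out - v_out_n * n)
    kn = abs(v_out_n / v_in_n)                          # normal restitution
    kt = v_out_t / v_in_t if v_in_t else float("nan")   # tangential restitution
    return kn, kt

# Illustrative impact on a horizontal surface: 10 m/s downward, 6 m/s lateral
# incoming; 3 m/s upward, 5 m/s lateral outgoing.
kn, kt = restitution_coefficients([6.0, -10.0], [5.0, 3.0], [0.0, 1.0])
print(kn, kt)  # kn = 0.3, kt = 5/6
```

In practice the velocity vectors come from frame-by-frame tracking in the referenced movies, and the local slope normal from the terrain model.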
An Improved Method for Seismic Event Depth and Moment Tensor Determination: CTBT Related Application
NASA Astrophysics Data System (ADS)
Stachnik, J.; Rozhkov, M.; Baker, B.
2016-12-01
According to the Protocol to the CTBT, the International Data Center is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of a specific event. Determination of a seismic event's source mechanism and depth is part of these tasks. It is typically done through a linearized inversion of the waveforms for a complete or partial set of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. We show preliminary results using the latter approach, based on an improved software design and run on a moderately powered computer. In this development we aimed to be compliant with the different modes of the CTBT monitoring regime and to cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough for both on-demand studies and automatic processing, properly incorporate observed waveforms and any a priori uncertainties, and accurately estimate a posteriori uncertainties. The implemented HDF5-based pre-packaging of Green's functions allows much greater flexibility in utilizing different software packages and methods for computation. Further additions will allow rapid use of Instaseis/AXISEM full-waveform synthetics added to a pre-computed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates were determined for the DPRK 2009, 2013 and 2016 events and for shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions.
A recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective is used. Moment tensors for the DPRK events show isotropic percentages greater than 50%. Depth estimates for the DPRK events range from 1.0 to 1.4 km. Probabilistic uncertainty estimates on the moment tensor parameters lend robustness to the solutions.
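The grid-search strategy in the abstract above exploits the fact that synthetics are linear in the moment-tensor components, so each candidate mechanism can be scored against the observed waveforms by variance reduction. A toy sketch of that scoring loop; the Green's functions, "observed" data, and the coarse random sampling of mechanisms are all illustrative stand-ins (a real search would use the Tape & Tape discretization and precomputed GFs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 elementary Green's functions (one per independent
# moment-tensor component) sampled at 200 time points for a trial depth.
G = rng.standard_normal((200, 6))
m_true = np.array([1.0, 0.5, -0.2, 0.1, 0.0, 0.3])  # "true" source
d_obs = G @ m_true                                   # noise-free "observation"

def variance_reduction(d, syn):
    return 1.0 - np.sum((d - syn) ** 2) / np.sum(d ** 2)

# Coarse grid search: random unit-norm mechanisms, with the scalar moment
# fit per candidate by linear least squares.
best_vr, best_m = -np.inf, None
for _ in range(500):
    m = rng.standard_normal(6)
    m /= np.linalg.norm(m)
    syn = G @ m
    scale = np.dot(d_obs, syn) / np.dot(syn, syn)    # optimal scalar moment
    vr = variance_reduction(d_obs, scale * syn)
    if vr > best_vr:
        best_vr, best_m = vr, scale * m

print(best_vr)  # best variance reduction found over the candidate set
```

A probabilistic assessment, as in the abstract, would retain the full set of (mechanism, misfit) pairs rather than only the best one.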
NASA Technical Reports Server (NTRS)
Tsaoussi, Lucia S.; Koblinsky, Chester J.
1994-01-01
In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are treated as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and from the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm root mean square (RMS). When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm RMS. This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics.
Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
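The formal error covariance propagation used in the study above follows the standard rule that a linearly derived quantity h = A x has covariance A C A^T, with independent error sources contributing additively. A toy numeric sketch with illustrative matrices (not TOPEX/POSEIDON values):

```python
import numpy as np

# h = A @ x inherits covariance C_h = A @ C_x @ A.T; independent error
# sources (adjusted parameters vs. unadjusted ones such as orbit, geoid,
# tides) add their propagated covariances.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])           # linear map: parameters -> topography
C_adjusted = np.diag([4.0, 1.0])     # covariance of adjusted parameters
C_unadjusted = np.diag([9.0, 0.25])  # covariance of unadjusted (e.g. geoid) errors

C_total = A @ C_adjusted @ A.T + A @ C_unadjusted @ A.T
rms_total = np.sqrt(np.trace(C_total) / C_total.shape[0])
print(C_total)
print(rms_total)
```

Removing one source (e.g. the geoid term) from the sum isolates the time-dependent part of the error, mirroring the 11 cm vs. 3.5 cm distinction in the abstract.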
Golden beam data for proton pencil-beam scanning.
Clasie, Benjamin; Depauw, Nicolas; Fransen, Maurice; Gomà, Carles; Panahandeh, Hamid Reza; Seco, Joao; Flanz, Jacob B; Kooy, Hanne M
2012-03-07
Proton, as well as other ion, beams applied by electro-magnetic deflection in pencil-beam scanning (PBS) are minimally perturbed and thus can be quantified a priori by their fundamental interactions in a medium. This a priori quantification permits an optimal reduction of characterizing measurements on a particular PBS delivery system. The combination of a priori quantification and measurements will then suffice to fully describe the physical interactions necessary for treatment planning purposes. We consider, for proton beams, these interactions and derive a 'Golden' beam data set. The Golden beam data set quantifies the pristine Bragg peak depth-dose distribution in terms of primary, multiple Coulomb scatter, and secondary, nuclear scatter, components. The set reduces the required measurements on a PBS delivery system to the measurement of energy spread and initial phase space as a function of energy. The depth doses are described in absolute units of Gy(RBE) mm² Gp⁻¹, where Gp equals 10⁹ (giga) protons, thus providing a direct mapping from treatment planning parameters to integrated beam current. We used these Golden beam data on our PBS delivery systems and demonstrated that they yield absolute dosimetry well within clinical tolerance.
Resting State Network Estimation in Individual Subjects
Hacker, Carl D.; Laumann, Timothy O.; Szrama, Nicholas P.; Baldassarre, Antonello; Snyder, Abraham Z.
2014-01-01
Resting-state functional magnetic resonance imaging (fMRI) has been used to study brain networks associated with both normal and pathological cognitive function. The objective of this work is to reliably compute resting state network (RSN) topography in single participants. We trained a supervised classifier (multi-layer perceptron; MLP) to associate blood oxygen level dependent (BOLD) correlation maps corresponding to pre-defined seeds with specific RSN identities. Hard classification of maps obtained from a priori seeds was highly reliable across new participants. Interestingly, continuous estimates of RSN membership retained substantial residual error. This result is consistent with the view that RSNs are hierarchically organized, and therefore not fully separable into spatially independent components. After training on a priori seed-based maps, we propagated voxel-wise correlation maps through the MLP to produce estimates of RSN membership throughout the brain. The MLP generated RSN topography estimates in individuals consistent with previous studies, even in brain regions not represented in the training data. This method could be used in future studies to relate RSN topography to other measures of functional brain organization (e.g., task-evoked responses, stimulation mapping, and deficits associated with lesions) in individuals. The multi-layer perceptron was directly compared to two alternative voxel classification procedures, specifically, dual regression and linear discriminant analysis; the perceptron generated more spatially specific RSN maps than either alternative. PMID:23735260
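The supervised-classification step above (maps in, RSN labels out) can be sketched compactly. The version below is a minimal one-hidden-layer perceptron in NumPy, trained on synthetic stand-ins for seed-based correlation maps; the real MLP, features, and RSN labels are of course far richer:

```python
import numpy as np

# Synthetic stand-ins: each "correlation map" is a 50-"voxel" vector drawn
# around one of 4 per-network templates (not real BOLD data).
rng = np.random.default_rng(1)
n_net, n_maps, n_vox = 4, 40, 50
templates = 2.0 * rng.standard_normal((n_net, n_vox))
X = np.vstack([templates[k] + 0.3 * rng.standard_normal((n_maps, n_vox))
               for k in range(n_net)])
y = np.repeat(np.arange(n_net), n_maps)

# One-hot targets; single tanh hidden layer; softmax output; plain
# gradient descent on the cross-entropy loss.
Y = np.eye(n_net)[y]
W1 = 0.1 * rng.standard_normal((n_vox, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, n_net)); b2 = np.zeros(n_net)
lr = 0.5
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    dZ = (P - Y) / len(X)                  # softmax cross-entropy gradient
    dH = (dZ @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    W2 -= lr * (H.T @ dZ); b2 -= lr * dZ.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

# Hard classification accuracy on the training maps
scores = np.tanh(X @ W1 + b1) @ W2 + b2
acc = float(np.mean(scores.argmax(axis=1) == y))
print(acc)
```

The continuous softmax outputs play the role of the "continuous estimates of RSN membership" discussed above; thresholding them gives the hard classification.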
Using CO2:CO Correlations to Improve Inverse Analyses of Carbon Fluxes
NASA Technical Reports Server (NTRS)
Palmer, Paul I.; Suntharalingam, Parvadha; Jones, Dylan B. A.; Jacob, Daniel J.; Streets, David G.; Fu, Qingyan; Vay, Stephanie A.; Sachse, Glen W.
2006-01-01
Observed correlations between atmospheric concentrations of CO2 and CO represent potentially powerful information for improving CO2 surface flux estimates through coupled CO2-CO inverse analyses. We explore the value of these correlations in improving estimates of regional CO2 fluxes in east Asia by using aircraft observations of CO2 and CO from the TRACE-P campaign over the NW Pacific in March 2001. Our inverse model uses regional CO2 and CO surface fluxes as the state vector, separating biospheric and combustion contributions to CO2. CO2-CO error correlation coefficients are included in the inversion as off-diagonal entries in the a priori and observation error covariance matrices. We derive error correlations in a priori combustion source estimates of CO2 and CO by propagating error estimates of fuel consumption rates and emission factors. However, we find that these correlations are weak because CO source uncertainties are mostly determined by emission factors. Observed correlations between atmospheric CO2 and CO concentrations imply corresponding error correlations in the chemical transport model used as the forward model for the inversion. These error correlations in excess of 0.7, as derived from the TRACE-P data, enable a coupled CO2-CO inversion to achieve significant improvement over a CO2-only inversion for quantifying regional fluxes of CO2.
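The role of the off-diagonal a priori covariance terms described above can be shown with a two-element toy state (CO2 flux, CO flux): given a prior error correlation r, an observation of CO alone also updates the CO2 flux. All numbers below are made up for illustration:

```python
import numpy as np

r = 0.7                                      # prior CO2-CO error correlation
s_co2, s_co = 2.0, 1.0                       # prior 1-sigma flux errors
S_a = np.array([[s_co2**2, r * s_co2 * s_co],
                [r * s_co2 * s_co, s_co**2]])  # off-diagonal entry carries r
K = np.array([[0.0, 1.0]])                   # observation operator: sees CO only
S_e = np.array([[0.5**2]])                   # observation error covariance
x_a = np.array([10.0, 5.0])                  # a priori (CO2 flux, CO flux)
y = np.array([6.0])                          # observed CO exceeds the prior

# Standard Bayesian update: x = x_a + S_a K^T (K S_a K^T + S_e)^-1 (y - K x_a)
Gain = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
x_hat = x_a + Gain @ (y - K @ x_a)
print(x_hat)  # both fluxes move, although only CO was observed
```

Setting r = 0 would leave the CO2 element untouched, which is the "CO2-only" behaviour the coupled inversion improves on.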
The Art and Science of Climate Model Tuning
Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew; ...
2017-03-31
The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.
A new analytical solar radiation pressure model for current BeiDou satellites: IGGBSPM
Tan, Bingfeng; Yuan, Yunbin; Zhang, Baocheng; Hsu, Hou Ze; Ou, Jikun
2016-01-01
An analytical solar radiation pressure (SRP) model, IGGBSPM (an abbreviation for Institute of Geodesy and Geophysics BeiDou Solar Pressure Model), has been developed for three BeiDou satellite types, namely, geostationary orbit (GEO), inclined geosynchronous orbit (IGSO) and medium earth orbit (MEO), based on a ray-tracing method. The performance of IGGBSPM was assessed based on numerical integration, SLR residuals and analyses of empirical SRP parameters (except overlap computations). The numerical results show that the integrated orbit resulting from IGGBSPM differs from the precise ephemerides by approximately 5 m and 2 m for GEO and non-GEO satellites, respectively. Moreover, when IGGBSPM is used as an a priori model to enhance the ECOM (5-parameter) model with stochastic pulses, named ECOM + APR, for precise orbit determination, the SLR RMS residual improves by approximately 20–25 percent over the ECOM-only solution during the yaw-steering period and by approximately 40 percent during the yaw-fixed period. For the BeiDou GEO01 satellite, improvements of 18 and 32 percent can be achieved during the out-of-eclipse season and during the eclipse season, respectively. An investigation of the estimated ECOM D0 parameters indicated that the β-angle dependence that is evident in the ECOM-only solution is no longer present in the ECOM + APR solution. PMID:27595795
A Comparison of Aerosol Measurements from OCO-2 and MODIS
NASA Astrophysics Data System (ADS)
Nelson, R. R.; O'Dell, C.
2016-12-01
The goal of OCO-2 is to use hyperspectral measurements of reflected near-infrared sunlight to retrieve carbon dioxide with high accuracy and precision. This is only possible, however, if the light-path modification effects caused by clouds and aerosols are properly quantified. Even tiny amounts of clouds or aerosols can induce sufficient light-path modifications to lead to large errors in the estimated CO2 column-mean (XCO2). Therefore, it is imperative to evaluate the accuracy of the OCO-2 retrieved aerosol parameters. In this study, we compare OCO-2 retrieved aerosol parameters to Aqua-MODIS observations co-located in time and space. We find that there are significant disagreements between the aerosol information derived from MODIS and the retrieved aerosol parameters from OCO-2. These results are unsurprising, as previous comparisons to AERONET have also been poor. However, the tight co-location between Aqua and OCO-2 in the Afternoon Constellation allows us to examine the potential synergistic use of OCO-2 and MODIS measurements to more accurately constrain aerosol properties, potentially leading to a more accurate CO2 measurement. Specifically, we used select MODIS aerosol properties as the a priori for the OCO-2 retrievals and present the results here. Future studies include investigating the possibility of ingesting the MODIS radiances directly into the OCO-2 retrieval algorithm to further improve OCO-2's aerosol scheme and the resulting measurements.
W. Mark Ford; Andrew M. Evans; Richard H. Odom; Jane L. Rodrigue; Christine A. Kelly; Nicole Abaid; Corinne A. Diggins; Douglas Newcomb
2015-01-01
In the southern Appalachians, artificial nest-boxes are used to survey for the endangered Carolina northern flying squirrel (CNFS; Glaucomys sabrinus coloratus), a disjunct subspecies associated with high elevation (>1385 m) forests. Using environmental parameters diagnostic of squirrel habitat, we created 35 a priori occupancy...
GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters
NASA Astrophysics Data System (ADS)
Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.
2003-12-01
The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows the high variability of tropospheric water vapor to be characterized precisely at different temporal and spatial scales. Only precise estimations of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, ...). First, IWV is estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and of the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single-site gradients. IWV, gradients, and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements, and high-resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We have developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.
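The tomographic inversion described above treats each slant delay as a line integral of voxel refractivity, d = A x, with A[i, j] the path length of ray i in voxel j, and solves for x by (regularized) least squares. A toy sketch on a 2x2 voxel grid; the grid, ray geometry, and values are illustrative only:

```python
import numpy as np

x_true = np.array([30.0, 20.0, 10.0, 5.0])   # wet refractivity per voxel
A = np.array([[1.0, 1.0, 0.0, 0.0],          # ray through the top row
              [0.0, 0.0, 1.0, 1.0],          # ray through the bottom row
              [1.0, 0.0, 1.0, 0.0],          # ray through the left column
              [0.0, 1.0, 0.0, 1.0],          # ray through the right column
              [1.4, 0.0, 0.0, 1.4]])         # oblique ray (resolves the rest)
d = A @ x_true                               # noise-free slant delays

# A small Tikhonov term keeps the normal equations well conditioned,
# standing in for the smoothness constraints a real tomography would use.
lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ d)
print(np.round(x_hat, 3))
```

Without the oblique ray the four row/column sums are rank deficient, which is the toy analogue of why crossing geometry (and dense networks like the 17-receiver ESCOMPTE array) matters for resolution.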
Nonlinear Spectral Mixture Modeling to Estimate Water-Ice Abundance of Martian Regolith
NASA Astrophysics Data System (ADS)
Gyalay, Szilard; Chu, Kathryn; Zeev Noe Dobrea, Eldar
2017-10-01
We present a novel technique to estimate the abundance of water-ice in the Martian permafrost using Phoenix Surface Stereo Imager multispectral data. In previous work, Cull et al. (2010) estimated the abundance of water-ice in trenches dug by the Mars Phoenix lander by modeling the spectra of the icy regolith using the radiative transfer methods described in Hapke (2008), with optical constants for Mauna Kea palagonite (Clancy et al., 1995) as a substitute for the unknown Martian regolith optical constants. Our technique, which uses the radiative transfer methods described in Shkuratov et al. (1999), seeks to eliminate the uncertainty that stems from not knowing the composition of the Martian regolith by using observations of the Martian soil before and after the water-ice has sublimated away. We use observations of the desiccated regolith sample to estimate its complex index of refraction from its spectrum. This removes any a priori assumptions about Martian regolith composition, limiting our free parameters to the estimated real index of refraction of the dry regolith at one specific wavelength, the ice grain size, and the regolith porosity. We can then model mixtures of regolith and water-ice, fitting to the original icy spectrum to estimate the ice abundance. To constrain the uncertainties in this technique, we performed laboratory measurements of the spectra of known mixtures of water-ice and dry soils, as well as those of soils after desiccation, with controlled viewing geometries. Finally, we applied the technique to Phoenix Surface Stereo Imager observations and estimated water-ice abundances consistent with pore-fill in the near-surface ice. This abundance is consistent with atmospheric diffusion, which has implications for our understanding of the history of water-ice on Mars and the role of the regolith at high latitudes as a reservoir of atmospheric H2O.
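The fitting step at the heart of the abstract above can be illustrated in miniature. The paper uses the nonlinear Shkuratov (1999) mixing model; the sketch below substitutes a simple linear areal mixture r_mix = f r_ice + (1 - f) r_soil, with made-up end-member spectra, purely to show the abundance-fitting step:

```python
import numpy as np

wavelengths = np.linspace(0.45, 1.0, 12)   # um, illustrative band positions
r_ice = 0.9 - 0.1 * wavelengths            # made-up ice end-member spectrum
r_soil = 0.2 + 0.3 * wavelengths           # made-up desiccated-soil spectrum
f_true = 0.35
r_obs = f_true * r_ice + (1 - f_true) * r_soil   # "icy regolith" spectrum

# Closed-form least-squares estimate of the ice fraction f: project the
# observed excess over soil onto the (ice - soil) spectral direction.
num = np.dot(r_obs - r_soil, r_ice - r_soil)
den = np.dot(r_ice - r_soil, r_ice - r_soil)
f_hat = num / den
print(f_hat)
```

In the real technique the "soil" end-member comes from the post-sublimation observation of the same sample, which is what removes the compositional a priori, and the mixture model is nonlinear in grain size and porosity.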
Impact of a priori information on IASI ozone retrievals and trends
NASA Astrophysics Data System (ADS)
Barret, B.; Peiro, H.; Emili, E.; Le Flocgmoën, E.
2017-12-01
The IASI sensor has documented atmospheric water vapor, temperature, and composition since 2007. The Software for a Fast Retrieval of IASI Data (SOFRID) has been developed to retrieve O3 and CO profiles from IASI in near-real time on a global scale. Information content analyses have shown that IASI enables the quantification of O3 independently in the troposphere, the UTLS, and the stratosphere. Validation studies have demonstrated that the daily to seasonal variability of tropospheric and UTLS O3 is well captured by IASI, especially in the tropics. IASI-SOFRID retrievals have also been used to document tropospheric composition during the Asian monsoon and contributed to determining the O3 evolution during the 2008-2016 period in the framework of the TOAR project. Nevertheless, IASI-SOFRID O3 is biased high in the UTLS and in the tropical troposphere, and the 8-year O3 trends from the different IASI products differ significantly from the O3 trends from UV-Vis satellite sensors (e.g. OMI). SOFRID is based on the Optimal Estimation Method, which requires a priori information to complement the information provided by the measured thermal infrared radiances. In SOFRID-O3 v1.5, used in TOAR, the a priori consists of a single O3 profile and associated covariance matrix based on global O3 radiosoundings. Such a global a priori is characterized by a very large variability and does not represent our best knowledge of the O3 profile at a given time and location. Furthermore, it is biased towards northern hemisphere middle latitudes. We have therefore implemented the possibility of using dynamical a priori data in SOFRID and performed experiments using O3 climatological data and MLS O3 analyses. We will present O3 distributions and comparisons with O3 radiosoundings for the different SOFRID-O3 retrievals. In particular, we will assess the impact of the different a priori data on the O3 biases and trends during the IASI period.
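How the a priori enters an Optimal Estimation retrieval, and hence why its choice can bias the result, is visible in the linear update formula. A minimal sketch with an illustrative 3-level "ozone profile" and Jacobian (not SOFRID's actual dimensions or values):

```python
import numpy as np

# Linear OEM update:
# x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
x_a = np.array([30.0, 50.0, 20.0])      # a priori profile (e.g. DU per layer)
S_a = np.diag([25.0, 100.0, 25.0])      # a priori covariance
K = np.array([[1.0, 1.0, 1.0],          # channel sensitive to the total column
              [0.2, 1.0, 0.1]])         # channel weighted toward the UTLS
S_e = np.diag([4.0, 1.0])               # measurement error covariance
x_true = np.array([35.0, 60.0, 18.0])
y = K @ x_true                          # noise-free synthetic "measurement"

Si = np.linalg.inv(S_e)
S_hat = np.linalg.inv(K.T @ Si @ K + np.linalg.inv(S_a))  # posterior covariance
x_hat = x_a + S_hat @ K.T @ Si @ (y - K @ x_a)
print(np.round(x_hat, 2))
```

The retrieval is pulled toward x_a wherever the measurement carries little information, which is exactly why a single global a priori, biased toward northern midlatitudes, can leave its imprint on retrieved profiles and trends.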
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability that statistically significant and non-significant test outcomes are actually true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the observed results, or more extreme results, if the null hypothesis of no effect were true. Calculation of the more informative PPV and NPV requires an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
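The dependence of PPV and NPV on the a priori probability R can be made concrete with the standard Bayesian formulas; the notation below follows the general PPV literature (R as the prior probability that the tested effect is real), and the paper's exact formulation may differ:

```python
def ppv_npv(alpha, power, R):
    """PPV/NPV for a test with significance level alpha and given power,
    where R is the a priori probability that the alternative is true."""
    ppv = power * R / (power * R + alpha * (1 - R))
    npv = (1 - alpha) * (1 - R) / ((1 - alpha) * (1 - R) + (1 - power) * R)
    return ppv, npv

# Example: conventional alpha = 0.05, power = 0.8, and a prior R of 0.25.
ppv, npv = ppv_npv(alpha=0.05, power=0.8, R=0.25)
print(round(ppv, 3), round(npv, 3))  # 0.842 0.934
```

Varying R in this function shows why its estimation matters: with a small R, even a significant result has a modest PPV.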
Bounding filter - A simple solution to lack of exact a priori statistics.
NASA Technical Reports Server (NTRS)
Nahi, N. E.; Weiss, I. M.
1972-01-01
Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, then the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance which is not available, and also prevents its apparent divergence.
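The bounding idea above can be demonstrated with a deliberately simple scalar case: an estimator designed with a conservative (inflated) noise variance computes an error variance that upper-bounds the true one, even though the true statistics are unknown to it. A Monte Carlo sketch with illustrative values (the paper's bounding filter is of course more general):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_meas = 2000, 10
R_true = 1.0       # actual (unknown to the filter) measurement noise variance
R_assumed = 4.0    # conservative variance the filter is designed with
x_true = 0.0       # constant state being estimated

errs = []
for _ in range(n_trials):
    y = x_true + rng.normal(0.0, np.sqrt(R_true), n_meas)
    x_hat = y.mean()               # filter estimate of a constant state
    errs.append(x_hat - x_true)

actual_var = np.var(errs)          # true error variance ~ R_true / n_meas = 0.1
computed_var = R_assumed / n_meas  # the filter's own covariance = 0.4
print(actual_var, computed_var)
```

The filter's computed covariance (0.4) bounds the actual error variance (about 0.1), which is the property the paper's design guarantees for dynamic systems with inexactly known models.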
Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel
NASA Astrophysics Data System (ADS)
Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin
2016-10-01
An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for the soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel. The performance penalty of the channel estimation is negligible.
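The histogram-plus-linear-fit idea above can be sketched in a few lines. The simplification here: the channel is plain AWGN, and the weighted line is fit over the whole observed range rather than only the tail regions as in the paper; for AWGN the true LLR is exactly linear in the sample, so the fit can be checked against a known answer:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
y0 = rng.normal(0.0, 1.0, n)     # received samples given bit 0
y1 = rng.normal(2.0, 1.0, n)     # received samples given bit 1

# Nonparametric (histogram) estimates of the two conditional densities
bins = np.linspace(-4.0, 6.0, 81)
c0, _ = np.histogram(y0, bins, density=True)
c1, _ = np.histogram(y1, bins, density=True)
centers = 0.5 * (bins[:-1] + bins[1:])

# LLR estimated only where both densities were observed
mask = (c0 > 0) & (c1 > 0)
x = centers[mask]
llr = np.log(c1[mask] / c0[mask])

# Weighted least-squares line through the estimated LLR; sparse bins get
# small weight. For this AWGN case the true LLR is 2*y - 2.
w = np.minimum(c0[mask], c1[mask])
A_fit = np.vstack([x, np.ones_like(x)]).T
coef = np.linalg.solve(A_fit.T @ (w[:, None] * A_fit), A_fit.T @ (w * llr))
slope, intercept = coef
print(slope, intercept)
```

For chi-square or Webb-Gaussian channels the LLR is no longer linear everywhere, which is why the paper restricts the linear fit to the tails, where histogram counts are too sparse to use directly.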
Mars Dust and LETKF Data Assimilation of TES Observations
NASA Astrophysics Data System (ADS)
Greybush, S. J.; Hoffman, R. N.; Wilson, R.; Kang, J.; Zhao, Y.; Hoffman, M. J.; Kalnay, E.; Miyoshi, T.
2012-12-01
Simulation and prediction of dust storms remains one of the greatest challenges in Martian meteorology. Large-scale dust storms impact all Mars operations including spacecraft observations. What makes the difference between a regional event and a planet-encircling event? What are the predictability characteristics of these events and of the transition from regional to global? We examine the meteorology, including dustiness, in the Mars reanalysis created with the GFDL Mars Global Climate Model (MGCM) Local Ensemble Transform Kalman Filter (LETKF) data assimilation system (DAS). Characterizing the distribution and temporal evolution of dust in the Martian atmosphere is a considerable challenge. Spacecraft observations are sparse and have limitations in vertical coverage, dust physical properties are not well known, and model parameterizations of surface lifting have limited success in reproducing observed variability. Methods for generating a dust reanalysis begin with satellite inferred dust information in the form of column opacities, dust profile retrievals, or the original radiances. Opacities may be estimated from a formal retrieval of the satellite data or inferred through surface brightness temperatures. The opacities have been ingested via ad hoc adjustments to model tracer fields (Conrath vertical distributions, changes to the boundary layer dust only, etc.), but could also be assimilated by the LETKF or other advanced DAS. We will present dust distributions in the most recent version of the MGCM-LETKF Mars reanalysis. Current results are from two DASs, one assuming a fixed dust distribution and one using TES opacities and updating the boundary layer dust only. In these reanalyses, a full year of Thermal Emission Spectrometer (TES) temperature profiles have been assimilated. 
Since an accurate characterization of the sources and sinks of dust would greatly improve our understanding of the Martian dust cycle and its representation in numerical weather prediction models, we will examine two advanced DAS techniques that have been demonstrated in terrestrial DASs and could be applied to the problem: surface dust flux estimation, and estimation of the surface parameters that control the source of dust (roughness, inventories). The surface dust flux method requires no a priori information about the fluxes and uses only atmospheric observations. For the terrestrial CO2 problem, surface sources and sinks of CO2 have been estimated using only time-dependent measurements of atmospheric CO2, temperatures, and winds, without a priori information on the surface fluxes. This scenario is closely analogous to the case of Mars, where we have only information on temperature and dust opacities at spacecraft overpass locations. Results for terrestrial CO2 and plans for Mars dust will be presented. To improve model parameterizations of dust lifting, however, we need to understand not only the planetary distribution of dust but also the evolution of its sources and sinks and their relation to meteorology. The surface parameters method assumes the physical properties have a persistence or damped-persistence evolution equation. These are then treated as part of the model state vector in the LETKF. This approach is analogous to the bias correction method used in the LETKF to improve the atmospheric state estimation.
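The ensemble-space analysis at the core of the LETKF mentioned above can be sketched compactly. The version below computes only the analysis mean weights, following the usual ensemble-subspace formulation (Hunt et al. 2007), with illustrative dimensions and values rather than MGCM fields, and without the localization that gives the LETKF its name:

```python
import numpy as np

rng = np.random.default_rng(3)
n_state, n_ens, n_obs = 5, 10, 3
X = rng.standard_normal((n_state, n_ens)) + 10.0     # prior ensemble
H = np.zeros((n_obs, n_state))                       # observation operator
H[0, 0] = H[1, 2] = H[2, 4] = 1.0                    # three observed components
R = 0.5 * np.eye(n_obs)                              # observation error covariance
y = np.array([11.0, 9.5, 10.5])                      # observations

x_mean = X.mean(axis=1, keepdims=True)
Xp = X - x_mean                                      # ensemble perturbations
Yobs = H @ X
y_mean = Yobs.mean(axis=1, keepdims=True)
Yp = Yobs - y_mean                                   # perturbations in obs space

# Analysis weights in the ensemble subspace
Ri = np.linalg.inv(R)
Pa_tilde = np.linalg.inv((n_ens - 1) * np.eye(n_ens) + Yp.T @ Ri @ Yp)
w_mean = Pa_tilde @ Yp.T @ Ri @ (y[:, None] - y_mean)
x_a_mean = x_mean + Xp @ w_mean                      # analysis mean state
print(x_a_mean.ravel())
```

Extending the state vector with surface parameters (roughness, dust inventories), as the abstract proposes, just adds rows to X; the same weights then update those parameters through their ensemble correlations with the observed fields.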
1989-12-01
known a priori or could be estimated in real time. To overcome these disadvantages, Kalman filtering methodology has been incorporated into the...operator G(fx,fv) = F((xy) After centering, the data is incorporated into the template using the exponential smoothing technique of Equation (3-11). It
Validation of a Formula for Assigning Continuing Education Credit to Printed Home Study Courses
Hanson, Alan L.
2007-01-01
Objectives To reevaluate and validate the use of a formula for calculating the amount of continuing education credit to be awarded for printed home study courses. Methods Ten home study courses were selected for inclusion in a study to validate the formula, which is based on the number of words, number of final examination questions, and estimated difficulty level of the course. The amount of estimated credit calculated using the a priori formula was compared to the average amount of time required to complete each article based on pharmacists' self-reporting. Results A strong positive relationship between the amount of time required to complete the home study courses based on the a priori calculation and the times reported by pharmacists completing the 10 courses was found (p < 0.001). The correlation accounted for 86.2% of the total variability in the average pharmacist reported completion times (p < 0.001). Conclusions The formula offers an efficient and accurate means of determining the amount of continuing education credit that should be assigned to printed home study courses. PMID:19503705
Liu, Jie; Ying, Dongwen; Zhou, Ping
2014-01-01
Voluntary surface electromyogram (EMG) signals from neurological injury patients are often corrupted by involuntary background interference or spikes, imposing difficulties for myoelectric control. We present a novel framework to suppress involuntary background spikes during voluntary surface EMG recordings. The framework applies a Wiener filter to restore voluntary surface EMG signals based on tracking the a priori signal-to-noise ratio (SNR) using the decision-directed method. Semi-synthetic surface EMG signals contaminated by different levels of involuntary background spikes were constructed from a database of surface EMG recordings in a group of spinal cord injury subjects. After the processing, the onset detection of voluntary muscle activity was significantly improved against involuntary background spikes. The magnitude of voluntary surface EMG signals can also be reliably estimated for myoelectric control purposes. Compared with the previous sample entropy analysis for suppressing involuntary background spikes, the proposed framework is characterized by quick and simple implementation, making it more suitable for application in a myoelectric control system toward neurological injury rehabilitation. PMID:25443536
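A minimal sketch of a decision-directed a priori SNR tracker driving a Wiener gain, in the spirit of the framework described above (the smoothing factor, synthetic spectra, and frame loop are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def wiener_dd(power_spec, noise_psd, alpha=0.98):
    """Frame-by-frame Wiener gains with decision-directed a priori SNR.

    power_spec: (frames, bins) noisy power spectrum; noise_psd: (bins,)."""
    gains = np.empty_like(power_spec)
    prev_clean = np.zeros(power_spec.shape[1])   # clean-power estimate of last frame
    for t in range(power_spec.shape[0]):
        snr_post = power_spec[t] / noise_psd                 # a posteriori SNR
        # Decision-directed blend of the previous frame's clean estimate with
        # the instantaneous ML estimate max(snr_post - 1, 0).
        snr_prio = alpha * prev_clean / noise_psd \
                   + (1.0 - alpha) * np.maximum(snr_post - 1.0, 0.0)
        g = snr_prio / (1.0 + snr_prio)                      # Wiener gain in [0, 1)
        prev_clean = g**2 * power_spec[t]
        gains[t] = g
    return gains

# Ten identical high-power frames: the tracked a priori SNR, and hence the
# gain, ramps up over the first few frames.
noisy = np.vstack([np.full(4, 100.0)] * 10)
gains = wiener_dd(noisy, np.ones(4))
```

The recursion smooths the SNR estimate across frames, which is what makes the method fast and simple compared with sample-entropy-based suppression.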
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1976-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well-known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
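The "well-known regression equations" referred to above are the conditional mean and covariance of a jointly Gaussian vector. A small numerical sketch (all covariances invented for illustration):

```python
import numpy as np

def condition(mu_s, mu_d, C_ss, C_sd, C_dd, data):
    """Conditional (posterior) mean and covariance of signal s given data d
    for a jointly Gaussian (s, d):
        E[s|d]   = mu_s + C_sd C_dd^-1 (d - mu_d)
        Cov[s|d] = C_ss - C_sd C_dd^-1 C_sd^T
    With mu_s = 0 this is the collocation estimate from zero a priori values."""
    W = C_sd @ np.linalg.inv(C_dd)
    return mu_s + W @ (data - mu_d), C_ss - W @ C_sd.T

# One anomaly, two geodetic observations; numbers invented for illustration.
mu_s, mu_d = np.zeros(1), np.zeros(2)
C_ss = np.array([[1.0]])
C_sd = np.array([[0.8, 0.6]])
C_dd = np.array([[1.1, 0.5],
                 [0.5, 1.1]])
m, P = condition(mu_s, mu_d, C_ss, C_sd, C_dd, np.array([1.0, 1.0]))
```

Conditioning always shrinks the covariance (P below the prior C_ss), mirroring the paper's point that collocation is simply this regression with zero prior means.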
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasi-optimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized algorithms' operational efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration due to a priori ignorance of the amplitude and initial phase are determined.
A priori predictions of the rotational constants for HC13N, HC15N, C5O
NASA Technical Reports Server (NTRS)
DeFrees, D. J.; McLean, A. D.
1989-01-01
Ab initio molecular orbital theory is used to estimate the rotational constants for several carbon-chain molecules that are candidates for discovery in interstellar space. These estimated rotational constants can be used in laboratory or astronomical searches for the molecules. The rotational constant for HC13N is estimated to be 0.1073 +/- 0.0002 GHz and its dipole moment 5.4 D. The rotational constant for HC15N is estimated to be 0.0724 GHz, with a somewhat larger uncertainty. The rotational constant of C5O is estimated to be 1.360 GHz +/- 2% and its dipole moment 4.4 D.
Sequential estimation of surface water mass changes from daily satellite gravimetry data
NASA Astrophysics Data System (ADS)
Ramillien, G. L.; Frappart, F.; Gratton, S.; Vasseur, X.
2015-03-01
We propose a recursive Kalman filtering approach to map regional spatio-temporal variations of terrestrial water mass over large continental areas, such as South America. Instead of correcting hydrology model outputs by the GRACE observations using a Kalman filter estimation strategy, regional 2-by-2 degree water mass solutions are constructed by integration of daily potential differences deduced from GRACE K-band range rate (KBRR) measurements. Recovery of regional water mass anomaly averages obtained by accumulation of information of daily noise-free simulated GRACE data shows that convergence is relatively fast and yields accurate solutions. In the case of cumulating real GRACE KBRR data contaminated by observational noise, the sequential method of step-by-step integration provides estimates of water mass variation for the period 2004-2011 by considering a set of suitable a priori error uncertainty parameters to stabilize the inversion. Spatial and temporal averages of the Kalman filter solutions over river basin surfaces are consistent with the ones computed using global monthly/10-day GRACE solutions from official providers CSR, GFZ and JPL. They are also highly correlated to in situ records of river discharges (70-95 %), especially for the Obidos station where the total outflow of the Amazon River is measured. The sparse daily coverage of the GRACE satellite tracks limits the time resolution of the regional Kalman filter solutions, and thus the detection of short-term hydrological events.
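The step-by-step accumulation of daily data can be illustrated with a scalar Kalman recursion: a large a priori variance encodes weak prior knowledge, and each daily observation shrinks the error variance. The numbers below are synthetic and unrelated to actual GRACE processing:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 5.0                     # regional water-mass anomaly (arbitrary units)
obs_var = 4.0                   # daily observation error variance
x, P = 0.0, 100.0               # a priori state and (large) a priori variance

estimates = []
for day in range(60):
    y = truth + rng.normal(0.0, np.sqrt(obs_var))   # one noisy daily datum
    K = P / (P + obs_var)                           # Kalman gain
    x = x + K * (y - x)                             # state update
    P = (1 - K) * P                                 # variance update
    estimates.append(x)
```

The posterior variance P falls roughly as obs_var/N after N daily updates, which is the sense in which accumulating daily information converges "relatively fast" to an accurate solution.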
NASA Astrophysics Data System (ADS)
Hagemann, M. W.; Gleason, C. J.; Durand, M. T.
2017-11-01
The forthcoming Surface Water and Ocean Topography (SWOT) NASA satellite mission will measure water surface width, height, and slope of major rivers worldwide. The resulting data could provide an unprecedented account of river discharge at continental scales, but reliable methods need to be identified prior to launch. Here we present a novel algorithm for discharge estimation from only remotely sensed stream width, slope, and height at multiple locations along a mass-conserved river segment. The algorithm, termed the Bayesian AMHG-Manning (BAM) algorithm, implements a Bayesian formulation of streamflow uncertainty using a combination of Manning's equation and at-many-stations hydraulic geometry (AMHG). Bayesian methods provide a statistically defensible approach to generating discharge estimates in a physically underconstrained system but rely on prior distributions that quantify the a priori uncertainty of unknown quantities including discharge and hydraulic equation parameters. These were obtained from literature-reported values and from a USGS data set of acoustic Doppler current profiler (ADCP) measurements at USGS stream gauges. A data set of simulated widths, slopes, and heights from 19 rivers was used to evaluate the algorithms using a set of performance metrics. Results across the 19 rivers indicate an improvement in performance of BAM over previously tested methods and highlight a path forward in solving discharge estimation using solely satellite remote sensing.
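One of the two ingredients of BAM is Manning's equation. A direct implementation in SI units (the reach values below are invented for illustration):

```python
def manning_discharge(n, A, R, S):
    """Manning's equation (SI units): Q = (1/n) * A * R**(2/3) * sqrt(S).

    n: roughness coefficient, A: cross-sectional area (m^2),
    R: hydraulic radius (m), S: water-surface slope (dimensionless)."""
    return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5

# Illustrative values for a mid-size river reach.
Q = manning_discharge(n=0.03, A=100.0, R=2.0, S=1e-4)
```

SWOT observes width, height, and slope but not n or the unobserved part of A, which is why the Bayesian priors on those unknowns are essential to closing the equation.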
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
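A sparsity-promoting MAP estimate of dictionary weights can be sketched with iterative soft-thresholding (ISTA) on an l1-regularized least-squares problem. The random dictionary and two-component mixture below are invented stand-ins for a fingerprinting dictionary; this is not the authors' specific prior or algorithm:

```python
import numpy as np

def ista(D, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for the l1-regularized MAP problem
        min_x 0.5*||D x - y||^2 + lam*||x||_1,
    which selects few dictionary columns to explain a mixed signal."""
    step = 0.9 / np.linalg.norm(D, 2) ** 2     # step < 1/L for convergence
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = x - step * (D.T @ (D @ x - y))     # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 20))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x_true = np.zeros(20)
x_true[[3, 7]] = [0.6, 0.4]                    # a two-component "mixed voxel"
y = D @ x_true
x_hat = ista(D, y)
```

The l1 penalty zeroes out non-contributing atoms, so the recovered support identifies the component tissues of the mixed signal.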
Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations
NASA Astrophysics Data System (ADS)
Loseille, A.; Dervieux, A.; Alauzet, F.
2010-04-01
This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimation. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.
Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States
NASA Astrophysics Data System (ADS)
Mao, Y.; Li, Q.; Randerson, J. T.; Liou, K.
2011-12-01
We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (globally) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in N. America are adjusted for three zones: boreal N. America, temperate N. America, and Mexico plus Central America. The resulting emissions are then used as a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate provide better agreement with IMPROVE observations (~20% increase in the Taylor skill score), including improved ability to capture the observed variability especially during June-July. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) are included.
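The Bayesian linear inversion update used in such top-down studies has a standard closed form. A small sketch with invented numbers (two emission regions, three observation sites; not the actual GEOS-Chem/IMPROVE configuration):

```python
import numpy as np

def bayesian_inversion(x_a, S_a, H, y, S_e):
    """Posterior emissions for the linear model y = H x + e:
       x_post = x_a + G (y - H x_a),  G = S_a H^T (H S_a H^T + S_e)^-1,
       S_post = (I - G H) S_a."""
    G = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_e)
    x_post = x_a + G @ (y - H @ x_a)
    S_post = (np.eye(len(x_a)) - G @ H) @ S_a
    return x_post, S_post

# Two emission regions observed at three sites (all numbers illustrative).
x_a = np.array([1.0, 1.0])                    # a priori emissions
S_a = np.diag([0.5, 0.5])                     # a priori error covariance
H = np.array([[1.0, 0.2],                     # transport-model sensitivities
              [0.3, 1.0],
              [0.5, 0.5]])
y = np.array([3.0, 3.5, 3.2])                 # observed concentrations
S_e = 0.1 * np.eye(3)                         # observation error covariance
x_post, S_post = bayesian_inversion(x_a, S_a, H, y, S_e)
```

With observations systematically above the prior-predicted concentrations, the posterior emissions exceed the prior, which is the same mechanism that yields the 2-5× a posteriori scaling reported above.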
Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States
NASA Astrophysics Data System (ADS)
Mao, Y.; Li, Q.; Randerson, J. T.; CHEN, D.; Zhang, L.; Liou, K.
2012-12-01
We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (globally) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in N. America are adjusted for three zones: boreal N. America, temperate N. America, and Mexico plus Central America. The resulting emissions are then used as a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate provide better agreement with IMPROVE observations (~50% increase in the Taylor skill score), including improved ability to capture the observed variability especially during June-September. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) are included.
Timing of testing and treatment for asymptomatic diseases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kırkızlar, Eser; Faissol, Daniel M.; Griffin, Paul M.
2010-07-01
Many papers in the medical literature analyze the cost-effectiveness of screening for diseases by comparing a limited number of a priori testing policies under estimated problem parameters. However, this may be insufficient to determine the best timing of the tests or incorporate changes over time. In this paper, we develop and solve a Markov Decision Process (MDP) model for a simple class of asymptomatic diseases in order to provide the building blocks for analysis of a more general class of diseases. We provide a computationally efficient method for determining a cost-effective dynamic intervention strategy that takes into account (i) the results of the previous test for each individual and (ii) the change in the individual's behavior based on awareness of the disease. We demonstrate the usefulness of the approach by applying the results to screening decisions for Hepatitis C (HCV) using medical data, and compare our findings to current HCV screening recommendations.
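A minimal value-iteration sketch for a screening MDP of this flavor (the states, transition probabilities, and costs are all hypothetical, and this is far simpler than the paper's model):

```python
import numpy as np

# States: 0 = healthy, 1 = infected & unaware, 2 = infected & aware.
# Actions: 0 = wait, 1 = test.  Testing detects infection, and awareness
# changes behavior so the per-period disease cost drops.
P = np.zeros((2, 3, 3))
P[0] = [[0.95, 0.05, 0.0],     # wait: new infections go undetected
        [0.0,  1.0,  0.0],
        [0.0,  0.0,  1.0]]
P[1] = [[0.95, 0.0,  0.05],    # test: new infections become known
        [0.0,  0.0,  1.0],
        [0.0,  0.0,  1.0]]
cost = np.array([[0.0, 10.0, 2.0],    # wait: unaware disease is expensive
                 [1.0, 11.0, 3.0]])   # test = wait cost + screening cost 1

gamma = 0.95
V = np.zeros(3)
for _ in range(500):                  # value iteration to a fixed point
    Q = cost + gamma * (P @ V)        # Q[a, s]
    V = Q.min(axis=0)
policy = Q.argmin(axis=0)             # cost-minimizing action in each state
```

Under these numbers the optimal policy tests the unaware-infected state but not the others, illustrating how the MDP recovers a state-dependent (dynamic) screening strategy rather than a fixed a priori schedule.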
Clark, Michael D; Morris, Kenneth R; Tomassone, Maria Silvina
2017-09-12
We present a novel simulation-based investigation of the nucleation of nanodroplets from solution and from vapor. Nucleation is difficult to measure or model accurately, and predicting when nucleation should occur remains an open problem. Of specific interest is the "metastable limit", the observed concentration at which nucleation occurs spontaneously, which cannot currently be estimated a priori. To investigate the nucleation process, we employ gauge-cell Monte Carlo simulations to target spontaneous nucleation and measure thermodynamic properties of the system at nucleation. Our results reveal a widespread correlation over 5 orders of magnitude of solubilities, in which the metastable limit depends exclusively on solubility and the number density of generated nuclei. This three-way correlation is independent of other parameters, including intermolecular interactions, temperature, molecular structure, system composition, and the structure of the formed nuclei. Our results have great potential to further the prediction of nucleation events using easily measurable solute properties alone and to open new doors for further investigation.
Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun
2018-01-01
This study proposes two models for precise time transfer using BeiDou Navigation Satellite System triple-frequency signals: an ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and an ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time frequency) and a long baseline are used for performance assessments. The results show that the IF-PPP1 and IF-PPP2 models can both be used for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals, and the accuracy and stability of time transfer is the same in both cases, except for a constant system bias caused by the hardware delay of different frequencies, which can be removed by parameter estimation and prediction with long-duration datasets or by a priori calibration. PMID:29596330
Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.
Wang, Wei; Tong, Shaocheng
2018-02-01
This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining fuzzy approximation with dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniformly ultimate boundedness of the closed-loop systems, and that the tracking error converges to a small neighborhood of the origin. Simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.
Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.
Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S
1998-09-01
Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
A quantitative visual dashboard to explore exposures to ...
The Exposure Prioritization (Ex Priori) model features a simplified, quantitative visual dashboard to explore exposures across chemical space. Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “population,” or “professional” time-use profiles can be interchanged to tailor exposure and quantitatively explore multi-chemical signatures of exposure, internalized dose (uptake), body burden, and elimination. Ex Priori quantitatively extrapolates single-point estimates of both exposure and internal dose for multiple exposure scenarios, factors, products, and pathways. Currently, EPA is investigating its usefulness in life cycle analysis, specifically its ability to enhance exposure factors used in calculating characterization factors for human health. Presented at the 2016 Annual ISES Meeting held in Utrecht, The Netherlands, 9-13 October 2016.
Estimation of 3D reconstruction errors in a stereo-vision system
NASA Astrophysics Data System (ADS)
Belhaoua, A.; Kohler, S.; Hirsch, E.
2009-06-01
The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing sequentially data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables assessment of the quality of the 3D reconstruction, as illustrated by the experimental results shown.
NASA Astrophysics Data System (ADS)
Minkwitz, David; van den Boogaart, Karl Gerald; Gerzen, Tatjana; Hoque, Mainul; Hernández-Pajares, Manuel
2016-11-01
The estimation of the ionospheric electron density by kriging is based on the optimization of a parametric measurement covariance model. First, the extension of kriging with slant total electron content (STEC) measurements based on a spatial covariance to kriging with a spatial-temporal covariance model, assimilating STEC data of a sliding window, is presented. Second, a novel tomography approach by gradient-enhanced kriging (GEK) is developed. Beyond the ingestion of STEC measurements, GEK assimilates ionosonde characteristics, providing peak electron density measurements as well as gradient information. Both approaches deploy the 3-D electron density model NeQuick as a priori information and estimate the covariance parameter vector within a maximum likelihood estimation for the dedicated tomography time stamp. The methods are validated in the European region for two periods covering quiet and active ionospheric conditions. The kriging with spatial and spatial-temporal covariance model is analysed regarding its capability to reproduce STEC, differential STEC and foF2. Therefore, the estimates are compared to the NeQuick model results, the 2-D TEC maps of the International GNSS Service and the DLR's Ionospheric Monitoring and Prediction Center, and in the case of foF2 to two independent ionosonde stations. Moreover, simulated STEC and ionosonde measurements are used to investigate the electron density profiles estimated by the GEK in comparison to a kriging with STEC only. The results indicate a crucial improvement in the initial guess by the developed methods and point to the potential of GEK to compensate for a bias in the peak height hmF2.
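Plain ordinary kriging, which GEK extends with gradient observations, reduces to solving a bordered covariance system for the interpolation weights. A 1-D sketch with an assumed exponential covariance model (locations and parameters invented for illustration):

```python
import numpy as np

def ordinary_kriging_weights(pts, target, cov):
    """Weights for an ordinary-kriging prediction at `target`, from the
    bordered system  [C 1; 1' 0] [w; mu] = [c0; 1]
    (the extra row enforces the unbiasedness constraint sum(w) = 1)."""
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(np.abs(pts[:, None] - pts[None, :]))
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.abs(pts - target))
    return np.linalg.solve(A, rhs)[:n]

cov = lambda h: np.exp(-h / 2.0)      # assumed exponential covariance model
pts = np.array([0.0, 1.0, 3.0])       # observation locations (1-D for brevity)
w = ordinary_kriging_weights(pts, 0.9, cov)
```

The prediction is then `w @ observed_values`; the maximum-likelihood step described above tunes the covariance parameters (here the fixed length scale 2.0) before these weights are computed.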
NASA Astrophysics Data System (ADS)
Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan
2016-02-01
In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as species spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provide a priori unbiased estimations of the spatial structure, global abundance and precision for autocorrelated data. However, their application to non-Gaussian data introduces difficulties in the analysis, in conjunction with low robustness or unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust, and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
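The first step of such a geostatistical analysis is estimating the spatial structure, classically via the empirical semivariogram. A sketch on synthetic 2-D data (the field and the lag bins are invented; this is not the survey data):

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Method-of-moments semivariogram: gamma(h) is the mean of
    0.5*(z_i - z_j)^2 over point pairs whose separation falls in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # each pair counted once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(200, 2))        # sample positions
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=200)  # correlated field
gamma = empirical_variogram(coords, values, np.linspace(0.1, 5.0, 6))
```

A rising gamma with lag reveals spatial autocorrelation; a fitted variogram model then feeds the kriging system used for mapping and for the global abundance precision.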
FAMA: Fast Automatic MOOG Analysis
NASA Astrophysics Data System (ADS)
Magrini, Laura; Randich, Sofia; Friel, Eileen; Spina, Lorenzo; Jacobson, Heather; Cantat-Gaudin, Tristan; Donati, Paolo; Baglioni, Roberto; Maiorca, Enrico; Bragaglia, Angela; Sordo, Rosanna; Vallenari, Antonella
2014-02-01
FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(FeI) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.
Singh, Aman P; Maass, Katie F; Betts, Alison M; Wittrup, K Dane; Kulkarni, Chethana; King, Lindsay E; Khot, Antari; Shah, Dhaval K
2016-07-01
A mathematical model capable of accurately characterizing intracellular disposition of ADCs is essential for a priori predicting unconjugated drug concentrations inside the tumor. Towards this goal, the objectives of this manuscript were to: (1) evolve previously published cellular disposition model of ADC with more intracellular details to characterize the disposition of T-DM1 in different HER2 expressing cell lines, (2) integrate the improved cellular model with the ADC tumor disposition model to a priori predict DM1 concentrations in a preclinical tumor model, and (3) identify prominent pathways and sensitive parameters associated with intracellular activation of ADCs. The cellular disposition model was augmented by incorporating intracellular ADC degradation and passive diffusion of unconjugated drug across tumor cells. Different biomeasures and chemomeasures for T-DM1, quantified in the companion manuscript, were incorporated into the modified model of ADC to characterize in vitro pharmacokinetics of T-DM1 in three HER2+ cell lines. When the cellular model was integrated with the tumor disposition model, the model was able to a priori predict tumor DM1 concentrations in xenograft mice. Pathway analysis suggested different contribution of antigen-mediated and passive diffusion pathways for intracellular unconjugated drug exposure between in vitro and in vivo systems. Global and local sensitivity analyses revealed that non-specific deconjugation and passive diffusion of the drug across tumor cell membrane are key parameters for drug exposure inside a cell. Finally, a systems pharmacokinetic model for intracellular processing of ADCs has been proposed to highlight our current understanding about the determinants of ADC activation inside a cell.
NASA Astrophysics Data System (ADS)
Thompson, R. L.; Gerbig, C.; Roedenbeck, C.; Heimann, M.
2009-04-01
The nitrous oxide (N2O) mixing ratio has been increasing in the atmosphere since the industrial revolution, from 270 ppb in 1750 to 320 ppb in 2007, with a steady growth rate of around 0.26% per year since the early 1980s. The increase in N2O is worrisome for two main reasons. First, it is a greenhouse gas; its atmospheric increase translates to an enhancement in radiative forcing of 0.16 ± 0.02 W m-2, making it currently the fourth most important long-lived greenhouse gas, and it is predicted to soon overtake CFCs to become the third most important. Second, it plays an important role in stratospheric ozone chemistry. Human activities are the primary cause of the atmospheric N2O increase. The largest anthropogenic source of N2O is the use of N-fertilizers in agriculture, but fossil fuel combustion and industrial processes, such as adipic and nitric acid production, are also important. We present a Bayesian inversion approach for estimating N2O fluxes over central and western Europe using high-frequency in-situ concentration data from the Ochsenkopf tall tower (50°01′N, 11°48′, 1022 m a.s.l.). For the inversion, we employ a Lagrangian-type transport model, STILT, which provides source-receptor relationships at 10 km resolution using ECMWF meteorological data. The a priori flux estimates were from IER for anthropogenic fluxes and from GEIA for natural fluxes. N2O fluxes were retrieved monthly at 2° × 2° spatial resolution for 2007. The retrieved N2O fluxes showed significantly more spatial heterogeneity than the a priori field and considerable seasonal variability. The timing of peak emissions differed between regions, but in general the months with the strongest emissions were May and August. Overall, the retrieved flux (anthropogenic and natural) was lower than the a priori field.
Sha, Zhichao; Liu, Zhengmeng; Huang, Zhitao; Zhou, Yiyu
2013-08-29
This paper addresses the problem of direction-of-arrival (DOA) estimation of multiple wideband coherent chirp signals, and a new method is proposed. The new method is based on signal component analysis of the array output covariance instead of the complicated time-frequency analysis used in the previous literature, and is thus more compact and effectively avoids possible signal energy loss during such processing. Moreover, a priori knowledge of the number of signals is no longer a necessity for DOA estimation in the new method. Simulation results demonstrate the performance superiority of the new method over previous ones.
"In silico" mechanistic studies as predictive tools in microwave-assisted organic synthesis.
Rodriguez, A M; Prieto, P; de la Hoz, A; Díaz-Ortiz, A
2011-04-07
Computational calculations can be used as a predictive tool in Microwave-Assisted Organic Synthesis (MAOS). A DFT study on Intramolecular Diels-Alder (IMDA) reactions indicated that the activation energy of the reaction and the polarity of the stationary points are two fundamental parameters for determining "a priori" whether a reaction can be improved by using microwave irradiation.
ERIC Educational Resources Information Center
Mare, Robert D.; Mason, William M.
An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…
Extending Wireless Rechargeable Sensor Network Life without Full Knowledge.
Najeeb, Najeeb W; Detweiler, Carrick
2017-07-17
When extending the life of Wireless Rechargeable Sensor Networks (WRSN), one challenge is charging networks as they grow larger. Overcoming this limitation will render a WRSN more practical and highly adaptable to growth in the real world. Most charging algorithms require a priori full knowledge of sensor nodes' power levels in order to determine the nodes that require charging. In this work, we present a probabilistic algorithm that extends the life of a scalable WRSN without a priori power knowledge and without full network exploration. We develop a probability bound on the power level of the sensor nodes and utilize this bound to make decisions while exploring a WRSN. We verify the algorithm by simulating a wireless power transfer unmanned aerial vehicle charging a WRSN to extend its life. Our results show that, without full knowledge, our proposed algorithm extends the life of a WRSN to, on average, 90% of what an optimal full-knowledge algorithm can achieve. This means that the charging robot does not need to explore the whole network, which enables the scaling of WRSN. We analyze the impact of network parameters on our algorithm and show that it is insensitive to a large range of parameter values.
ERIC Educational Resources Information Center
Kretschmann, Julia; Vock, Miriam; Lüdtke, Oliver
2014-01-01
Using German data, we examined the effects of one specific type of acceleration--grade skipping--on academic performance. Prior research on the effects of acceleration has suffered from methodological restrictions, especially due to a lack of appropriate comparison groups and a priori measurements. For this reason, propensity score matching was…
A Cross-National Study of the Relationship between Elderly Suicide Rates and Urbanization
ERIC Educational Resources Information Center
Shah, Ajit
2008-01-01
There is mixed evidence of a relationship between suicide rates in the general population and urbanization, and a paucity of studies examining this relationship in the elderly. A cross-national study with curve estimation regression model analysis, was undertaken to examine the a priori hypothesis that the relationship between elderly suicide…
Effects of measurement unobservability on neural extended Kalman filter tracking
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kramer, Kathleen A.
2009-05-01
An important component of tracking fusion systems is the ability to fuse various sensors into a coherent picture of the scene. When multiple sensor systems are used in an operational setting, the types of data vary. A significant but often overlooked concern with multiple sensors is the incorporation of measurements that are unobservable. An unobservable measurement is one that may provide information about the state but cannot recreate a full target state. A line-of-bearing measurement, for example, cannot provide complete position information. Often, such measurements come from passive sensors such as a passive sonar array or an electronic surveillance measure (ESM) system. Unobservable measurements will, over time, cause the measurement uncertainty to grow without bound. While some tracking implementations have triggers to protect against these detrimental effects, many maneuver tracking algorithms avoid discussing this implementation issue. One maneuver tracking technique is the neural extended Kalman filter (NEKF). The NEKF is an adaptive estimation algorithm that estimates the target track while training a neural network online to reduce the error between the a priori target motion model and the actual target dynamics. The weights of the neural network are trained in a manner similar to the state estimation/parameter estimation Kalman filter techniques. The NEKF has been shown to improve target tracking accuracy through maneuvers and has been used to predict target behavior using the new model that consists of the a priori model and the neural network. The key to the online adaptation of the NEKF is that the neural network is trained using the same residuals as the Kalman filter for the tracker. The neural network weights are treated as states augmenting the target track. Through the state-coupling function, the weights are coupled to the target states.
Thus, if the measurements cause the states of the target track to be unobservable, then the weights of the neural network have unobservable modes as well. In recent analysis, the NEKF was shown to have a significantly larger growth in the eigenvalues of the error covariance matrix than the standard EKF tracker when the measurements were bearings-only. This degraded the ability of the NEKF to model the target dynamics. In this work, the analysis is expanded to determine the detrimental effects of bearings-only measurements of various uncertainties on the performance of the NEKF when these unobservable measurements are interlaced with completely observable measurements. This analysis makes it possible to place implementation limitations on the NEKF when bearings-only sensors are present.
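The covariance growth under bearings-only measurements can be illustrated with a plain EKF sketch (the NEKF's neural-network state augmentation is omitted); the geometry, noise levels and constant-velocity motion model below are invented for illustration, not taken from the paper:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity model
Q = np.eye(4) * 1e-3                        # small process noise

def ekf_step(x, P, z, R, bearings_only):
    # Predict, then update with either a bearing or a full position fix.
    x = F @ x
    P = F @ P @ F.T + Q
    if bearings_only:
        px, py = x[0], x[1]
        r2 = px ** 2 + py ** 2
        H = np.array([[-py / r2, px / r2, 0.0, 0.0]])  # bearing Jacobian
        zhat = np.array([np.arctan2(py, px)])
    else:
        H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
        zhat = x[:2]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - zhat), (np.eye(4) - K @ H) @ P

truth = np.array([100.0, 50.0, 1.0, 0.0])   # target; static sensor at origin
x_b, P_b = truth.copy(), np.eye(4) * 25.0   # bearings-only track
x_f, P_f = truth.copy(), np.eye(4) * 25.0   # fully observable track
for _ in range(50):
    truth = F @ truth
    zb = np.array([np.arctan2(truth[1], truth[0])])
    x_b, P_b = ekf_step(x_b, P_b, zb, np.array([[1e-4]]), True)
    x_f, P_f = ekf_step(x_f, P_f, truth[:2], np.eye(2), False)
growth_b = np.linalg.eigvalsh(P_b).max()    # grows along the range direction
growth_f = np.linalg.eigvalsh(P_f).max()
```

Because a static-observer bearing never constrains range, the largest eigenvalue of the bearings-only covariance grows essentially without bound, while the fully observed track's covariance settles to a small steady state.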
NASA Technical Reports Server (NTRS)
Chaikovsky, A.; Dubovik, O.; Holben, Brent N.; Bril, A.; Goloub, P.; Tanre, D.; Pappalardo, G.; Wandinger, U.; Chaikovskaya, L.; Denisov, S.;
2015-01-01
This paper presents a detailed description of the LIRIC (LIdar-Radiometer Inversion Code) algorithm for simultaneous processing of coincident lidar and radiometric (sun-photometric) observations for the retrieval of aerosol concentration vertical profiles. As lidar and radiometric input data, we use measurements from European Aerosol Research Lidar Network (EARLINET) lidars and collocated sun photometers of the Aerosol Robotic Network (AERONET). The LIRIC data processing provides sequential inversion of the combined lidar and radiometric data: column-integrated aerosol parameters are first estimated from the radiometric measurements, and height-dependent concentrations of fine and coarse aerosols are then retrieved from the lidar signals using the integrated column characteristics of the aerosol layer as a priori constraints. The use of polarized lidar observations allows us to discriminate between spherical and non-spherical particles of the coarse aerosol mode. The LIRIC software package was implemented and tested at a number of EARLINET stations. An inter-comparison of LIRIC-based aerosol retrievals was performed for the observations by seven EARLINET lidars in Leipzig, Germany on 25 May 2009. We found close agreement between the aerosol parameters derived from different lidars, which supports the high robustness of the LIRIC algorithm. The sensitivity of the retrieval results to a possible reduction of the available observation data is also discussed.
Filter design for the detection of compact sources based on the Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.
2005-05-01
This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). Given the a priori probability density function of the amplitudes of the sources, the values of these two parameters can be chosen such that the filter optimizes the performance of the detector, in the sense that it yields the maximum number of true detections for a fixed number density of spurious sources. The new filter includes the standard MF and the SAF as particular cases and, by design, outperforms both. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) significantly improves the number of detections in some interesting cases. In particular, for weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent.
Finally, an estimator of the amplitude of the source (its most probable value) is introduced, and it is proven that this estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the simulation results agree with the analytical ones.
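As a minimal illustration of the filtering step, the sketch below applies a matched filter (the simplest of the filters compared above) to a weak one-dimensional Gaussian-profile source in white noise; the profile width, amplitude and noise level are arbitrary choices, not the paper's simulation setup:

```python
import numpy as np

def matched_filter(data, template):
    # For white noise the optimal linear filter is the source profile
    # itself; normalizing keeps the filtered noise at unit variance.
    t = template / np.sqrt(np.sum(template ** 2))
    return np.correlate(data, t, mode="same")

rng = np.random.default_rng(0)
n = 1024
x = np.arange(n)
profile = np.exp(-0.5 * ((x - n // 2) / 5.0) ** 2)   # compact source, width 5
data = 2.0 * profile + rng.normal(0.0, 1.0, n)       # source in unit white noise
tx = np.arange(64)
template = np.exp(-0.5 * ((tx - 32) / 5.0) ** 2)     # truncated source profile
filtered = matched_filter(data, template)
peak = int(np.argmax(filtered))                      # detected source location
```

The filtered peak stands well above the unit-variance filtered noise even though the raw source is barely visible, which is the detection gain the filter comparison above quantifies.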
Tane, Moana P; Hefler, Marita; Thomas, David P
2018-04-01
Smoking prevalence estimated at between 65% and 84% has been reported among the Yolŋu peoples of East Arnhem Land, Northern Territory. We report the findings of an evaluation of the Yaka Ŋarali' Tackling Indigenous Smoking program in East Arnhem Land. Qualitative interviews with Yolŋu (N = 23) and non-Yolŋu (N = 7) informants were conducted in seven communities between June 2014 and September 2015, with the support of Cultural Mentors, in homeland communities throughout East Arnhem Land. The data were coded using NVivo software, analysed line by line and categorised by the researcher (MT) under three a priori categories established as evaluation parameters. In addition, the meanings of ŋarali' and Yolŋu cultural obligations to ŋarali' were analysed using an inductive process. Data were coded under three a priori themes: Yolŋu trying to quit smoking (interest in quitting, access to support); the Yaka Ŋarali' program (efficacy and recognition); and the Yolŋu workforce (roles and responsibilities). Yolŋu informants, including Elders and leaders, both smokers and non-smokers, uniformly acknowledged the deep cultural and traditional connection with ŋarali', attributing this relationship to its introduction by the Macassans and its subsequent adoption into ceremony. Given the strong cultural and traditional connection to ŋarali', care must be taken to ensure tobacco control measures maintain congruence with local values and expectations. SO WHAT?: Tailored, localised programs, developed in consultation with communities, Elders and leaders, are needed to respect and accommodate the tight connection that the Yolŋu have with ŋarali', maintained over hundreds of years. © 2017 Australian Health Promotion Association.
NASA Astrophysics Data System (ADS)
Jung, Y.; Kim, J.; Kim, W.; Boesch, H.; Yoshida, Y.; Cho, C.; Lee, H.; Goo, T. Y.
2016-12-01
The Greenhouse Gases Observing SATellite (GOSAT) is the first satellite dedicated to measuring atmospheric CO2 concentrations from space, with the potential to improve our knowledge of the carbon cycle. Several studies have been performed to develop CO2 retrieval algorithms using GOSAT measurements, but limitations in spatial coverage and uncertainties due to aerosols and thin cirrus clouds remain a problem for monitoring CO2 concentrations globally. In this study, we develop the Yonsei CArbon Retrieval (YCAR) algorithm, based on the optimal estimation method, to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) with optimized a priori CO2 profiles and aerosol models over East Asia. In previous studies, the aerosol optical properties (AOPs) and the aerosol top height were found to cause significant errors in retrieved XCO2 of up to 2.5 ppm. Since this bias comes from a rough assumption about aerosol information in the forward model used in the CO2 retrieval process, the YCAR algorithm improves the process by taking into account AOPs as well as the aerosol vertical distribution; the total AOD and the fine-mode fraction (FMF) are obtained from closely located ground-based measurements, and the other parameters are obtained from a priori information. Compared to ground-based XCO2 measurements, the YCAR XCO2 product has a bias of 0.59±0.48 ppm at Saga and 2.16±0.87 ppm at Tsukuba, showing lower biases and higher correlations than the GOSAT standard products. These results reveal that considering better aerosol information can improve the accuracy of a CO2 retrieval algorithm and provide more useful XCO2 information with reduced uncertainties.
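The optimal estimation iteration underlying retrievals of this kind can be sketched with the standard Gauss-Newton update toward the maximum a posteriori state; the two-parameter forward model below is an invented stand-in for the actual radiative transfer model:

```python
import numpy as np

def oe_retrieval(F, jacobian, y, xa, Sa, Se, n_iter=10):
    """Optimal-estimation (Gauss-Newton) iteration: combine measurement y
    (covariance Se) with a priori state xa (covariance Sa)."""
    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
    x = xa.copy()
    for _ in range(n_iter):
        K = jacobian(x)                       # forward-model Jacobian
        Shat_inv = K.T @ Se_inv @ K + Sa_inv  # inverse posterior covariance
        x = xa + np.linalg.solve(
            Shat_inv, K.T @ Se_inv @ (y - F(x) + K @ (x - xa)))
    return x

# Toy nonlinear forward model (not the GOSAT radiative transfer model)
F = lambda x: np.array([x[0] ** 2 + x[1], x[1]])
jac = lambda x: np.array([[2 * x[0], 1.0], [0.0, 1.0]])
x_true = np.array([2.0, 1.0])
y = F(x_true)                       # noise-free synthetic measurement
xa = np.array([1.5, 0.8])           # a priori state
Sa = np.eye(2) * 1.0
Se = np.eye(2) * 1e-6               # near-perfect measurement
x_hat = oe_retrieval(F, jac, y, xa, Sa, Se)
```

With a nearly noise-free measurement the iteration converges from the a priori state to the true state; with larger Se the estimate would stay pulled toward xa, which is why biased a priori aerosol assumptions propagate into the retrieved XCO2.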
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank centralities. The Ing process converges in strongly connected networks, with a speed depending on the first two largest eigenvalues of the transformation matrix. Interestingly, eigenvector centrality corresponds to a limit case of the algorithm. Comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
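A minimal sketch of the Ing idea, under the assumption that the transformation matrix is simply the adjacency matrix and the a priori information is uniform: one gathering step then ranks nodes by degree, while many steps approach eigenvector centrality, the limit case mentioned above. The graph is a toy example:

```python
import numpy as np

def ing_scores(A, prior, t):
    # Iteratively gather neighbours' scores through the transformation
    # matrix, starting from the a priori information vector.
    s = prior.astype(float)
    for _ in range(t):
        s = A @ s
        s = s / np.linalg.norm(s)   # normalization keeps scores comparable
    return s

# Toy undirected graph: triangle 0-1-2 with a tail 0-3-4
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (0, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
prior = np.ones(5)                   # uniform a priori information
s_one = ing_scores(A, prior, 1)      # one step: degree ranking
s_many = ing_scores(A, prior, 500)   # limit case: eigenvector centrality
```

The convergence speed of the repeated multiplication is governed by the ratio of the two largest eigenvalues of the transformation matrix, consistent with the abstract's remark.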
NASA Astrophysics Data System (ADS)
Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki
We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and of needing no ground-truth data. By quantitatively analyzing the geometrical relationships between image pixels and their intersection volumes in the real world, a foreground image directly indicates the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it estimates the number of people in an a priori manner, so unlike existing feature-based estimation techniques it needs no ground-truth data. Experiments show the validity of the proposed method.
NASA Astrophysics Data System (ADS)
Braun, Jean; Gemignani, Lorenzo; van der Beek, Peter
2018-03-01
One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the values of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes, and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo-Siang-Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo-Siang-Brahmaputra river system, as well as of some of its tributaries.
We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of any given catchment to the river distribution. The method additionally allows comparing modern erosion rates to long-term exhumation rates. We provide a simple implementation of the method in Python code within a Jupyter Notebook that includes the data used in this paper for illustration purposes.
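The mixing idea behind the method can be sketched as a least-squares decomposition of the river's detrital age distribution into catchment age distributions, with the recovered weights standing in for each catchment's relative contribution (erosion rate times area). The binned distributions below are invented, and this is not the authors' published notebook code:

```python
import numpy as np

def mixing_proportions(catchment_pdfs, river_pdf):
    """Least-squares estimate of each catchment's relative contribution
    to the river's detrital age distribution."""
    A = np.column_stack(catchment_pdfs)
    w, *_ = np.linalg.lstsq(A, river_pdf, rcond=None)
    w = np.clip(w, 0.0, None)          # contributions are non-negative
    return w / w.sum()

# Binned cooling-age distributions of two hypothetical sub-catchments
c1 = np.array([0.6, 0.3, 0.1])
c2 = np.array([0.1, 0.2, 0.7])
river = 0.7 * c1 + 0.3 * c2            # downstream sample mixes both
w = mixing_proportions([c1, c2], river)
```

Dividing the recovered contributions by the catchment areas would then yield relative erosion rates, the quantity of interest above.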
NASA Astrophysics Data System (ADS)
Bielik, M.; Vozar, J.; Hegedus, E.; Celebration Working Group
2003-04-01
The contribution reports the preliminary results of first-arrival P-wave seismic tomographic processing of data measured along the profiles CEL01, CEL04, CEL05, CEL06, CEL09 and CEL11. These profiles were measured in the framework of the seismic project CELEBRATION 2000. Data acquisition and geometric parameters of the processed profiles, the principle of the tomographic processing, the particular processing steps, and program parameters are described. Characteristic data of the observation profiles (shot points, geophone points, total profile lengths, sampling, sensors and record lengths) are given. The fast program package developed by C. Zelt was applied for the tomographic velocity inversion. This process consists of several steps. The first step is the creation of the starting velocity field, for which the calculated arrival times are modelled by the method of finite differences. The next step is the minimization of the differences between the measured and modelled arrival times until the deviations are small. The equivalence problem was also addressed by including a priori information in the starting velocity field. This a priori information consists of the depth to the pre-Tertiary basement, estimates of the velocity of its sedimentary overburden from well logging and/or other seismic velocity data, etc. After checking the reciprocal times, the picks were corrected. The final result of the processing is a reliable travel-time curve set consistent with the reciprocal times. Picking of travel-time curves and enhancement of the signal-to-noise ratio of the seismograms were carried out using the PROMAX program system. The tomographic inversion was carried out by a so-called 3D/2D procedure taking 3D wave propagation into account: a corridor along the profile containing the outlying shot and geophone points was defined, and 3D processing was carried out within this corridor.
The preliminary results indicate seismic anomalous zones within the crust and the uppermost part of the upper mantle in the area consisting of the Western Carpathians, the North European platform, the Pannonian basin and the Bohemian Massif.
Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P
2016-03-01
Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. 
The authors' analyses showed that ∼97% model convergence was systematically observed with no a priori information. Varying the model geometry resolution showed no significant accuracy improvements. The GPU-based forward model enabled the inverse analysis to be completed within 10-70 min. Using a priori information about the underlying anatomy, the computation time decreased by as much as 50%, while accuracy improved from 96.81% to 98.26%. The use of FSA was observed to allow the iterative estimation methodology to converge more precisely. By utilizing a forward iterative approach to solve the inverse elasticity problem, this work indicates the feasibility and potential of fast reconstruction of breast tissue elasticity using supine/prone patient postures.
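A toy version of the model-guided inverse analysis: a one-dimensional chain of springs stands in for the biomechanical breast model, and a fast (1/k cooling) simulated-annealing loop recovers a stiff inclusion from the observed displacements. All parameters and the forward model are hypothetical, far simpler than the GPU-based FEM used in the paper:

```python
import numpy as np

def forward(E):
    # Toy forward model: cumulative displacement of serial springs with
    # elasticities E under a unit load (stand-in for the FEM solve).
    return np.cumsum(1.0 / E)

def fsa_estimate(u_obs, n, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    E = np.ones(n)                         # homogeneous initial guess
    cur_err = np.sum((forward(E) - u_obs) ** 2)
    best, best_err = E.copy(), cur_err
    for k in range(steps):
        T = 1.0 / (1.0 + k)                # fast (1/k) cooling schedule
        cand = np.abs(E + rng.normal(0.0, 0.1, n)) + 1e-6
        err = np.sum((forward(cand) - u_obs) ** 2)
        # Accept improvements always, worse moves with annealed probability
        if err < cur_err or rng.random() < np.exp(-(err - cur_err) / T):
            E, cur_err = cand, err
        if err < best_err:
            best, best_err = cand.copy(), err
    return best, best_err

E_true = np.array([1.0, 3.0, 1.0])         # stiff "tumor" inclusion in middle
u_obs = forward(E_true)                    # observed (deformed) geometry
E_hat, misfit = fsa_estimate(u_obs, 3)
```

The annealed search drives the displacement misfit far below that of the homogeneous starting guess and localizes the stiff element, mirroring (in miniature) the gradient/FSA optimization described above.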
2018-01-01
The genus Liolaemus comprises more than 260 species and can be divided into two subgenera: Eulaemus and Liolaemus sensu stricto. In this paper, we present a phylogenetic analysis, divergence times, and ancestral distribution ranges of the Liolaemus alticolor-bibronii group (Liolaemus sensu stricto subgenus). We inferred a total-evidence phylogeny combining molecular (Cytb and 12S genes) and morphological characters using Maximum Parsimony and Bayesian Inference. Divergence times were calculated using Bayesian MCMC with an uncorrelated lognormally distributed relaxed clock, calibrated with a fossil record. Ancestral ranges were estimated using the Dispersal-Extinction-Cladogenesis model (DEC-Lagrange). The effects of some a priori parameters of DEC were also tested. The group's distribution ranges from central Perú to southern Argentina, including areas from sea level up to the high Andes. The L. alticolor-bibronii group was recovered as monophyletic, formed by two clades: L. walkeri and L. gracilis, the latter of which can be split into two groups. Additionally, many candidate species were recognized. We estimate that the L. alticolor-bibronii group diversified 14.5 Myr ago, during the Middle Miocene. Our results suggest that the ancestor of the Liolaemus alticolor-bibronii group was distributed in a wide area including Patagonia and the Puna highlands. The speciation pattern follows the South-North Diversification Hypothesis, following the Andean uplift. PMID:29479502
NASA Astrophysics Data System (ADS)
Arellano, A. F., Jr.; Tang, W.
2017-12-01
Assimilating observational data of chemical constituents into a modeling system is a powerful approach for assessing changes in atmospheric composition and estimating the associated emissions. However, the results of such chemical data assimilation (DA) experiments are largely subject to several key factors: (a) a priori information, (b) error specification and representation, and (c) structural biases in the modeling system. Here we investigate the sensitivity of ensemble-based data assimilation state and emission estimates to these key factors. We focus on the assimilation performance of the Community Earth System Model (CESM)/CAM-Chem with the Data Assimilation Research Testbed (DART) in representing biomass burning plumes in Amazonia during the 2008 fire season. We conduct the following ensemble DA MOPITT CO experiments: 1) use of monthly-averaged NCAR FINN surface fire emissions, 2) use of daily FINN surface fire emissions, 3) use of daily FINN emissions with climatological injection heights, and 4) use of perturbed FINN emission parameters to represent uncertainties not only in combustion activity but also in combustion efficiency. We show key diagnostics of assimilation performance for these experiments and verify against available ground-based and aircraft-based measurements.
McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco
2016-12-08
Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
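The effect described above can be reproduced with a minimal individual-based sketch: holding the population-average progression probability fixed while spreading it over a two-point frailty distribution lowers the computed disability burden, because the expected years lived with disability is a concave function of the progression probability. All rates, durations and weights below are invented, not the paper's disease models:

```python
import random

def mean_yld(p_progress, frailties, years=10, dw=0.2, n=20000, seed=1):
    """Monte Carlo years-lived-with-disability per case: each year an
    individual may progress to the disabling stage with probability
    p_progress scaled by an individual frailty multiplier."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        p = min(1.0, p_progress * frailties[i % len(frailties)])
        for year in range(years):
            if rng.random() < p:
                total += dw * (years - year)  # disabled for remaining years
                break
    return total / n

# Same population-average progression probability, without and with frailty
homog = mean_yld(0.10, [1.0])
heterog = mean_yld(0.10, [0.2, 1.8])
```

The heterogeneous population accrues noticeably fewer simulated DALYs than the homogeneous one, in the same direction as the bias the paper quantifies.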
Du, Jialu; Hu, Xin; Liu, Hongbo; Chen, C L Philip
2015-11-01
This paper develops an adaptive robust output feedback control scheme for dynamically positioned ships with unavailable velocities and unknown dynamic parameters in an unknown time-variant disturbance environment. The controller is designed by incorporating the high-gain observer and radial basis function (RBF) neural networks in vectorial backstepping method. The high-gain observer provides the estimations of the ship position and heading as well as velocities. The RBF neural networks are employed to compensate for the uncertainties of ship dynamics. The adaptive laws incorporating a leakage term are designed to estimate the weights of RBF neural networks and the bounds of unknown time-variant environmental disturbances. In contrast to the existing results of dynamic positioning (DP) controllers, the proposed control scheme relies only on the ship position and heading measurements and does not require a priori knowledge of the ship dynamics and external disturbances. By means of Lyapunov functions, it is theoretically proved that our output feedback controller can control a ship's position and heading to the arbitrarily small neighborhood of the desired target values while guaranteeing that all signals in the closed-loop DP control system are uniformly ultimately bounded. Finally, simulations involving two ships are carried out, and simulation results demonstrate the effectiveness of the proposed control scheme.
Optimal Scaling of Aftershock Zones using Ground Motion Forecasts
NASA Astrophysics Data System (ADS)
Wilson, John Max; Yoder, Mark R.; Rundle, John B.
2018-02-01
The spatial distribution of aftershocks following major earthquakes has received significant attention due to the shaking hazard these events pose for structures and populations in the affected region. Forecasting the spatial distribution of aftershock events is an important part of the estimation of future seismic hazard. A simple spatial shape for the zone of activity has often been assumed in the form of an ellipse having semimajor axis to semiminor axis ratio of 2.0. However, since an important application of these calculations is the estimation of ground shaking hazard, an effective criterion for forecasting future aftershock impacts is to use ground motion prediction equations (GMPEs) in addition to the more usual approach of using epicentral or hypocentral locations. Based on these ideas, we present an aftershock model that uses self-similarity and scaling relations to constrain parameters as an option for such hazard assessment. We fit the spatial aspect ratio to previous earthquake sequences in the studied regions, and demonstrate the effect of the fitting on the likelihood of post-disaster ground motion forecasts for eighteen recent large earthquakes. We find that the forecasts in most geographic regions studied benefit from this optimization technique, while some are better suited to the use of the a priori aspect ratio.
NASA Astrophysics Data System (ADS)
Hueneke, Tilman; Grossmann, Katja; Knecht, Matthias; Raecke, Rasmus; Stutz, Jochen; Werner, Bodo; Pfeilsticker, Klaus
2016-04-01
Changing atmospheric conditions during DOAS measurements from fast-moving aircraft platforms pose a challenge for trace gas retrievals. Traditional inversion techniques for retrieving trace gas concentrations from limb-scattered UV/vis spectroscopy, such as optimal estimation, require a priori information on Mie extinction (e.g., aerosol concentration and cloud cover) and albedo, which determine the atmospheric radiative transfer. In contrast to satellite applications, cloud filters cannot be applied because they would strongly reduce the usable amount of expensively gathered measurement data. In contrast to ground-based MAX-DOAS applications, an aerosol retrieval based on O4 is not able to constrain the radiative transfer in airborne applications due to the rapidly decreasing amount of O4 with altitude. Furthermore, the assumption of a constant cloud cover is not valid for fast-moving aircraft, thus requiring 2D or even 3D treatment of the radiative transfer. Therefore, traditional techniques are not applicable to most of the data gathered by fast-moving aircraft platforms. In order to circumvent these limitations, we have been developing the so-called X-gas scaling method. By utilising a proxy gas X (e.g. O3, O4, …), whose concentration is either known a priori or measured simultaneously in situ as well as remotely, an effective absorption length for the target gas is inferred. In this presentation, we discuss the strengths and weaknesses of the novel approach along with some sample cases. A particular strength of the X-gas scaling method is its insensitivity towards the aerosol abundance and cloud cover as well as wavelength-dependent effects, whereas its sensitivity towards the profiles of both gases requires a priori information on their shapes.
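The core arithmetic of the X-gas scaling idea, as described, can be sketched as follows; the function name, units, and numbers are illustrative assumptions, not the authors' implementation:

```python
def xgas_scaling(scd_target, scd_proxy, n_proxy):
    """Infer a target-gas concentration from its slant column density (SCD)
    using a proxy gas X whose concentration n_proxy is known (a priori or
    in situ): the proxy defines an effective absorption path that both
    gases are assumed to share."""
    l_eff = scd_proxy / n_proxy   # effective light path [cm], hypothetical units
    return scd_target / l_eff     # target concentration [molec/cm^3]

# hypothetical numbers: proxy SCD 1e18 molec/cm^2 at known 1e12 molec/cm^3
n_target = xgas_scaling(scd_target=5.0e16, scd_proxy=1.0e18, n_proxy=1.0e12)
```

The quoted sensitivity to profile shapes enters through the shared-path assumption: if target and proxy profiles differ, the effective path inferred from the proxy misrepresents the target's absorption path.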
Fitting a Two-Component Scattering Model to Polarimetric SAR Data from Forests
NASA Technical Reports Server (NTRS)
Freeman, Anthony
2007-01-01
Two simple scattering mechanisms are fitted to polarimetric synthetic aperture radar (SAR) observations of forests. The mechanisms are canopy scatter from a reciprocal medium with azimuthal symmetry and a ground scatter term that can represent double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants or Bragg scatter from a moderately rough surface, which is seen through a layer of vertically oriented scatterers. The model is shown to represent the behavior of polarimetric backscatter from a tropical forest and two temperate forest sites by applying it to data from the National Aeronautics and Space Administration/Jet Propulsion Laboratory's Airborne SAR (AIRSAR) system. Scattering contributions from the two basic scattering mechanisms are estimated for clusters of pixels in polarimetric SAR images. The solution involves the estimation of four parameters from four separate equations. This model fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem. The model is used to develop an understanding of the ground-trunk double-bounce scattering that is present in the data, which is seen to vary considerably as a function of incidence angle. Two parameters in the model fit appear to exhibit sensitivity to vegetation canopy structure, which is worth further exploration. Results from the model fit for the ground scattering term are compared with estimates from a forward model and shown to be in good agreement. The behavior of the scattering from the ground-trunk interaction is consistent with the presence of a pseudo-Brewster angle effect for the air-trunk scattering interface. If the Brewster angle is known, it is possible to directly estimate the real part of the dielectric constant of the trunks, a key variable in forward modeling of backscatter from forests.
It is also shown how, with a priori knowledge of the forest height, an estimate for the attenuation coefficient of the canopy can be obtained directly from the multi-incidence-angle polarimetric observations. This attenuation coefficient is another key variable in forward models and is generally related to the canopy density.
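As a sketch of what a four-parameter, four-equation fit of this kind can look like, the following uses a Freeman-Durden-style random-volume canopy plus a double-bounce ground term (volume power fv, ground power fd, and a complex ground ratio alpha). This is an illustrative formulation under those assumptions, not necessarily the paper's exact equations:

```python
def two_component_fit(phh, pvv, phv, c_hhvv):
    """Solve a volume + double-bounce decomposition from four polarimetric
    observables: |HH|^2, |VV|^2, |HV|^2, and the HH-VV* correlation.
    Assumes a random-volume canopy with <|HV|^2> = fv/3 and <HH VV*> = fv/3."""
    fv = 3.0 * phv                     # canopy (volume) power from cross-pol
    fd = pvv - fv                      # ground power from the VV residual
    alpha = (c_hhvv - fv / 3.0) / fd   # complex ground scattering ratio
    return fv, fd, alpha

# forward-model one pixel cluster, then invert it
fv_true, fd_true, a_true = 0.3, 0.5, -0.6 + 0.2j
phh = fv_true + fd_true * abs(a_true) ** 2
pvv = fv_true + fd_true
phv = fv_true / 3.0
c = fv_true / 3.0 + fd_true * a_true
fv, fd, alpha = two_component_fit(phh, pvv, phv, c)
resid = phh - (fv + fd * abs(alpha) ** 2)   # HH equation as a consistency check
```

The HH power is redundant in this particular parameterization, so it serves as a consistency check on the fitted cluster, in the spirit of the over-determined model fit described above.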
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.
2005-01-01
In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to determine which type of nonuniqueness it exhibits and thus what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminating structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, possible nonuniqueness diversity is typically neglected and nonuniqueness is regarded as a whole, as an unpleasant "black box," and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. 
In practice, when solving geophysical problems different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinear problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to the specific use of a priori information necessary to resolve each case. Four types of nonuniqueness typical of minimization problems are defined, with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problems. © Birkhäuser Verlag, Basel, 2005.
Calibrated Multiple Event Relocations of the Central and Eastern United States
NASA Astrophysics Data System (ADS)
Yeck, W. L.; Benz, H.; McNamara, D. E.; Bergman, E.; Herrmann, R. B.; Myers, S. C.
2015-12-01
Earthquake locations are a first-order observable which form the basis of a wide range of seismic analyses. Currently, the ANSS catalog primarily contains published single-event earthquake locations that rely on assumed 1D velocity models. Increasing the accuracy of cataloged earthquake hypocenter locations and origin times and constraining their associated errors can improve our understanding of Earth structure and have a fundamental impact on subsequent seismic studies. Multiple-event relocation algorithms often increase the precision of relative earthquake hypocenters but are hindered by their limited ability to provide realistic location uncertainties for individual earthquakes. Recently, a Bayesian approach to the multiple event relocation problem has proven to have many benefits including the ability to: (1) handle large data sets; (2) easily incorporate a priori hypocenter information; (3) model phase assignment errors; and, (4) correct for errors in the assumed travel time model. In this study we employ Bayesloc [Myers et al., 2007, 2009] to relocate earthquakes in the Central and Eastern United States from 1964-present. We relocate ~11,000 earthquakes with a dataset of ~439,000 arrival time observations. Our dataset includes arrival-time observations from the ANSS catalog supplemented with arrival-time data from the Reviewed ISC Bulletin (prior to 1981), targeted local studies, and arrival-time data from the TA Array. One significant benefit of the Bayesloc algorithm is its ability to incorporate a priori constraints on the probability distributions of specific earthquake location parameters. To constrain the inversion, we use high-quality calibrated earthquake locations from local studies, including studies from: Raton Basin, Colorado; Mineral, Virginia; Guy, Arkansas; Cheneville, Quebec; Oklahoma; and Mt. Carmel, Illinois. We also add depth constraints to 232 earthquakes from regional moment tensors. 
Finally, we add constraints from four historic (1964-1973) ground truth events from a verification database. We (1) evaluate our ability to improve our location estimations, (2) use improved locations to evaluate Earth structure in seismically active regions, and (3) examine improvements to the estimated locations of historic large magnitude earthquakes.
NASA Astrophysics Data System (ADS)
Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.
2012-12-01
Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (i.e., no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving the precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of the propagation of measurement errors and parameter uncertainties, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. 
The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions supported by examples with data for illustration and validation.
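The flavor of such explicit solutions can be illustrated by solving the advection-diffusion dispersion relation for a decaying, phase-shifted diurnal signal T ∝ exp(-az) cos(ωt - bz): measuring the amplitude-decay coefficient a and phase coefficient b between two sensors yields closed-form estimates of the thermal front velocity v and diffusivity κ. The grouping below is derived from the standard Stallman-type solution and may differ from the paper's exact expressions:

```python
import numpy as np

def velocity_and_diffusivity(amp_ratio, phase_lag, dz, period=86400.0):
    """Closed-form thermal front velocity v and thermal diffusivity kappa
    from the amplitude ratio Ar = A(z2)/A(z1) and phase lag (seconds) of a
    diurnal signal between two sensors separated by dz (meters).

    Derived by substituting T ~ exp(i*w*t - (a + i*b)*z) into
    T_t + v T_z = kappa T_zz and solving the real and imaginary parts."""
    w = 2.0 * np.pi / period
    a = -np.log(amp_ratio) / dz           # amplitude decay coefficient [1/m]
    b = w * phase_lag / dz                # phase coefficient [rad/m]
    kappa = a * w / (b * (a ** 2 + b ** 2))                # diffusivity [m^2/s]
    v = (w / b) * (b ** 2 - a ** 2) / (a ** 2 + b ** 2)    # front velocity [m/s]
    return v, kappa

# illustrative pair: amplitude halves and lags 2 h over 0.2 m of sediment
v, kappa = velocity_and_diffusivity(amp_ratio=0.5, phase_lag=7200.0, dz=0.2)
```

Because a and b enter only through their ratio and sum of squares, the estimates are insensitive to a common depth rescaling, which mirrors the depth-independence property noted above.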
DAISY: a new software tool to test global identifiability of biological and physiological systems.
Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D'Angiò, Leontina
2007-10-01
A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining if the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with a completely automated software tool, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
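A minimal sketch of sequential change detection with an unknown post-damage distribution replaces the known post-change mean by its maximum likelihood estimate inside a generalized likelihood ratio; the Gaussian feature model, threshold, and data below are illustrative assumptions, and the paper's algorithm additionally uses Bayesian updating and an approximation to bound memory and computation:

```python
import numpy as np

def glr_cusum(x, mu0, sigma, threshold):
    """Sequential change detection when the post-change mean is unknown
    a priori: for every candidate change point k, the post-change mean is
    re-estimated by maximum likelihood and the generalized likelihood
    ratio is compared to a threshold."""
    for t in range(1, len(x) + 1):
        best = 0.0
        for k in range(t):                       # candidate change point
            seg = x[k:t]
            mu1 = seg.mean()                     # MLE of post-change mean
            llr = np.sum((seg - mu0) ** 2 - (seg - mu1) ** 2) / (2 * sigma ** 2)
            best = max(best, llr)
        if best > threshold:
            return t                             # alarm time (1-indexed)
    return None

# feature sequence: 50 pre-damage samples, then a mean shift at sample 51
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
alarm = glr_cusum(x, mu0=0.0, sigma=1.0, threshold=15.0)
```

The O(n^2) scan over candidate change points is exactly the cost the paper's approximate method is designed to avoid.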
NASA Astrophysics Data System (ADS)
Akmaev, R. A.
1999-04-01
In Part 1 of this work ([Akmaev, 1999]), an overview of the theory of optimal interpolation (OI) ([Gandin, 1963]) and related techniques of data assimilation based on linear optimal estimation ([Liebelt, 1967]; [Catlin, 1989]; [Mendel, 1995]) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain optimal in some sense estimates of the true state from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information as, for example, the conventional least squares (e.g., [Liebelt, 1967]; [Weisberg, 1985]; [Press et al., 1992]; [Mendel, 1995]).
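The OI/BLUE update described above has the familiar closed form x_a = x_b + K(y - Hx_b) with gain K = BH^T(HBH^T + R)^(-1), which is exactly least squares with a priori information: the background covariance B spreads an observation's influence into data voids. A small sketch (the matrices are illustrative):

```python
import numpy as np

def optimal_interpolation(xb, B, y, H, R):
    """Optimal interpolation / BLUE update: combine a background (a priori)
    state xb with covariance B and observations y = H x + noise (cov R)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    xa = xb + K @ (y - H @ xb)                     # analysis estimate
    Pa = (np.eye(len(xb)) - K @ H) @ B             # analysis covariance
    return xa, Pa

# three grid points, only the middle one observed: OI fills the data voids
xb = np.zeros(3)
B = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
H = np.array([[0.0, 1.0, 0.0]])
R = np.array([[0.25]])
xa, Pa = optimal_interpolation(xb, B, np.array([1.0]), H, R)
```

The unobserved end points receive nonzero increments purely through the a priori correlations in B, illustrating how OI provides estimates away from the observations while the observation-error covariance R performs the smoothing.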
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
NASA Astrophysics Data System (ADS)
Rainaud, Jean-François; Clochard, Vincent; Delépine, Nicolas; Crabié, Thomas; Poudret, Mathieu; Perrin, Michel; Klein, Emmanuel
2018-07-01
Accurate reservoir characterization is needed all along the development of an oil and gas field study. It helps building 3D numerical reservoir simulation models for estimating the original oil and gas volumes in place and for simulating fluid flow behaviors. At a later stage of the field development, reservoir characterization can also help deciding which recovery techniques need to be used for fluids extraction. In complex media, such as faulted reservoirs, flow behavior predictions within volumes close to faults can be a very challenging issue. During the development plan, it is necessary to determine which types of communication exist between faults or which potential barriers exist for fluid flows. The solving of these issues rests on accurate fault characterization. In most cases, faults are not preserved along reservoir characterization workflows: the memory of the faults interpreted from seismic is not kept during seismic inversion and further interpretation of the result. The goal of our study is first to integrate a 3D fault network as a priori information into a model-based stratigraphic inversion procedure. Secondly, we apply our methodology to a well-known oil and gas case study over a typical North Sea field (UK Northern North Sea) in order to demonstrate its added value for determining reservoir properties. More precisely, the a priori model is composed of several geological units populated by physical attributes extrapolated from well-log data following the deposition mode; however, a priori model-building methods usually respect neither the 3D fault geometry nor the stratification dips on the fault sides. We address this difficulty by applying an efficient flattening method for each stratigraphic unit in our workflow. Even before seismic inversion, the obtained stratigraphic model has been directly used to model synthetic seismic on our case study. 
Comparisons show that synthetic seismic obtained from our 3D fault network model gives much lower residuals than a "basic" stratigraphic model. Finally, we apply our model-based inversion considering both faulted and non-faulted a priori models. By comparing the rock impedance results obtained in the two cases, we see a better delineation of the Brent-reservoir compartments when using the 3D faulted a priori model built with our method.
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. 
Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
A new fictitious domain approach for Stokes equation
NASA Astrophysics Data System (ADS)
Yang, Min
2017-10-01
The purpose of this paper is to present a new fictitious domain approach based on Nitsche's method combined with a penalty method for the Stokes equation. This method allows for an easy and flexible handling of the geometrical aspects. Stability and an a priori error estimate are proved. Finally, a numerical experiment is provided to verify the theoretical findings.
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter, and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
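Stage 2 above is essentially EM on a full-covariance Gaussian mixture. The following compact sketch (synthetic two-class data standing in for tissue intensities; the initialization and class count are illustrative choices, not the paper's) shows where the full covariance matrix enters both the E-step and the M-step:

```python
import numpy as np

def em_gmm(X, k, n_iter=50):
    """Minimal EM for a full-covariance Gaussian mixture, the kind of
    joint PDF estimate used to classify voxels into tissue classes."""
    n, d = X.shape
    # crude deterministic init: spread the means along the first coordinate
    order = np.argsort(X[:, 0])
    mu = X[order[np.linspace(0, n - 1, k).astype(int)]].copy()
    cov = np.stack([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities under each full-covariance Gaussian
        r = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            mah = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(cov[j]), diff)
            norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov[j]))
            r[:, j] = w[j] * np.exp(-0.5 * mah) / norm
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweight, re-center, and re-estimate full covariances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return w, mu, cov, r.argmax(axis=1)

# two well-separated synthetic "tissue classes" in a 2-D (T1, T2) feature space
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(300, 2)),
               rng.normal([4.0, 4.0], 0.5, size=(300, 2))])
w, mu, cov, labels = em_gmm(X, k=2)
```

Replacing the full covariances with their diagonals would discard the inter-channel (e.g., T1-T2) correlation that the paper exploits.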
Estimating direction in brain-behavior interactions: Proactive and reactive brain states in driving.
Garcia, Javier O; Brooks, Justin; Kerick, Scott; Johnson, Tony; Mullen, Tim R; Vettel, Jean M
2017-04-15
Conventional neuroimaging analyses have ascribed function to particular brain regions, exploiting the power of the subtraction technique in fMRI and event-related potential analyses in EEG. Moving beyond this convention, many researchers have begun exploring network-based neurodynamics and coordination between brain regions as a function of behavioral parameters or environmental statistics; however, most approaches average evoked activity across the experimental session to study task-dependent networks. Here, we examined on-going oscillatory activity as measured with EEG and use a methodology to estimate directionality in brain-behavior interactions. After source reconstruction, activity within specific frequency bands (delta: 2-3 Hz; theta: 4-7 Hz; alpha: 8-12 Hz; beta: 13-25 Hz) in a priori regions of interest was linked to continuous behavioral measurements, and we used a predictive filtering scheme to estimate the asymmetry between brain-to-behavior and behavior-to-brain prediction using a variant of Granger causality. We applied this approach to a simulated driving task and examined directed relationships between brain activity and continuous driving performance (steering behavior or vehicle heading error). Our results indicated that two neuro-behavioral states may be explored with this methodology: a Proactive brain state that actively plans the response to the sensory information and is characterized by delta-beta activity, and a Reactive brain state that processes incoming information and reacts to environmental statistics primarily within the alpha band. Published by Elsevier Inc.
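A stripped-down version of the directionality estimate is the variance-reduction (log-ratio) form of Granger causality: compare how much the driver's past reduces the target's one-step prediction error in each direction. This is a sketch of the general idea, not the paper's predictive-filtering variant, and the simulated coupling below is illustrative:

```python
import numpy as np

def granger_asymmetry(x, y, lag=1):
    """Directionality sketch: Granger index of x driving y minus the index
    of y driving x, each measured as the log ratio of residual variances
    with and without the driver's lagged values in a linear predictor."""
    def granger_index(target, driver):
        T = len(target)
        # full model: target[t] on its own past and the driver's past
        A = np.column_stack([target[:-lag], driver[:-lag], np.ones(T - lag)])
        coef, *_ = np.linalg.lstsq(A, target[lag:], rcond=None)
        r = target[lag:] - A @ coef
        # baseline: target's own past only
        A0 = np.column_stack([target[:-lag], np.ones(T - lag)])
        c0, *_ = np.linalg.lstsq(A0, target[lag:], rcond=None)
        r0 = target[lag:] - A0 @ c0
        return np.log(r0.var() / r.var())    # driver -> target index
    return granger_index(y, x) - granger_index(x, y)  # >0: x -> y dominates

# toy "brain drives behavior" simulation: y is driven by lagged x
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
asym = granger_asymmetry(x, y)
```

A positive asymmetry indicates the x-to-y direction dominates, analogous to declaring a Proactive (brain-to-behavior) versus Reactive (behavior-to-brain) state.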
Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.
Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya
2013-03-01
We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.
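The combination step of the HSF reduces, per pixel, to a posterior-weighted mean of the candidate filter outputs (the Bayesian MMSE estimate under the hypothesis model). A toy sketch, with made-up posteriors standing in for the locally predicted filter performance:

```python
import numpy as np

def hsf_combine(filter_outputs, posteriors):
    """Blend several candidate filter outputs per pixel in proportion to the
    posterior probability that each filter best predicts the original pixel;
    the posterior-weighted mean is the Bayesian MMSE estimate."""
    F = np.asarray(filter_outputs)   # shape (n_filters, H, W)
    P = np.asarray(posteriors)       # shape (n_filters, H, W), sums to 1 per pixel
    return (F * P).sum(axis=0)

# toy 1x2 image: filter A is trusted on the left pixel, filter B on the right
outs = [np.array([[10.0, 0.0]]), np.array([[0.0, 20.0]])]
post = [np.array([[0.9, 0.1]]), np.array([[0.1, 0.9]])]
img = hsf_combine(outs, post)
```

In the actual method the posteriors come from a classifier on locally computed feature vectors, trained offline with the EM-derived unsupervised procedure described above.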
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form cx^k(1 - x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
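The linearity result follows from beta-binomial conjugacy: a prior proportional to x^k(1-x)^m is Beta(k+1, m+1), and its posterior mean after observing binomially distributed jump counts is linear in the number of jumps. A small sketch (the parameter names are illustrative):

```python
from fractions import Fraction

def mmse_rate_estimate(k_prior, m_prior, jumps, trials):
    """MMSE (posterior-mean) estimate of a jump rate x with prior density
    proportional to x^k (1-x)^m, i.e. Beta(k+1, m+1), after observing
    `jumps` successes in `trials` binomial observations. Conjugacy makes
    the estimate exactly linear in the observed jump count."""
    a = k_prior + 1 + jumps               # Beta(a, b) posterior parameters
    b = m_prior + 1 + trials - jumps
    return Fraction(a, a + b)             # posterior mean

# prior proportional to x(1-x), i.e. Beta(2, 2); observe 7 jumps in 10 trials
est = mmse_rate_estimate(k_prior=1, m_prior=1, jumps=7, trials=10)
```

Writing the estimate as (k + 1 + jumps)/(k + m + 2 + trials) makes the linearity in the jump count explicit, which is why the unconstrained MMSE estimate coincides with the best linear one for this prior family.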
Masterlark, Timothy; Lu, Zhong; Rykhus, Russell P.
2006-01-01
Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992–1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640 °C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1 × 10⁷ m³) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the corresponding estimates for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500–800 °C.
NASA Technical Reports Server (NTRS)
Challa, M. S.; Natanson, G. A.; Baker, D. F.; Deutschmann, J. K.
1994-01-01
This paper describes real-time attitude determination results for the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX), a gyroless spacecraft, using a Kalman filter/Euler equation approach denoted the real-time sequential filter (RTSF). The RTSF is an extended Kalman filter whose state vector includes the attitude quaternion and corrections to the rates, which are modeled as Markov processes with small time constants. The rate corrections impart a significant robustness to the RTSF against errors in modeling the environmental and control torques, as well as errors in the initial attitude and rates, while maintaining a small state vector. SAMPEX flight data from various mission phases are used to demonstrate the robustness of the RTSF against a priori attitude and rate errors of up to 90 deg and 0.5 deg/sec, respectively, as well as a sensitivity of 0.0003 deg/sec in estimating rate corrections in torque computations. In contrast, it is shown that the RTSF attitude estimates without the rate corrections can degrade rapidly. RTSF advantages over single-frame attitude determination algorithms are also demonstrated through (1) substantial improvements in attitude solutions during sun-magnetic field coalignment and (2) magnetic-field-only attitude and rate estimation during the spacecraft's sun-acquisition mode. A robust magnetometer-only attitude-and-rate determination method is also developed to provide for the contingency when both sun data and a priori knowledge of the spacecraft state are unavailable. This method includes a deterministic algorithm used to initialize the RTSF with coarse estimates of the spacecraft attitude and rates. The combined algorithm has been found effective, yielding accuracies of 1.5 deg in attitude and 0.01 deg/sec in the rates, with convergence times as short as 400 sec.
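The rate corrections above are modeled as Markov processes with small time constants. A minimal sketch of such a model and its discrete-time propagation, assuming a scalar first-order Gauss-Markov form (the time constant, variance, and step size are illustrative, not SAMPEX flight values):

```python
import numpy as np

def propagate_markov_correction(b0, tau, q, dt, n_steps, rng):
    """Simulate a first-order Gauss-Markov process
        b[k+1] = exp(-dt/tau) * b[k] + w[k],
    a common model for slowly varying rate corrections in an EKF
    state vector.  q is the steady-state variance of the process;
    the driving-noise variance is chosen so Var[b] relaxes to q."""
    phi = np.exp(-dt / tau)
    sigma_w = np.sqrt(q * (1.0 - phi ** 2))
    b = np.empty(n_steps + 1)
    b[0] = b0
    for k in range(n_steps):
        b[k + 1] = phi * b[k] + sigma_w * rng.standard_normal()
    return b
```

A small time constant tau makes the correction forget its past quickly, which is what lets the filter absorb unmodeled torque errors without growing the state vector.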
Convex optimization of MRI exposure for mitigation of RF-heating from active medical implants.
Córcoles, Juan; Zastrow, Earl; Kuster, Niels
2015-09-21
Local RF-heating of elongated medical implants during magnetic resonance imaging (MRI) may pose a significant health risk to patients. The actual patient risk depends on various parameters including RF magnetic field strength and frequency, MR coil design, patient's anatomy, posture, and imaging position, implant location, RF coupling efficiency of the implant, and the bio-physiological responses associated with the induced local heating. We present three constrained convex optimization strategies that incorporate the implant's RF-heating characteristics for the reduction of local heating of medical implants during MRI. The study emphasizes the complementary performances of the different formulations. The analysis demonstrates that RF-induced heating of elongated metallic medical implants can be carefully controlled and balanced against MRI quality. A reduction of heating of up to 25 dB can be achieved at the cost of reduced uniformity in the magnitude of the B1+ field of less than 5%. The current formulations incorporate a priori knowledge of clinically-specific parameters, which is assumed to be available. Before these techniques can be applied practically in the broader clinical context, further investigations are needed to determine whether reduced access to a priori knowledge regarding, e.g. the patient's anatomy, implant routing, RF-transmitter, and RF-implant coupling, can be accepted within reasonable levels of uncertainty.
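The paper's three constrained formulations are not reproduced in the abstract, but the underlying convex trade-off between B1+ fidelity and implant coupling can be illustrated with a toy penalized least-squares surrogate. The matrix A (mapping coil drive weights to B1+ field samples) and coupling vector h here are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

def shim_with_heating_penalty(A, b, h, lam):
    """Toy convex trade-off between B1+ uniformity and implant heating.
    A maps complex coil drive weights w to sampled B1+ values, b is the
    desired (uniform) field, and |h^H w|^2 is a surrogate for the power
    coupled into the implant.  Minimizing
        ||A w - b||^2 + lam * |h^H w|^2
    is convex with the closed-form normal-equations solution below."""
    H = A.conj().T @ A + lam * np.outer(h, h.conj())
    return np.linalg.solve(H, A.conj().T @ b)
```

Sweeping lam traces the trade-off curve the abstract describes: larger lam suppresses the coupling term at the cost of a larger field-fidelity residual.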
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
NASA Astrophysics Data System (ADS)
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely on the basis of simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.
Estimating the impact of birth control on fertility rate in sub-Saharan Africa.
Ijaiya, Gafar T; Raheem, Usman A; Olatinwo, Abdulwaheed O; Ijaiya, Munir-Deen A; Ijaiya, Mukaila A
2009-12-01
Using cross-country data drawn from 40 countries and multiple regression analysis, this paper examines the impact of birth control devices on the fertility rate in sub-Saharan Africa. Our a priori expectation is that the more women use birth control devices, the lower the fertility rate in sub-Saharan Africa will be. The results obtained from the study indicate that, except for the withdrawal method, which ran contrary to our expectation, the other methods (the use of pills, injection, intrauterine device (IUD), condom/diaphragm and cervical cap, female sterilization, and periodic abstinence/rhythm) fulfilled our a priori expectations. These results notwithstanding, the paper suggests measures such as a massive enlightenment campaign on the benefits of these birth control devices, frequent checking of the potency of the devices, and good governance in their delivery.
Reassignment of scattered emission photons in multifocal multiphoton microscopy.
Cha, Jae Won; Singh, Vijay Raj; Kim, Ki Hean; Subramanian, Jaichandar; Peng, Qiwen; Yu, Hanry; Nedivi, Elly; So, Peter T C
2014-06-05
Multifocal multiphoton microscopy (MMM) achieves fast imaging by simultaneously scanning multiple foci across different regions of the specimen. The use of imaging detectors in MMM, such as CCDs or CMOS sensors, results in degradation of the image signal-to-noise ratio (SNR) due to the scattering of emitted photons. SNR can be partly recovered using multianode photomultiplier tubes (MAPMT). In this design, however, emission photons scattered to neighboring anodes are encoded by the foci scan location, resulting in ghost images. The crosstalk between different anodes is currently measured a priori, which is cumbersome as it depends on specimen properties. Here, we present a photon reassignment method for MMM, established on the basis of maximum likelihood (ML) estimation, for quantification of crosstalk between the anodes of the MAPMT without a priori measurement. The method reassigns the photons generating the ghost images to their original spatial locations and thus increases the SNR of the final reconstructed image.
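The ML machinery itself is not spelled out in the abstract; the sketch below shows only the reassignment step for a known crosstalk matrix, written as the standard Poisson-ML (Richardson-Lucy-type) EM iteration. The crosstalk matrix M here is a hypothetical example, and the paper's contribution of estimating M without a priori measurement is not reproduced:

```python
import numpy as np

def reassign_photons(counts, M, n_iter=500):
    """EM iteration for the Poisson ML estimate of true per-focus
    signals x from anode counts y ~ Poisson(M x), where M[i, j] is the
    probability that a photon from focus j is detected on anode i.
    This is the classic Richardson-Lucy multiplicative update."""
    x = np.full(M.shape[1], counts.sum() / M.shape[1])
    for _ in range(n_iter):
        pred = M @ x
        x *= (M.T @ (counts / np.maximum(pred, 1e-12))) / M.sum(axis=0)
    return x
```

For noise-free counts and an invertible crosstalk matrix, the iteration converges to the exact per-focus signals while keeping every estimate nonnegative, which is the property that suppresses the ghost images.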
NASA Astrophysics Data System (ADS)
Gleason, Colin J.; Smith, Laurence C.; Lee, Jinny
2014-12-01
Knowledge of river discharge is critically important for water resource management, climate modeling, and improved understanding of the global water cycle, yet discharge is poorly known in much of the world. Remote sensing holds promise to mitigate this gap, yet current approaches for quantitative retrievals of river discharge require in situ calibration or a priori knowledge of river hydraulics, limiting their utility in unmonitored regions. Recently, Gleason and Smith (2014) demonstrated discharge retrievals within 20-30% of in situ observations solely from Landsat TM satellite images through discovery of a river-specific geomorphic scaling phenomenon termed at-many-stations hydraulic geometry (AMHG). This paper advances the AMHG discharge retrieval approach via additional parameter optimizations and validation on 34 gauged rivers spanning a diverse range of geomorphic and climatic settings. Sensitivity experiments reveal that discharge retrieval accuracy varies with river morphology, reach averaging procedure, and optimization parameters. Quality of remotely sensed river flow widths is also important. Recommended best practices include a proposed global parameter set for use when a priori information is unavailable. Using this global parameterization, AMHG discharge retrievals are successful for most investigated river morphologies (median RRMSE 33% of in situ gauge observations), except braided rivers (median RRMSE 74%), rivers having low at-a-station hydraulic geometry b exponents (reach-averaged b < 0.1, median RRMSE 86%), and arid rivers having extreme discharge variability (median RRMSE > 1000%). Excluding such environments, 26-41% RRMSE agreement between AMHG discharge retrievals and in situ gauge observations suggests AMHG can meaningfully address global discharge knowledge gaps solely from repeat satellite imagery.
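AMHG builds on at-a-station hydraulic geometry, the power-law relation w = aQ^b whose b exponent is referenced above. A minimal sketch of fitting that relation from paired width-discharge observations (the underlying regression, not the AMHG discharge retrieval itself):

```python
import numpy as np

def fit_hydraulic_geometry(widths, discharges):
    """Fit the at-a-station hydraulic geometry relation w = a * Q**b
    by ordinary least squares in log-log space; returns (a, b).
    widths and discharges are paired positive observations."""
    b, log_a = np.polyfit(np.log(discharges), np.log(widths), 1)
    return np.exp(log_a), b
```

The finding above that retrievals degrade for reach-averaged b < 0.1 reflects this relation directly: when width barely responds to discharge, width observations carry little discharge information.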
NASA Technical Reports Server (NTRS)
Todorovic-Juchniewicz, Bozenna; Sitarski, Grzegorz
1992-01-01
To improve the orbits, all the positional observations of the comets were collected. The observations were selected and weighted according to objective mathematical criteria, and the mean residuals a priori were calculated for both comets. We took into account nongravitational effects in the comets' motion using Marsden's method applied in two ways: either determining the three constant parameters A₁, A₂, A₃, or the four parameters A, η, I, φ connected with the rotating nucleus of the comet. To link successfully all the observations, we had to assume for both comets that A(t) = A₀ exp(-Bt), where B was an additional nongravitational parameter.
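For context, Marsden-style nongravitational accelerations scale a standard water-ice sublimation function g(r) of heliocentric distance. A sketch combining the commonly quoted g(r) constants (assumed here from the standard Marsden, Sekanina & Yeomans formulation, not taken from this paper) with the time-dependent amplitude A(t) = A₀ exp(-Bt) used in the linkage above:

```python
import math

# Commonly quoted constants of Marsden's g(r) for water-ice
# sublimation; r is heliocentric distance in AU.
ALPHA, R0, M, N, K = 0.1113, 2.808, 2.15, 5.093, 4.6142

def g(r):
    """Dimensionless sublimation function, normalized so g(1 AU) = 1."""
    return ALPHA * (r / R0) ** -M * (1.0 + (r / R0) ** N) ** -K

def nongrav_amplitude(A0, B, t, r):
    """Time-dependent nongravitational amplitude assumed in the paper,
    A(t) = A0 * exp(-B * t), scaled by g(r)."""
    return A0 * math.exp(-B * t) * g(r)
```

The exponential factor lets a single parameter B absorb the secular fading of outgassing activity over successive apparitions.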
NASA Astrophysics Data System (ADS)
Baker, Ben; Stachnik, Joshua; Rozhkov, Mikhail
2017-04-01
The International Data Center is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of specific events, in accordance with the Protocol to the Comprehensive Nuclear-Test-Ban Treaty. Determination of the seismic event source mechanism and its depth is closely related to these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or subset of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. In this presentation we demonstrate preliminary results obtained with the latter approach from an improved software design. In this development we aimed to be compliant with the different modes of the CTBT monitoring regime: cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body- and surface-wave recordings, be fast enough to satisfy both on-demand studies and automatic processing, and properly incorporate observed waveforms and any a priori uncertainties as well as accurately estimate a posteriori uncertainties. Posterior distributions of moment tensor parameters show narrow peaks where a significant number of reliable surface wave observations are available. For earthquake examples, fault orientation (strike, dip, and rake) posterior distributions also provide results consistent with published catalogues. Inclusion of observations on horizontal components will provide further constraints. In addition, the calculation of teleseismic P-wave Green's functions is improved through prior analysis to determine an appropriate attenuation parameter for each source-receiver path. The implemented HDF5-based Green's function pre-packaging allows much greater flexibility in utilizing different software packages and methods for computation.
Future additions will allow rapid use of Instaseis/AXISEM full-waveform synthetics alongside the pre-computed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates have been determined for the DPRK events and for shallow earthquakes using a new implementation of teleseismic P-wave waveform fitting. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions, employing a recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective. Probabilistic uncertainty estimates on the moment tensor parameters lend robustness to the solution.
NASA Astrophysics Data System (ADS)
Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.
2012-07-01
In this paper, MODIS remote sensing data, featured by low cost, high timeliness and moderate/low spatial resolution, were first used for mixed-pixel spectral decomposition over the North China Plain (NCP) study region to extract a useful regionalized indicator parameter (RIP) from the initially selected indicators, namely the fraction/percentage of winter wheat planting area in each pixel, used as a regionalized indicator variable (RIV) for spatial sampling. The RIV values were then spatially analyzed to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed, providing a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, an adaptive analysis and decision strategy enabled optimal local spatial prediction and a gridded system of extrapolation results to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.
Application of ray-traced tropospheric slant delays to geodetic VLBI analysis
NASA Astrophysics Data System (ADS)
Hofmeister, Armin; Böhm, Johannes
2017-08-01
The correction of tropospheric influences via so-called path delays is critical for the analysis of observations from space geodetic techniques like the very long baseline interferometry (VLBI). In standard VLBI analysis, the a priori slant path delays are determined using the concept of zenith delays, mapping functions and gradients. The a priori use of ray-traced delays, i.e., tropospheric slant path delays determined with the technique of ray-tracing through the meteorological data of numerical weather models (NWM), serves as an alternative way of correcting the influences of the troposphere on the VLBI observations within the analysis. In the presented research, the application of ray-traced delays to the VLBI analysis of sessions in a time span of 16.5 years is investigated. Ray-traced delays have been determined with program RADIATE (see Hofmeister in Ph.D. thesis, Department of Geodesy and Geophysics, Faculty of Mathematics and Geoinformation, Technische Universität Wien. http://resolver.obvsg.at/urn:nbn:at:at-ubtuw:1-3444, 2016) utilizing meteorological data provided by NWM of the European Centre for Medium-Range Weather Forecasts (ECMWF). In comparison with a standard VLBI analysis, which includes the tropospheric gradient estimation, the application of the ray-traced delays to an analysis, which uses the same parameterization except for the a priori slant path delay handling and the used wet mapping factors for the zenith wet delay (ZWD) estimation, improves the baseline length repeatability (BLR) at 55.9% of the baselines at sub-mm level. If no tropospheric gradients are estimated within the compared analyses, 90.6% of all baselines benefit from the application of the ray-traced delays, which leads to an average improvement of the BLR of 1 mm. The effects of the ray-traced delays on the terrestrial reference frame are also investigated. 
A separate assessment of the RADIATE ray-traced delays is carried out by comparison to the ray-traced delays from the National Aeronautics and Space Administration Goddard Space Flight Center (NASA GSFC) (Eriksson and MacMillan in http://lacerta.gsfc.nasa.gov/tropodelays, 2016) with respect to the analysis performances in terms of BLR results. If tropospheric gradient estimation is included in the analysis, 51.3% of the baselines benefit from the RADIATE ray-traced delays at sub-mm difference level. If no tropospheric gradients are estimated within the analysis, the RADIATE ray-traced delays deliver a better BLR at 63% of the baselines compared to the NASA GSFC ray-traced delays.
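The standard a priori slant delay model that the ray-traced delays replace (zenith delays, mapping functions and gradients) can be sketched as follows. The 1/sin(e) mapping factor is a deliberate simplification for illustration (real analyses use VMF-type continued-fraction mapping functions), while the gradient term uses the Chen-Herring form with its usual constant:

```python
import math

def slant_delay(elev_deg, az_deg, zhd, zwd, grad_n, grad_e):
    """A priori slant path delay (meters) in the conventional form
        d = mf_h(e)*ZHD + mf_w(e)*ZWD + mf_g(e)*(G_N cos a + G_E sin a).
    Here the hydrostatic and wet mapping factors are both approximated
    by the simple geometric factor 1/sin(e); the gradient mapping
    factor follows the Chen-Herring form 1/(sin(e)tan(e) + C)."""
    e = math.radians(elev_deg)
    a = math.radians(az_deg)
    mf = 1.0 / math.sin(e)
    mf_g = 1.0 / (math.sin(e) * math.tan(e) + 0.0032)
    return mf * zhd + mf * zwd + mf_g * (grad_n * math.cos(a) + grad_e * math.sin(a))
```

Ray-tracing through NWM fields replaces this entire parameterization with a delay computed along the actual bent ray path, which is why it can also substitute for the estimated gradients in the comparisons above.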
Some New Mathematical Methods for Variational Objective Analysis
NASA Technical Reports Server (NTRS)
Wahba, G.; Johnson, D. R.
1984-01-01
New and/or improved variational methods for simultaneously combining forecast, heterogeneous observational data, a priori climatology, and physics to obtain improved estimates of the initial state of the atmosphere for the purpose of numerical weather prediction are developed. Cross validated spline methods are applied to atmospheric data for the purpose of improved description and analysis of atmospheric phenomena such as the tropopause and frontal boundary surfaces.
An FP7 "Space" project: Aphorism "Advanced PRocedures for volcanic and Seismic Monitoring"
NASA Astrophysics Data System (ADS)
Di Iorio, A., Sr.; Stramondo, S.; Bignami, C.; Corradini, S.; Merucci, L.
2014-12-01
The APHORISM project proposes the development and testing of two new methods to combine Earth Observation satellite data from different sensors with ground data. The aim is to demonstrate that these two types of data, appropriately managed and integrated, can provide new and improved GMES products useful for seismic and volcanic crisis management. The first method, APE (A Priori information for Earthquake damage mapping), concerns the generation of maps for detecting and estimating the damage caused by an earthquake. Using satellite data to investigate earthquake damage is not in itself an innovative issue: there is a wide literature, and many projects address it, but the approach is usually based only on change detection techniques and classification algorithms. The novelty of APE lies in the exploitation of a priori information derived from InSAR time series measuring surface movements, shake maps obtained from seismological data, and vulnerability information. This a priori information is then integrated with the change detection map to improve accuracy and to limit false alarms. The second method deals with volcanic crisis management. The method, MACE (Multi-platform volcanic Ash Cloud Estimation), concerns the exploitation of GEO (Geosynchronous Earth Orbit) sensor platforms, LEO (Low Earth Orbit) satellite sensors and ground measurements to improve ash detection and retrieval and to characterize volcanic ash clouds. The basic idea of MACE is to improve volcanic ash retrievals at the space-time scale by using both the LEO and GEO estimations and in-situ data. Indeed, the standard thermal-infrared ash retrieval is integrated with data from a wider spectral range, from visible to microwave. Ash detection is also extended to cases of cloudy atmosphere or steam plumes.
The APE and MACE methods have been defined to provide products oriented toward the next ESA Sentinel satellite missions. The project is funded under the European Union FP7 program, and the kick-off meeting was held at the INGV premises in Rome on 18th December 2013.
Data-Rate Estimation for Autonomous Receiver Operation
NASA Technical Reports Server (NTRS)
Tkacenko, A.; Simon, M. K.
2005-01-01
In this article, we present a series of algorithms for estimating the data rate of a signal whose admissible data rates are integer-base, integer-power multiples of a known basic data rate. These algorithms can be applied to the Electra radio currently used in the Deep Space Network (DSN), which employs data rates having the above relationship. The estimation is carried out in an autonomous setting in which very little a priori information is assumed. It is done by exploiting an elegant property of the split symbol moments estimator (SSME), which is traditionally used to estimate the signal-to-noise ratio (SNR) of the received signal. By quantizing the assumed symbol-timing error or jitter, we present an all-digital implementation of the SSME which can be used to jointly estimate the data rate, SNR, and jitter. Simulation results presented show that these joint estimation algorithms perform well, even in the low-SNR regions typically encountered in the DSN.
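The split-symbol moment idea behind the SNR part can be sketched as follows. This is a simplified, jitter-free version on simulated BPSK half-symbol sums (the article's all-digital joint data-rate/SNR/jitter estimator is considerably more elaborate):

```python
import numpy as np

def ssme_snr(halves):
    """Split-symbol moments SNR estimate.  halves is an (N, 2) array of
    the two half-symbol integrals A, B of each received symbol.  With
    A = s*m/2 + n1, B = s*m/2 + n2 (s = +/-1) and independent
    half-symbol noise of total variance sigma^2,
        E[A*B] = m^2/4   and   E[(A+B)^2] = m^2 + sigma^2,
    so SNR = m^2/sigma^2 = 4*E[AB] / (E[(A+B)^2] - 4*E[AB])."""
    A, B = halves[:, 0], halves[:, 1]
    p = np.mean(A * B)           # signal-power moment
    t = np.mean((A + B) ** 2)    # total-power moment
    return 4.0 * p / (t - 4.0 * p)
```

Because the product A*B is insensitive to the symbol sign, the estimate needs no data decisions, which is what makes the moments approach attractive in the autonomous, low-information setting described above.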
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification process is, however, among the difficult inverse problems. This paper carries out some studies on this issue. An approach using single-sensor information with noise was developed to identify a sudden continuous emission of a trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical measured concentration sequences at the sensor position, which are obtained from multiple hypotheses on the three source parameters. Source identification is then realized by globally searching for the optimal values with the objective function of maximum location probability. Considering the large computational load resulting from this global search, a local fine-mesh source search method based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. The studies have shown that the flow field has a very important influence on source identification. Therefore, we also discuss the impact of non-matching flow fields with estimation deviations on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above methods, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters (position, emission strength and initial emission time) of the pollutant source in the experiment could be estimated by using the methods for flow field matching and source identification.
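The multiple-hypotheses comparison can be sketched with a toy one-dimensional forward model. The Ogata-Banks advection-diffusion solution and all parameter values below are illustrative stand-ins for the paper's setup, not its actual models:

```python
import numpy as np
from math import erfc, exp, sqrt

def conc(x, t, x_s, c0, t0, u=0.5, D=0.05):
    """Hypothetical 1-D forward model: concentration at (x, t) from a
    continuous source of strength c0 switched on at time t0 at x_s,
    in a steady flow of velocity u with diffusivity D (Ogata-Banks)."""
    tau = t - t0
    if tau <= 0 or x < x_s:
        return 0.0
    xi = x - x_s
    arg = u * xi / D
    tail = exp(arg) * erfc((xi + u * tau) / (2 * sqrt(D * tau))) if arg < 700 else 0.0
    return 0.5 * c0 * (erfc((xi - u * tau) / (2 * sqrt(D * tau))) + tail)

def identify_source(times, measured, x_sensor, grid):
    """Grid search: compare the measured sequence to each hypothetical
    sequence at the sensor position and keep the closest hypothesis
    (a toy, distance-based version of the multiple-hypotheses idea)."""
    best, best_d = None, np.inf
    for (x_s, c0, t0) in grid:
        hyp = np.array([conc(x_sensor, t, x_s, c0, t0) for t in times])
        d = np.linalg.norm(hyp - measured)
        if d < best_d:
            best, best_d = (x_s, c0, t0), d
    return best
```

The coarse grid here plays the role of the coarse-mesh stage above; in the paper, a fine-mesh search is then concentrated where the coarse-mesh location probability is high.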
New Opportunities for Remote Sensing Ionospheric Irregularities by Fitting Scintillation Spectra
NASA Astrophysics Data System (ADS)
Carrano, C. S.; Rino, C. L.; Groves, K. M.
2017-12-01
In a recent paper, we presented a phase screen theory for the spectrum of intensity scintillations when the refractive index irregularities follow a two-component power law [Carrano and Rino, DOI: 10.1002/2015RS005903]. More recently we have investigated the inverse problem, whereby phase screen parameters are inferred from scintillation time series. This is accomplished by fitting the spectrum of intensity fluctuations with a parametrized theoretical model using Maximum Likelihood (ML) methods. The Markov-Chain Monte-Carlo technique provides a posteriori errors and confidence intervals. The Akaike Information Criterion (AIC) provides justification for the use of one- or two-component irregularity models. We refer to this fitting as Irregularity Parameter Estimation (IPE), since it provides a statistical description of the irregularities from the scintillations they produce. In this talk, we explore some new opportunities for remote sensing of ionospheric irregularities afforded by IPE. Statistical characterization of irregularities and the plasma bubbles in which they are embedded provides insight into the development of the underlying instability. In a companion paper by Rino et al., IPE is used to interpret scintillation due to simulated EPB structure. IPE can be used to reconcile multi-frequency scintillation observations and to construct high-fidelity scintillation simulation tools. In space-to-ground propagation scenarios, for which an estimate of the distance to the scattering region is available a priori, IPE enables retrieval of the zonal irregularity drift. In radio occultation scenarios, the distance to the irregularities is generally unknown, but IPE enables retrieval of the Fresnel frequency. A geometric model for the effective scan velocity maps Fresnel frequency to Fresnel scale, yielding the distance to the irregularities. We demonstrate this approach by geolocating irregularities observed by the CORISS instrument onboard the C/NOFS satellite.
A Graphical User Interface for a Method to Infer Kinetics and Network Architecture (MIKANA)
Mourão, Márcio A.; Srividhya, Jeyaraman; McSharry, Patrick E.; Crampin, Edmund J.; Schnell, Santiago
2011-01-01
One of the main challenges in the biomedical sciences is the determination of reaction mechanisms that constitute a biochemical pathway. During the last decades, advances have been made in building complex diagrams showing the static interactions of proteins. The challenge for systems biologists is to build realistic models of the dynamical behavior of reactants, intermediates and products. For this purpose, several methods have been recently proposed to deduce the reaction mechanisms or to estimate the kinetic parameters of the elementary reactions that constitute the pathway. One such method is MIKANA: Method to Infer Kinetics And Network Architecture. MIKANA is a computational method to infer both reaction mechanisms and estimate the kinetic parameters of biochemical pathways from time course data. To make it available to the scientific community, we developed a Graphical User Interface (GUI) for MIKANA. Among other features, the GUI validates and processes input time course data, displays the inferred reactions, generates the differential equations for the chemical species in the pathway and plots the prediction curves on top of the input time course data. We also added a new feature to MIKANA that allows the user to exclude a priori known reactions from the inferred mechanism. This addition improves the performance of the method. In this article, we illustrate the GUI for MIKANA with three examples: an irreversible Michaelis–Menten reaction mechanism; the interaction map of chemical species of the muscle glycolytic pathway; and the glycolytic pathway of Lactococcus lactis. We also describe the code and methods in sufficient detail to allow researchers to further develop the code or reproduce the experiments described. The code for MIKANA is open source, free for academic and non-academic use and is available for download (Information S1). PMID:22096591
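As a toy illustration of fitting kinetic parameters to time course data (not MIKANA's inference algorithm, which also infers the mechanism), consider the irreversible Michaelis-Menten example cited above:

```python
import numpy as np

def mm_progress(s0, vmax, km, times, dt=1e-3):
    """Forward-Euler integration of the irreversible Michaelis-Menten
    rate law dS/dt = -Vmax*S/(Km + S), sampled at the given times."""
    n = int(round(times[-1] / dt))
    traj_t = np.arange(n + 1) * dt
    traj_s = np.empty(n + 1)
    s = s0
    for i in range(n + 1):
        traj_s[i] = s
        s -= vmax * s / (km + s) * dt
    return np.interp(times, traj_t, traj_s)

def fit_mm(times, s_obs, s0, vmax_grid, km_grid):
    """Coarse grid search for (Vmax, Km) minimizing the squared error
    between the observed and simulated progress curves."""
    best, best_e = None, np.inf
    for vmax in vmax_grid:
        for km in km_grid:
            e = np.sum((mm_progress(s0, vmax, km, times) - s_obs) ** 2)
            if e < best_e:
                best, best_e = (vmax, km), e
    return best
```

A grid search is used here purely for transparency; any least-squares optimizer over (Vmax, Km) would serve the same illustrative purpose.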
Determining paediatric patient thickness from a single digital radiograph-a proof of principle.
Worrall, Mark; Vinnicombe, Sarah; Sutton, David G
2018-04-05
This work presents a proof of principle for a method of estimating the thickness of an attenuator from a single radiograph using the image, the exposure factors with which it was acquired and a priori knowledge of the characteristics of the X-ray unit and detector used for the exposure. It is intended that this could be developed into a clinical tool to assist with paediatric patient dose audit, for which a measurement of patient size is required. The proof of principle used measured pixel value and effective linear attenuation coefficient to estimate the thickness of a Solid Water attenuator. The kerma at the detector was estimated using a measurement of pixel value on the image and measured detector calibrations. The initial kerma was estimated using a lookup table of measured output values. The effective linear attenuation coefficient was measured for Solid Water at varying kVp. 11 test images of known and varying thicknesses of Solid Water were acquired at 60, 70 and 81 kVp. Estimates of attenuator thickness were made using the model and the results compared to the known thickness. Estimates of attenuator thickness made using the model differed from the known thickness by 3.8 mm (3.2%) on average, with a range of 0.5-10.8 mm (0.5-9%). A proof of principle is presented for a method of estimating the thickness of an attenuator using a single radiograph of the attenuator. The method has been shown to be accurate using a Solid Water attenuator, with a maximum difference between estimated and known attenuator thickness of 10.8 mm (9%). The method shows promise as a clinical tool for estimating abdominal paediatric patient thickness for paediatric patient dose audit, and is only contingent on the type of data routinely collected by Medical Physics departments.
Advances in knowledge: A computational model has been created that is capable of accurately estimating the thickness of a uniform attenuator using only the radiographic image, the exposure factors with which it was acquired and a priori knowledge of the characteristics of the X-ray unit and detector used for the exposure.
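The core of the thickness estimate is an inversion of exponential attenuation: given the initial kerma K0 and the kerma K at the detector, the thickness follows from t = ln(K0/K)/mu_eff. A minimal sketch, with hypothetical kerma and mu_eff values standing in for the measured output tables and Solid Water calibrations described above:

```python
import math

def estimate_thickness_mm(detector_kerma, initial_kerma, mu_eff_per_mm):
    # Invert the exponential attenuation law K = K0 * exp(-mu_eff * t)
    # to recover the attenuator thickness t = ln(K0 / K) / mu_eff.
    if detector_kerma <= 0 or detector_kerma > initial_kerma:
        raise ValueError("kerma values must satisfy 0 < K_detector <= K0")
    return math.log(initial_kerma / detector_kerma) / mu_eff_per_mm

# Hypothetical numbers: in the method above, K0 comes from a lookup table of
# measured X-ray unit output, K from pixel value via detector calibration,
# and mu_eff from the Solid Water measurements at the chosen kVp.
t_mm = estimate_thickness_mm(detector_kerma=12.0, initial_kerma=480.0,
                             mu_eff_per_mm=0.025)
```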
NASA Astrophysics Data System (ADS)
Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.
2016-11-01
There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process.
The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
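The information-content metrics described can be sketched with the standard optimal-estimation quantities: posterior covariance S_hat = (K' Se^-1 K + Sa^-1)^-1 and averaging kernel A, whose trace gives the degrees of freedom for signal. The Jacobian below is a random stand-in, not a real 3β + 2α forward model:

```python
import numpy as np

# Toy linearized forward model: 5 measurements (3 backscatter + 2 extinction
# channels), 4 state parameters (e.g., radius, number, refractive index parts)
rng = np.random.default_rng(0)
K = rng.normal(size=(5, 4))                     # Jacobian, a random stand-in
S_e = np.diag([0.05**2] * 3 + [0.10**2] * 2)    # measurement error covariance
S_a = np.diag([1.0] * 4)                        # a priori covariance

S_e_inv = np.linalg.inv(S_e)
S_a_inv = np.linalg.inv(S_a)

# Posterior covariance and averaging kernel (Rodgers-style optimal estimation)
S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
A = S_hat @ K.T @ S_e_inv @ K                   # averaging kernel
dofs = np.trace(A)                              # degrees of freedom for signal
```

A trace close to the number of state parameters means the measurements, not the prior, determine the state; off-diagonal structure in A is exactly the cross-talk between parameters discussed above.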
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters on the order of n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The method is illustrated by the example of the three-dimensional (3D) inversion of synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing
2018-01-01
Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that are applicable also in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In particular, the determination of the covariance matrix, which is required for stochastic modeling, is also considered. The main contribution is a new error model of straight lines in closed form for calculating quickly and reliably the covariance matrix dependent on just a few comprehensible and easily obtainable parameters. The model can be applied widely in any case when a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches are shown. PMID:29673205
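A basic orthogonal (total-least-squares) line fit of the kind the paper builds on can be sketched as follows; the crude angle-variance estimate at the end is a standard first-order approximation, not the closed-form covariance model contributed by the paper:

```python
import numpy as np

def fit_line_tls(points):
    """Orthogonal (total least squares) line fit in Hessian normal form n.p = d.

    Returns the unit normal n, offset d, signed orthogonal residuals, and a
    crude first-order variance estimate for the line angle (illustrative only,
    not the paper's closed-form covariance model).
    """
    c = points.mean(axis=0)
    M = (points - c).T @ (points - c)      # 2x2 scatter matrix
    evals, evecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    n = evecs[:, 0]                        # normal = smallest-eigenvalue vector
    d = float(n @ c)
    res = points @ n - d                   # signed orthogonal distances
    sigma2 = res @ res / max(len(points) - 2, 1)
    var_angle = sigma2 / evals[1]          # first-order angle variance
    return n, d, res, var_angle

pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 2.9], [4.0, 4.0]])
n, d, res, var_angle = fit_line_tls(pts)
slope = -n[0] / n[1]                       # slope of the fitted line y = mx + b
```

Because the fit minimizes orthogonal rather than vertical distances, it remains unbiased for steep lines, which matters for range-bearing data where both coordinates are noisy.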
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallick, S.
1999-03-01
In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information of the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
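The GA machinery can be illustrated with a toy real-coded GA that fits model parameters to "observed" data within a priori bounds; the forward model here is an arbitrary nonlinear stand-in, not prestack seismic modeling:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(m):
    # Arbitrary nonlinear forward model standing in for synthetic-data physics
    return np.array([m[0] + m[1], m[0] * m[1], m[0] - m[1] ** 2])

m_true = np.array([1.5, 0.5])
d_obs = forward(m_true)                       # "observed" data

def fitness(m):
    return -np.sum((forward(m) - d_obs) ** 2)  # higher is better (less misfit)

# A priori bounds on the model parameters define the search space
lo, hi = np.array([0.0, 0.0]), np.array([3.0, 2.0])
pop = rng.uniform(lo, hi, size=(60, 2))       # random initial population

for gen in range(80):
    f = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(f)[::-1][:30]]   # truncation selection
    a = parents[rng.integers(0, 30, 60)]
    b = parents[rng.integers(0, 30, 60)]
    w = rng.uniform(size=(60, 1))
    pop = w * a + (1 - w) * b                 # arithmetic crossover
    pop += rng.normal(0.0, 0.05, pop.shape)   # mutation
    pop = np.clip(pop, lo, hi)                # enforce a priori bounds
    pop[0] = parents[0]                       # elitism: keep the current best

best = pop[np.argmax([fitness(m) for m in pop])]
```

In the Bayesian setting described above, the population of sampled models (not just the single best one) is what is histogrammed to approximate the marginal PPD functions.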
Wear and breakage monitoring of cutting tools by an optical method: theory
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Yongqing; Chen, Fangrong; Tian, Zhiren; Wang, Yao
1996-10-01
An essential part of a machining system in an unmanned flexible manufacturing system is the ability to automatically change out tools that are worn or damaged. An optoelectronic method for in situ monitoring of the flank wear and breakage of cutting tools is presented. A flank wear estimation system is implemented in a laboratory environment, and its performance is evaluated through turning experiments. The flank wear model parameters that need to be known a priori are determined through several preliminary experiments, or from data available in the literature. The resulting cutting conditions are typical of those used in finishing cutting operations. Through time- and amplitude-domain analysis of the cutting tool wear and breakage states, it is found that the variance σx² of the original signal and the autocorrelation coefficient ρ(m) can reflect the regularity of change in cutting tool wear and breakage, but these alone are not sufficient owing to the complexity of the wear and breakage processes of cutting tools. Time series analysis and frequency spectrum analysis will therefore be carried out, and will be described in later papers.
Cosmography of f(R)-brane cosmology
NASA Astrophysics Data System (ADS)
Bouhmadi-López, Mariam; Capozziello, Salvatore; Cardone, Vincenzo F.
2010-11-01
Cosmography is a useful tool to constrain cosmological models, in particular, dark energy models. In the case of modified theories of gravity, where the equations of motion are generally quite complicated, cosmography can contribute to select realistic models without imposing arbitrary choices a priori. Indeed, its reliability is based on the assumptions that the universe is homogeneous and isotropic on large scale and luminosity distance can be “tracked” by the derivative series of the scale factor a(t). We apply this approach to induced gravity brane-world models where an f(R) term is present in the brane effective action. The virtue of the model is to self-accelerate the normal and healthy Dvali-Gabadadze-Porrati branch once the f(R) term deviates from the Hilbert-Einstein action. We show that the model, coming from a fundamental theory, is consistent with the ΛCDM scenario at low redshift. We finally estimate the cosmographic parameters fitting the Union2 Type Ia Supernovae data set and the distance priors from baryon acoustic oscillations and then provide constraints on the present day values of f(R) and its second and third derivatives.
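The derivative series referred to above is the standard cosmographic expansion: with the deceleration and jerk parameters defined from derivatives of the scale factor a(t), the luminosity distance in the spatially flat case expands as (in Visser's form; curvature terms are dropped here):

```latex
% Cosmographic parameters from derivatives of a(t) at the present time t_0
q_0 = -\left.\frac{\ddot a\, a}{\dot a^{2}}\right|_{t_0},
\qquad
j_0 = \left.\frac{\dddot a\, a^{2}}{\dot a^{3}}\right|_{t_0}

% Luminosity distance expansion (spatially flat case)
d_L(z) = \frac{c\, z}{H_0}\left[\, 1 + \frac{1-q_0}{2}\, z
 - \frac{1 - q_0 - 3q_0^{2} + j_0}{6}\, z^{2} + \mathcal{O}(z^{3}) \right]
```

Fitting this series to supernova data constrains q_0 and j_0 directly, which is what allows the f(R)-brane model to be tested without choosing a specific dark energy model a priori.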
A geometric approach to identify cavities in particle systems
NASA Astrophysics Data System (ADS)
Voyiatzis, Evangelos; Böhm, Michael C.; Müller-Plathe, Florian
2015-11-01
The implementation of a geometric algorithm to identify cavities in particle systems in an open-source Python program is presented. The algorithm makes use of the Delaunay space tessellation. The present Python software is based on platform-independent tools, leading to a portable program. Its successful execution provides information concerning the accessible volume fraction of the system, the size and shape of the cavities and the group of atoms forming each of them. The program can be easily incorporated into the LAMMPS software. An advantage of the present algorithm is that no a priori assumption on the cavity shape has to be made. As an example, the cavity size and shape distributions in a polyethylene melt system are presented for three spherical probe particles. This paper serves also as an introductory manual to the script. It summarizes the algorithm, its implementation, the required user-defined parameters as well as the format of the input and output files. Additionally, we demonstrate possible applications of our approach and compare its capability with those of well-documented cavity size estimators.
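The geometric idea can be sketched with SciPy's Delaunay tessellation: a tetrahedron is flagged as belonging to a cavity when a probe sphere fits inside its empty circumsphere after allowing for the particle radius. This is a simplified criterion for illustration, not the program's exact algorithm or parameters:

```python
import numpy as np
from scipy.spatial import Delaunay

def circumsphere(tet_pts):
    # Circumcenter c solves 2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2, i = 1..3
    p0 = tet_pts[0]
    A = 2.0 * (tet_pts[1:] - p0)
    b = np.sum(tet_pts[1:] ** 2, axis=1) - np.sum(p0 ** 2)
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(c - p0)

rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 10.0, size=(200, 3))            # toy particle system
# Carve out a spherical void so there is an obvious cavity to find
center = np.array([5.0, 5.0, 5.0])
pts = pts[np.linalg.norm(pts - center, axis=1) > 2.5]

tri = Delaunay(pts)
probe_r, particle_r = 0.8, 0.5                         # illustrative radii
cavity_tets = [s for s in tri.simplices
               if circumsphere(pts[s])[1] - particle_r > probe_r]
```

Merging flagged tetrahedra that share faces then yields the individual cavities, their volumes, and the atoms bounding each of them, as the paper describes.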
NASA Astrophysics Data System (ADS)
Wen, Huanyao; Zhu, Limei
2018-02-01
In this paper, we consider the Cauchy problem for a two-phase model with magnetic field in three dimensions. The global existence and uniqueness of strong solutions as well as time decay estimates in H²(ℝ³) are obtained by introducing a new linearized system with respect to (n^γ − ñ^γ, n − ñ, P − P̃, u, H) for constants ñ ≥ 0 and P̃ > 0, and by establishing some new a priori estimates in Sobolev spaces to obtain the uniform upper bound of (n − ñ, n^γ − ñ^γ) in the H²(ℝ³) norm.
NASA Astrophysics Data System (ADS)
Koenig, Daniel
2018-02-01
Applying a one-step integrated process, i.e. by simultaneously processing all data and determining all satellite orbits involved, a Terrestrial Reference Frame (TRF) consisting of a geometric as well as a dynamic part has been determined at the observation level using the EPOS-OC software of Deutsches GeoForschungsZentrum. The satellite systems involved comprise the Global Positioning System (GPS) as well as the twin GRACE spacecraft. Applying a novel approach, the inherent datum defect has been overcome empirically. In order not to rely on theoretical assumptions, this is done by carrying out the TRF estimation based on simulated observations and using the associated satellite orbits as background truth. The datum defect is identified here as the total of all three translations as well as the rotation about the z-axis of the ground station network, leading to a rank-deficient estimation problem. To rectify this singularity, datum constraints comprising no-net translation (NNT) conditions in x, y, and z as well as a no-net rotation (NNR) condition about the z-axis are imposed. Thus minimally constrained, the TRF solution covers a time span of roughly a year with daily resolution. For the geometric part the focus is put on Helmert transformations between the a priori and the estimated sets of ground station positions, and the dynamic part is represented by gravity field coefficients of degree one and two. The results of a reference solution reveal the TRF parameters to be estimated reliably with high precision. Moreover, carrying out a comparable two-step approach using the same data and models leads to parameters and observational residuals of worse quality. A validation w.r.t. external sources shows the dynamic origin to coincide at a level of 5 mm or better in x and y, and mostly better than 15 mm in z. Comparing the derived GPS orbits to IGS final orbits as well as analysing the SLR residuals for the GRACE satellites reveals an orbit quality at the few-cm level.
Additional TRF test solutions demonstrate that K-Band Range-Rate observations between the two GRACE spacecraft are crucial for accurately estimating the dynamic frame's orientation, and reveal the importance of the NNT and NNR conditions imposed for estimating the components of the dynamic geocenter.
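The Helmert comparison step mentioned above can be sketched as a linearized 7-parameter (three translations, one scale, three small rotation angles) least-squares fit between two station coordinate sets; the station network and parameter values below are synthetic, and this is not the EPOS-OC procedure:

```python
import numpy as np

def helmert_rows(x):
    # Observation equations for one station under the linearized (small-angle)
    # 7-parameter Helmert model: dx = T + s*x + R(rx, ry, rz) x
    # Parameter order: [tx, ty, tz, s, rx, ry, rz]
    X, Y, Z = x
    return np.array([
        [1.0, 0.0, 0.0, X,  0.0,   Z,  -Y],
        [0.0, 1.0, 0.0, Y,  -Z,  0.0,   X],
        [0.0, 0.0, 1.0, Z,   Y,   -X, 0.0],
    ])

rng = np.random.default_rng(4)
stations = rng.uniform(-6.4e6, 6.4e6, size=(10, 3))   # synthetic network [m]

# Synthetic "truth": cm-level translations, ppb-level scale, ~mas rotations
p_true = np.array([0.02, -0.01, 0.015, 5e-9, 2e-8, -1e-8, 3e-8])
A = np.vstack([helmert_rows(x) for x in stations])
dx = A @ p_true                                       # simulated coordinate differences

p_est, *_ = np.linalg.lstsq(A, dx, rcond=None)
```

With real data, dx would be the difference between the a priori and estimated station positions, and the residuals of this fit quantify network distortion beyond a similarity transformation.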
Weighted Lq-estimates for stationary Stokes system with partially BMO coefficients
NASA Astrophysics Data System (ADS)
Dong, Hongjie; Kim, Doyoon
2018-04-01
We prove the unique solvability of solutions in Sobolev spaces to the stationary Stokes system on a bounded Reifenberg flat domain when the coefficients are partially BMO functions, i.e., locally they are merely measurable in one direction and have small mean oscillations in the other directions. Using this result, we establish the unique solvability in Muckenhoupt type weighted Sobolev spaces for the system with partially BMO coefficients on a Reifenberg flat domain. We also present weighted a priori Lq-estimates for the system when the domain is the whole Euclidean space or a half space.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. 
Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
Exponential Modelling for Mutual-Cohering of Subband Radar Data
NASA Astrophysics Data System (ADS)
Siart, U.; Tejero, S.; Detlefsen, J.
2005-05-01
Increasing resolution and accuracy is an important issue in almost any type of radar sensor application. However, both resolution and accuracy are strongly related to the available signal bandwidth and energy that can be used. Nowadays, several sensors operating in different frequency bands often become available on a sensor platform. It is an attractive goal to exploit the potential of advanced signal modelling and optimization procedures by making proper use of information stemming from different frequency bands at the RF signal level. An important prerequisite for optimal use of signal energy is coherence between all contributing sensors. Coherent multi-sensor platforms are very expensive and are thus not available in general. This paper presents an approach for accurately estimating object radar responses using subband measurements at different RF frequencies. An exponential model approach makes it possible to compensate for the lack of mutual coherence between independently operating sensors. Mutual coherence is recovered from the a-priori information that both sensors have common scattering centers in view. Minimizing the total squared deviation between measured data and a full-range exponential signal model leads to more accurate pole angles and pole magnitudes compared to single-band optimization. The model parameters (range and magnitude of point scatterers) after this full-range optimization process are also more accurate than the parameters obtained from a commonly used super-resolution procedure (root-MUSIC) applied to the non-coherent subband data.
Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan
2015-01-01
High field, preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the highly novel Magnetic Resonance Fingerprinting (MRF) methodology that has been previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694
Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A; Vincent, Jason A; Dell, Katherine M; Drumm, Mitchell L; Brady-Kalnay, Susann M; Griswold, Mark A; Flask, Chris A; Lu, Lan
2015-03-01
High-field preclinical MRI scanners are now commonly used to quantitatively assess disease status and the efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical 7.0-T MRI implementation of the highly novel MR fingerprinting (MRF) methodology which has been described previously for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a fast imaging with steady-state free precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 min. This initial high-field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for the quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. Copyright © 2015 John Wiley & Sons, Ltd.
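The dictionary-matching core of MRF can be sketched in a few lines: precompute normalized signal evolutions over a (T1, T2) grid and match an acquired profile by maximum inner product. The signal model below is a deliberately crude stand-in (not a Bloch/FISP simulation), purely to show the matching step:

```python
import numpy as np

def toy_fingerprint(T1, T2, flip_deg, TR=10.0):
    # Crude stand-in for a signal evolution (NOT a Bloch/FISP simulation):
    # per-frame magnitude shaped by T1 recovery and T2 decay [times in ms]
    t = np.arange(len(flip_deg)) * TR
    return np.sin(np.deg2rad(flip_deg)) * (1 - np.exp(-t / T1)) * np.exp(-t / T2)

# A priori variation of the acquisition: flip angle varies frame to frame
flips = 10 + 60 * np.abs(np.sin(np.arange(300) / 30.0))

# Dictionary over a (T1, T2) grid, rows normalized for inner-product matching
grid = [(T1, T2) for T1 in (300, 600, 900, 1200) for T2 in (40, 80, 120)]
D = np.array([toy_fingerprint(T1, T2, flips) for T1, T2 in grid])
D /= np.linalg.norm(D, axis=1, keepdims=True)

measured = toy_fingerprint(900, 80, flips)        # "acquired" voxel profile
best = int(np.argmax(D @ (measured / np.linalg.norm(measured))))
T1_hat, T2_hat = grid[best]
```

In a real MRF reconstruction the dictionary entries come from Bloch simulation of the actual pulse sequence, and the match is performed per voxel across the image series; the inner-product matching itself is what confers the robustness to noise-like artifacts noted above.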
NASA Astrophysics Data System (ADS)
Biondi, Gabriele; Mauro, Stefano; Pastorelli, Stefano; Sorli, Massimo
2018-05-01
One of the key functionalities required by an Active Debris Removal mission is the assessment of the target's kinematics and inertial properties. Passive sensors, such as stereo cameras, are often included in the onboard instrumentation of a chaser spacecraft for capturing sequential photographs and for tracking features of the target surface. Many methods, based on Kalman filtering, are available for the estimation of the target's state from feature positions; however, to guarantee filter convergence, they typically require continuity of measurements and the capability of tracking a fixed set of pre-defined features of the object. These requirements clash with the actual tracking conditions: failures in feature detection often occur, and the assumption of having some a-priori knowledge about the shape of the target could be restrictive in certain cases. The aim of the presented work is to propose a fault-tolerant alternative method for estimating the angular velocity and the relative magnitudes of the principal moments of inertia of the target. Raw data regarding the positions of the tracked features are processed to evaluate corrupted values of a 3-dimensional parameter which entirely describes the finite screw motion of the debris and which is invariant to the particular set of features of the object considered. Missing values of the parameter are completely restored by exploiting the typical periodicity of the rotational motion of an uncontrolled satellite: compressed sensing techniques, typically adopted for recovering images or for prognostic applications, are herein used in a completely original fashion for retrieving a kinematic signal that appears sparse in the frequency domain. Due to this invariance with respect to the features, no assumptions are needed about the target's shape or continuity of the tracking.
The obtained signal is useful for the indirect evaluation of an attitude signal that feeds an unscented Kalman filter for the estimation of the global rotational state of the target. The results of the computer simulations showed good robustness of the method and its potential applicability for general motion conditions of the target.
NASA Astrophysics Data System (ADS)
Wu, X.; Heflin, M. B.; Schotman, H.; Vermeersen, B. L.; Dong, D.; Gross, R. S.; Ivins, E. R.; Moore, A. W.; Owen, S. E.
2009-12-01
Separating geodetic signatures of present-day surface mass trend and Glacial Isostatic Adjustment (GIA) requires multiple data types of different physical characteristics. We take a kinematic approach to the global simultaneous estimation problem. Three sets of global spherical harmonic coefficients from degree 1 to 60 of the present-day surface mass trend, vertical and horizontal GIA-induced surface velocity fields, as well as rotation vectors of 15 major tectonic plates are solved for. The estimation is carried out using the GRACE geoid trend, 3-dimensional velocities measured at 664 SLR/VLBI/GPS sites, and the data-assimilated JPL ECCO ocean model. The ICE-5G/IJ05 (VM2) predictions are used as the a priori GIA mean model. An a priori covariance matrix is constructed in the spherical harmonic domain for the GIA model by propagating the covariance matrices of random and geographically correlated ice thickness errors and upper/lower mantle viscosity errors so that the resulting magnitude and geographic pattern of the geoid uncertainties roughly reflect the difference between two recent GIA models. Unprecedented high-precision results are achieved. For example, geocenter velocities due to present-day surface mass trend and due to GIA are both determined to uncertainties of better than 0.1 mm/yr without using direct geodetic geocenter information. Information content of the data sets, future improvements, and benefits from new data will also be explored in the global inverse framework.
Estimating Small-Body Gravity Field from Shape Model and Navigation Data
NASA Technical Reports Server (NTRS)
Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam
2008-01-01
This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction, and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
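The finite-element modeling idea can be sketched with the simplest element treatment: approximate each cube by a point mass at its center and sum the G m r/|r|³ contributions at an external field point. Far from the body this recovers the monopole field; the geometry and density below are arbitrary illustrative values:

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def gravity_from_cubes(centers, masses, field_point):
    # Point-mass approximation for each cubic finite element:
    # acceleration = sum of G * m * r / |r|^3 (valid when |r| >> cube size)
    r = centers - field_point
    d = np.linalg.norm(r, axis=1)
    return G * np.sum((masses / d**3)[:, None] * r, axis=0)

# Fill a 100 m sphere with 10 m cubes of uniform density 2000 kg/m^3
g = np.arange(-95.0, 100.0, 10.0)
xx, yy, zz = np.meshgrid(g, g, g, indexing="ij")
centers = np.stack([xx.ravel(), yy.ravel(), zz.ravel()], axis=1)
centers = centers[np.linalg.norm(centers, axis=1) < 100.0]
masses = np.full(len(centers), 2000.0 * 10.0**3)

field_point = np.array([300.0, 0.0, 0.0])
a = gravity_from_cubes(centers, masses, field_point)
```

In the inverse problem, each element's density becomes an unknown, and the same linear relation between element masses and external accelerations forms the design matrix of the estimation.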
A-Priori Rupture Models for Northern California Type-A Faults
Wills, Chris J.; Weldon, Ray J.; Field, Edward H.
2008-01-01
This appendix describes how a-priori rupture models were developed for the northern California Type-A faults. As described in the main body of this report, and in Appendix G, 'a-priori' models represent an initial estimate of the rate of single and multi-segment surface ruptures on each fault. Whether or not a given model is moment balanced (i.e., satisfies section slip-rate data) depends on assumptions made regarding the average slip on each segment in each rupture (which in turn depends on the chosen magnitude-area relationship). Therefore, for a given set of assumptions, or branch on the logic tree, the methodology of the present Working Group (WGCEP-2007) is to find a final model that is as close as possible to the a-priori model, in the least squares sense, but that also satisfies slip rate and perhaps other data. This is analogous to the WGCEP-2002 approach of effectively voting on the relative rate of each possible rupture, and then finding the closest moment-balanced model (under a more limiting set of assumptions than adopted by the present WGCEP, as described in detail in Appendix G). The 2002 Working Group Report (WGCEP, 2003; referred to here as WGCEP-2002) created segmented earthquake rupture forecast models for all faults in the region, including some that had been designated as Type B faults in the NSHMP, 1996, and one that had not previously been considered. The 2002 National Seismic Hazard Maps used the values from WGCEP-2002 for all the faults in the region, essentially treating all the listed faults as Type A faults. As discussed in Appendix A, the current WGCEP found that there are a number of faults with little or no data on slip-per-event, or dates of previous earthquakes. As a result, the WGCEP recommends that faults with minimal available earthquake recurrence data (the Greenville, Mount Diablo, San Gregorio, Monte Vista-Shannon and Concord-Green Valley) be modeled as Type B faults to be consistent with similarly poorly-known faults statewide.
As a result, the modified segmented models discussed here only concern the San Andreas, Hayward-Rodgers Creek, and Calaveras faults. Given the extensive level of effort by the recent Bay-Area WGCEP-2002, our approach has been to adopt their final average models as our preferred a-priori models. We have modified the WGCEP-2002 models where necessary to match data that were not available or not used by that WGCEP, and where the models needed by WGCEP-2007 for a uniform statewide model require different assumptions and/or logic-tree branch weights. In these cases we have made what are usually slight modifications to the WGCEP-2002 model. This appendix presents the minor changes needed to accommodate updated information and model construction. We do not attempt to reproduce here the extensive documentation of data, model parameters and earthquake probabilities in the WGCEP-2002 report.
Yuen, Nicholas; O'Shaughnessy, Pauline; Thomson, Andrew
2017-12-01
Indications for endoscopic retrograde cholangiopancreatography (ERCP) have received little attention, especially in scientific or objective terms. To review the prevailing ERCP indications in the literature, and to propose and evaluate a new ERCP indication system, which relies on more objective pre-procedure parameters. An analysis was conducted on 1758 consecutive ERCP procedures, in which contemporaneous use was made of an a-priori indication system. Indications were based on the objective pre-procedure parameters and divided into primary [cholangitis, clinical evidence of biliary leak, acute (biliary) pancreatitis, abnormal intraoperative cholangiogram (IOC), or change/removal of stent for benign/malignant disease] and secondary [combination of two or three of: pain attributable to biliary disease ('P'), imaging evidence of biliary disease ('I'), and abnormal liver function tests (LFTs) ('L')]. A secondary indication was only used if a primary indication was not present. The relationship between this newly developed classification system and ERCP findings and adverse events was examined. The indications of cholangitis and positive IOC were predictive of choledocholithiasis at ERCP (101/154 and 74/141 procedures, respectively). With respect to secondary indications, only if all three of 'P', 'I', and 'L' were present there was a statistically significant association with choledocholithiasis (χ²(1) = 35.3, p < .001). Adverse events were associated with an unusual indication leading to greater risk of unplanned hospitalization (χ²(1) = 17.0, p < .001). An a-priori-based indication system for ERCP, which relies on pre-ERCP objective parameters, provides a more useful and scientific classification system than is available currently.
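The reported association tests are standard chi-square tests of independence on contingency tables; a minimal sketch with a hypothetical 2x2 table (not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table (illustrative counts, not the study's data):
# rows = all three of 'P', 'I', 'L' present vs. not,
# columns = choledocholithiasis found at ERCP vs. not
table = np.array([[80, 40],
                  [60, 200]])

# correction=False gives the uncorrected Pearson chi-square statistic
chi2, p, dof, expected = chi2_contingency(table, correction=False)
```

For a 2x2 table dof = 1, and a small p-value indicates that the indication combination and the ERCP finding are associated rather than independent.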
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT.
BELM: Bayesian extreme learning machine.
Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J
2011-03-01
The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need to apply computationally intensive methods, e.g., the bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM on several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, reduced probability of model overfitting, and use of a priori knowledge.
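A minimal sketch of the Bayesian ELM idea, assuming the usual construction: a fixed random hidden layer followed by a closed-form Gaussian posterior over the output weights, whose covariance yields CIs with no bootstrap. The hyperparameters alpha and beta and the toy regression data are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, n_hidden=50):
    """Random hidden layer of an ELM: fixed random weights, tanh activation."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return np.tanh(X @ W + b)

def bayesian_output_weights(H, y, alpha=1.0, beta=100.0):
    """Closed-form Gaussian posterior over the output weights.
    alpha: prior precision (where a priori knowledge enters); beta: noise precision."""
    A = alpha * np.eye(H.shape[1]) + beta * H.T @ H   # posterior precision
    S = np.linalg.inv(A)                              # posterior covariance
    m = beta * S @ H.T @ y                            # posterior mean
    return m, S

# Toy regression: y = sin(x) + noise
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
H = elm_features(X)
m, S = bayesian_output_weights(H, y)
pred = H @ m
# Predictive variance per point -> confidence intervals "for free"
pred_var = 1.0 / 100.0 + np.einsum('ij,jk,ik->i', H, S, H)
```

The prior precision acts as a regularizer, which is one way to read the claimed reduction in overfitting relative to the classical least-squares ELM solution.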
CLAES Product Improvement by use of GSFC Data Assimilation System
NASA Technical Reports Server (NTRS)
Kumer, J. B.; Douglass, Anne (Technical Monitor)
2001-01-01
Recent developments in chemistry transport models (CTM) and in data assimilation systems (DAS) indicate impressive predictive capability for the movement of air parcels and the chemistry that goes on within them. This project was aimed at exploring the use of this capability to achieve improved retrieval of geophysical parameters from remote sensing data. The specific goal was to improve retrieval of the CLAES CH4 data obtained during the active north high-latitude dynamics event of 18 to 25 February 1992. The model capabilities would be used: (1) rather than climatology, to improve on the first guess and the a priori fields, and (2) to provide horizontal gradients to include in the retrieval forward model. The retrieval would be implemented with the first forward DAS prediction. The results would feed back to the DAS, and a second DAS prediction for the first guess, a priori fields and gradients would feed to the retrieval. The process would repeat to convergence and then proceed to the next day.
Systematic modelling and design evaluation of unperturbed tumour dynamics in xenografts.
Parra Guillen, Zinnia P Patricia; Mangas Sanjuan, Victor; Garcia-Cremades, Maria; Troconiz, Inaki F; Mo, Gary; Pitou, Celine; Iversen, Philip W; Wallin, Johan E
2018-04-24
Xenograft mice are widely used to evaluate the efficacy of oncological drugs during the preclinical phases of drug discovery and development. Mathematical models provide a useful tool to quantitatively characterise tumour growth dynamics and also to optimise upcoming experiments. To the best of our knowledge, this is the first report in which unperturbed growth of a large set of tumour cell lines (n=28) has been systematically analysed using the model proposed by Simeoni in the context of non-linear mixed effects (NLME). Exponential growth was identified as the governing mechanism in the majority of the cell lines, with constant rate values ranging from 0.0204 to 0.203 day⁻¹. No common patterns could be observed across tumour types, highlighting the importance of combining information from different cell lines when evaluating drug activity. Overall, typical model parameters were precisely estimated using designs where tumour size measurements were taken every two days. Moreover, reducing the number of measurements to twice per week, or even once per week for cell lines with low growth rates, showed little impact on parameter precision. However, in order to accurately characterise parameter variability (i.e. relative standard errors below 50%), a sample size of at least 50 mice is needed. This work illustrates the feasibility of systematically applying NLME models to characterise tumour growth in drug discovery and development, and constitutes a valuable source of data to optimise experimental designs by providing an a priori sampling window and minimising the number of samples required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, William A., E-mail: wadawson@ucdavis.edu
2013-08-01
Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist, but the assumptions of these methods invalidate their results near the collision time, and error propagation of the highly correlated merger parameters is unfeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
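As a rough illustration of the Monte Carlo propagation idea (not the paper's full dynamical model, which accounts for the mass-dependent deceleration of the subclusters), one can sample the observed inputs from their measurement distributions and read percentiles of any derived quantity directly from the samples; every number below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical observed inputs with Gaussian measurement uncertainty:
# projected subcluster separation [Mpc] and relative collision speed [km/s].
d = rng.normal(0.7, 0.1, N)          # Mpc
v = rng.normal(3000.0, 300.0, N)     # km/s

# Naive time-since-collision proxy t = d / v (constant-velocity assumption),
# converted to Gyr; the real method replaces this with the merger dynamics.
MPC_KM = 3.086e19                    # km per Mpc
GYR_S = 3.156e16                     # s per Gyr
t = d * MPC_KM / v / GYR_S

lo, med, hi = np.percentile(t, [16, 50, 84])
print(f"t = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f}) Gyr")
```

Because the derived quantity is computed sample-by-sample, correlations among the merger parameters propagate automatically, which is the point the abstract makes against simple analytic error propagation.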
DAISY: a new software tool to test global identifiability of biological and physiological systems
Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D’Angiò, Leontina
2009-01-01
A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining whether the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with a completely automated software tool, requiring minimal prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/. PMID:17707944
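The notion of a priori structural identifiability can be illustrated with a deliberately non-identifiable toy model (not one of DAISY's examples, and using plain symbolic algebra rather than DAISY's differential algebra machinery): y(t) = (F·D/V)·exp(-k·t), a one-compartment model with known dose D, where the output determines only the ratio F/V and the rate k, never F and V separately:

```python
import sympy as sp

F, V, k, D = sp.symbols('F V k D', positive=True)
c0h, kh = sp.symbols('c0h kh', positive=True)  # "observed" exhaustive summary

# The output y(t) = (F*D/V)*exp(-k*t) determines exactly the pair {F*D/V, k}.
# Trying to recover (F, V, k) from these observable quantities leaves one
# degree of freedom: the model is structurally non-identifiable in (F, V).
sols = sp.solve([sp.Eq(F * D / V, c0h), sp.Eq(k, kh)], [F, V, k], dict=True)
print(sols)
```

The solver can pin down k and the ratio, but one of F and V remains a free symbol in the solution, which is precisely the failure of a priori global identifiability that a tool like DAISY detects automatically for much larger rational-equation models.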
Aragón-Noriega, Eugenio Alberto
2013-09-01
Growth models of marine animals, for fisheries and/or aquaculture purposes, are usually based on the popular von Bertalanffy model. This tool is mostly used because its parameters feed other fisheries models, such as yield per recruit; nevertheless, there are alternatives (such as the Gompertz, Logistic, and Schnute models), not yet widely used by fishery scientists, that may prove useful depending on the species studied. The penshell Atrina maura has been studied for fisheries and aquaculture purposes, but its individual growth had not been studied before. The aim of this study was to model the absolute growth of the penshell A. maura using length-age data. Five models were assessed to obtain growth parameters: von Bertalanffy, Gompertz, Logistic, Schnute case 1, and Schnute-Richards. The criteria used to select the best model were the Akaike information criterion (AIC), the residual sum of squares, and adjusted R². To obtain the average asymptotic length, the multi-model inference approach was used. According to the AIC, the Gompertz model best described the absolute growth of A. maura. Following the multi-model inference approach, the average asymptotic shell length was 218.9 mm (CI 212.3-225.5). I conclude that the multi-model approach with the Akaike information criterion is the most robust method for growth parameter estimation of A. maura, and that the von Bertalanffy growth model should not be selected a priori as the true model for absolute growth in bivalve mollusks such as the species studied here.
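A sketch of the model-selection step described above, using synthetic length-age data (the real A. maura measurements are not reproduced here) and the least-squares form of AIC; only two of the five candidate models are shown:

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, Linf, K, t0):
    return Linf * (1.0 - np.exp(-K * (t - t0)))

def gompertz(t, Linf, K, t0):
    return Linf * np.exp(-np.exp(-K * (t - t0)))

def aic(rss, n, k):
    # Least-squares AIC: n*ln(RSS/n) + 2*(k+1); +1 for the error variance.
    return n * np.log(rss / n) + 2 * (k + 1)

# Synthetic length-age data (illustrative, not the A. maura measurements);
# generated from a Gompertz curve so that the sigmoidal model should win.
rng = np.random.default_rng(2)
age = np.linspace(0.5, 8, 40)
length = gompertz(age, 220.0, 0.6, 1.5) + rng.normal(0, 5, age.size)

results = {}
for name, f in [("von Bertalanffy", von_bertalanffy), ("Gompertz", gompertz)]:
    p, _ = curve_fit(f, age, length, p0=[200.0, 0.5, 0.0], maxfev=10000)
    rss = np.sum((length - f(age, *p)) ** 2)
    results[name] = aic(rss, age.size, 3)

best = min(results, key=results.get)
print(results, "->", best)
```

Multi-model inference then weights each candidate's asymptotic-length estimate by its Akaike weight instead of committing to a single "true" model.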
A New Global Geodetic Strain Rate Model
NASA Astrophysics Data System (ADS)
Kreemer, C. W.; Klein, E. C.; Blewitt, G.; Shen, Z.; Wang, M.; Chamot-Rooke, N. R.; Rabaute, A.
2012-12-01
As part of the Global Earthquake Model (GEM) effort to improve global seismic hazard models, we present a new global geodetic strain rate model. This model (GSRM v. 2) is a vast improvement on the previous model from 2004 (v. 1.2). The model is still based on a finite-element type approach and has deforming cells in between the assumed rigid plates. While v. 1.2 contained ~25,000 deforming cells of 0.6° by 0.5° dimension, the new model contains >136,000 cells of 0.25° by 0.2° dimension. We redefined the geometries of the deforming zones based on the definitions of Bird (2003) and Chamot-Rooke and Rabaute (2006). We made some adjustments to the grid geometry at places where seismicity and/or GPS velocities suggested the presence of deforming areas where those previous studies did not. As a result, some plates/blocks identified by Bird (2003) are assumed here to deform, and the total number of plates and blocks in GSRM v. 2 is 38 (including the Bering block, which Bird (2003) did not consider). GSRM v. 1.2 was based on ~5,200 GPS velocities, taken from 86 studies. The new model is based on ~17,000 GPS velocities, taken from 170 studies. The GPS velocity field consists of 1) ~4,900 velocities derived by us for GPS stations with publicly available RINEX data and >3.5 years of data, 2) ~1,200 velocities for China from a new analysis of all CMONOC data, and 3) velocities published in the literature or made otherwise available to us. All studies were combined into the same reference frame by a 6-parameter transformation using velocities at collocated stations. Because the goal of the project is to model the interseismic strain rate field, we model co-seismic jumps while estimating velocities, ignore periods of post-seismic deformation, and exclude time series that reflect magmatic and anthropogenic activity. 
GPS velocities were used to estimate angular velocities for most of the 38 rigid plates and blocks (the rest being taken from the literature), and these were used as boundary conditions for the strain rate calculations. For the strain rate calculations we used the method of Haines and Holt. In order to fit the data equally well in slowly and rapidly deforming areas, we first calculated a very smooth model by setting the a priori variances of the strain rate components very low. We then used this model as a proxy for the a priori standard deviations of the final model. To add further constraints to the model (to make it more stable), we manipulated the a priori covariance matrix to reflect the expected style of deformation derived from (an interpolation of) shallow earthquake focal mechanisms. We will show examples of the strain rate and velocity field results. We will also highlight how and where the results can be viewed and accessed through a dedicated web portal.
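The 6-parameter frame tie mentioned above can be sketched as a linear least-squares problem: the velocity difference at each collocated station is modelled as a translation rate plus a rotation rate crossed with the station position. In this illustrative version (synthetic velocities, and the rotation expressed as a velocity at unit radius purely for numerical conditioning):

```python
import numpy as np

def design_block(u):
    """3x6 block for one collocated station: dv = t + w x u, with u the unit
    position vector and w the rotation rate expressed as a velocity at unit radius."""
    x, y, z = u
    Wx = np.array([[0.0,  z, -y],
                   [-z, 0.0,  x],
                   [ y,  -x, 0.0]])   # (w x u) written as a matrix acting on w
    return np.hstack([np.eye(3), Wx])

rng = np.random.default_rng(3)
t_true = np.array([1.0, -2.0, 0.5])     # translation rate (e.g. mm/yr)
w_true = np.array([3.0, -1.0, 2.0])     # rotation rate, same velocity units

U = rng.standard_normal((20, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)       # unit station positions

A = np.vstack([design_block(u) for u in U])
dv = np.concatenate([t_true + np.cross(w_true, u) for u in U])

params, *_ = np.linalg.lstsq(A, dv, rcond=None)     # the six tie parameters
```

With noise-free synthetic input the six parameters are recovered exactly; with real collocated velocities the residuals of this fit indicate how consistently the two solutions can be tied.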
Distributed Compression in Camera Sensor Networks
2006-02-13
complicated in this context. This effort will make use of the correlation structure of the data given by the plenoptic function in the case of multi-camera systems. In many cases the structure of the plenoptic function can be estimated without requiring inter-sensor communications, but by using some a priori global geometrical information. Once the structure of the plenoptic function has been predicted, it is possible to develop specific distributed
Filtering, Coding, and Compression with Malvar Wavelets
1993-12-01
speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus... and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for... recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates
NASA Astrophysics Data System (ADS)
Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward
2016-01-01
We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
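Since the error budget is dominated by the a priori bathymetry and roughness uncertainty, a simple Monte Carlo pass through Manning's equation shows how those uncertainties map into discharge spread; all input values below are illustrative, not the Sacramento or Garonne inputs:

```python
import numpy as np

def manning_discharge(n, A, R, S):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return A * R ** (2.0 / 3.0) * np.sqrt(S) / n

# Propagate a priori uncertainty in roughness n and bathymetry (flow area A)
# by Monte Carlo; hydraulic radius and slope are held fixed for clarity.
rng = np.random.default_rng(4)
N = 50_000
n = rng.normal(0.03, 0.005, N)     # Manning roughness
A = rng.normal(500.0, 50.0, N)     # flow area, m^2
R = 4.0                            # hydraulic radius, m
S = 1e-4                           # water-surface slope

Q = manning_discharge(n, A, R, S)
print(f"Q = {Q.mean():.0f} m^3/s, relative spread = {Q.std() / Q.mean():.1%}")
```

Even with exact height, width, and slope, the roughness and area uncertainties alone produce a discharge spread of the same order as the ~16-17% uncertainties quoted in the abstract, which is the sense in which they dominate the budget.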
Heart Rate Detection Using Microsoft Kinect: Validation and Comparison to Wearable Devices.
Gambi, Ennio; Agostinelli, Angela; Belli, Alberto; Burattini, Laura; Cippitelli, Enea; Fioretti, Sandro; Pierleoni, Paola; Ricciuti, Manola; Sbrollini, Agnese; Spinsante, Susanna
2017-08-02
Contactless detection is one of the new frontiers of technological innovation in the field of healthcare, enabling unobtrusive measurements of biomedical parameters. Compared to conventional methods for Heart Rate (HR) detection that employ expensive and/or uncomfortable devices, such as the Electrocardiograph (ECG) or pulse oximeter, contactless HR detection offers fast and continuous monitoring of heart activities and provides support for clinical analysis without the need for the user to wear a device. This paper presents a validation study for a contactless HR estimation method exploiting RGB (Red, Green, Blue) data from a Microsoft Kinect v2 device. This method, based on Eulerian Video Magnification (EVM), Photoplethysmography (PPG) and Videoplethysmography (VPG), can achieve performance comparable to classical approaches exploiting wearable systems, under specific test conditions. The output given by a Holter, which represents the gold-standard device used in the test for ECG extraction, is considered as the ground truth, while a comparison with a commercial smartwatch is also included. The validation process is conducted with two modalities that differ in the availability of a priori knowledge about the subjects' normal HR. The two test modalities provide different results. In particular, the HR estimation differs from the ground truth by 2% when knowledge about the subject's lifestyle and his/her HR is considered, and by 3.4% if no information about the person is taken into account.
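The spectral step underlying this kind of RGB-based HR estimation can be sketched as follows: band-limit the spectrum of the (magnified) color trace to plausible heart rates and take the dominant peak. The 30 Hz sampling matches a typical Kinect RGB frame rate, but the trace itself is synthetic, not EVM output:

```python
import numpy as np

fs = 30.0                               # RGB frame rate, Hz
t = np.arange(0, 30, 1.0 / fs)          # 30 s of samples
hr_hz = 72.0 / 60.0                     # simulated heart rate: 72 bpm

# Synthetic green-channel trace: weak pulsatile component + drift + noise
rng = np.random.default_rng(5)
trace = (0.05 * np.sin(2 * np.pi * hr_hz * t)
         + 0.5 * np.sin(2 * np.pi * 0.05 * t)      # slow illumination drift
         + 0.02 * rng.standard_normal(t.size))

# Estimate HR: restrict the spectrum to the plausible band 0.7-3 Hz (42-180 bpm)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spec = np.abs(np.fft.rfft(trace - trace.mean()))
band = (freqs >= 0.7) & (freqs <= 3.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spec[band])]
print(f"estimated HR = {hr_bpm:.0f} bpm")
```

A priori knowledge of the subject's normal HR, as used in the first test modality, amounts to narrowing that search band around the expected rate.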
GPS-Based Reduced Dynamic Orbit Determination Using Accelerometer Data
NASA Technical Reports Server (NTRS)
VanHelleputte, Tom; Visser, Pieter
2007-01-01
Currently two gravity field satellite missions, CHAMP and GRACE, are equipped with high-sensitivity electrostatic accelerometers, measuring the non-conservative forces acting on the spacecraft in three orthogonal directions. During gravity field recovery these measurements help to separate gravitational and non-gravitational contributions in the observed orbit perturbations. For precise orbit determination purposes both missions have a dual-frequency GPS receiver on board. The reduced dynamic technique combines the dense and accurate GPS observations with physical models of the forces acting on the spacecraft, complemented by empirical accelerations, which are stochastic parameters adjusted in the orbit determination process. When the spacecraft carries an accelerometer, these measured accelerations can be used to replace the models of the non-conservative forces, such as air drag and solar radiation pressure. This approach is implemented in a batch least-squares estimator of the GPS High Precision Orbit Determination Software Tools (GHOST), developed at DLR/GSOC and DEOS, and is extensively tested with data from the CHAMP and GRACE satellites. As accelerometer observations can typically be affected by an unknown scale factor and bias in each measurement direction, they require calibration during processing. Therefore the estimated state vector is augmented with six parameters: a scale and a bias factor for each of the three axes. In order to converge efficiently to a good solution, reasonable a priori values for the bias factors are necessary. These are calculated by combining the mean value of the accelerometer observations with the mean value of the non-conservative force models and empirical accelerations, estimated when using these models. When replacing the non-conservative force models with accelerometer observations and still estimating empirical accelerations, good orbit precision is achieved. 
100 days of GRACE B data processing results in a mean orbit fit of a few centimeters with respect to high-quality JPL reference orbits. This shows slightly better consistency compared to the case when force models are used. A purely dynamic orbit, without estimating empirical accelerations and thus adjusting only six state parameters and the bias and scale factors, gives an orbit fit for the GRACE B test case below the decimeter level. The in-orbit calibrated accelerometer observations can be used to validate the modelled accelerations and estimated empirical accelerations computed with the GHOST tools. In the along-track direction they show the best resemblance, with a mean correlation coefficient of 93% for the same period. In the radial and normal directions the correlation is smaller. During days of high solar activity the benefit of using accelerometer observations is clearly visible. The observations during these days show fluctuations which the modelled and empirical accelerations cannot follow.
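The a priori bias computation described above (mean accelerometer reading versus mean modelled non-conservative plus empirical acceleration) can be sketched per axis as follows; the bias magnitudes and noise levels are synthetic, not CHAMP/GRACE values:

```python
import numpy as np

def a_priori_bias(acc_measured, acc_modelled):
    """A priori accelerometer bias per axis: mean measured acceleration minus the
    mean of the modelled non-conservative (plus empirical) accelerations."""
    return np.mean(acc_measured, axis=0) - np.mean(acc_modelled, axis=0)

# Illustrative synthetic data: measured = model + bias + noise
rng = np.random.default_rng(6)
true_bias = np.array([1e-6, -5e-7, 2e-7])                  # m/s^2, per axis
model = rng.normal(0.0, 1e-7, (1000, 3))                   # modelled accelerations
measured = model + true_bias + rng.normal(0.0, 1e-8, (1000, 3))

bias0 = a_priori_bias(measured, model)
print(bias0)   # a reasonable starting value for the batch estimator
```

Such a mean-based value is only a starting point; the batch least-squares estimator then refines the bias and scale factors together with the orbital state.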
Anomalous tidal loading signals in South-West England and Brittany
NASA Astrophysics Data System (ADS)
Keshin, M.; Penna, N. T.; Clarke, P. J.; Bos, M. S.; Baker, T. F.
2010-05-01
The tidal deformation of the Earth, including ocean tide loading (OTL), sheds light on the Earth's internal structure. Uncertainties in the knowledge of this deformation may be a source of both direct and propagated periodic errors in GPS geodesy. The increasing number of global GPS stations with long histories of observations, as well as recent developments in precise GPS geodesy such as the availability of reprocessed satellite orbits, enables further study of these geophysical and geodetic phenomena. There are more than 10 worldwide regions where OTL displacement amplitudes exceed 25mm. In our work we considered one such region covering South-West England and stretching southward along the coasts of France, Spain and Portugal. Estimates of three-dimensional harmonic site motion at each of the principal diurnal (K1, O1, P1, Q1) and semi-diurnal (K2, M2, N2, S2) frequencies were obtained for 40 European stations with at least a 2-year observation span, using the GIPSY-OASIS II software package with reprocessed precise satellite orbits from JPL. All GPS data available from 2002.0 to 2010.0 were considered. 34 stations were situated close to the Atlantic coast; a further 6 inland stations at similar latitudes were processed as a check on solid Earth tide models. Inter-model OTL displacement differences are small, especially for the inland sites; the problematic Bristol Channel area of South-West England was excluded. We validated the quality of our GPS estimates by using and comparing three different analysis strategies: (1) Harmonic estimation of total tidal displacement in 24-hour Precise Point Positioning (PPP) batch solutions: harmonic displacements are estimated per coordinate component for each of the eight principal tidal constituents. 
OTL is not modelled a priori, and nodal corrections are applied in post-processing after combination of the daily results; (2) Harmonic estimation of residual tidal displacement in 24-hour PPP batch solutions: OTL is modelled a priori using the FES2004 model in the reference frame of the whole Earth system (CM); the residual harmonic displacements are estimated per component per principal tidal constituent. Minor tidal harmonics are removed a priori using the routine "hardisp" by D. Agnew. Because of this, post-processing nodal corrections are not applied; (3) Amplitude and phase from kinematic PPP processing: kinematic GPS processing with a priori OTL modelling using FES2004 and hardisp as in (2); amplitude spectra are later estimated from the entire coordinate time series using the Lomb-Scargle periodogram method. We typically obtain excellent (0.3-0.7mm except for the K1 and K2 constituents) phasor agreement between all three strategies, comparable to the inter-model agreement between computed OTL displacements and suggesting that the GPS analysis strategy robustly detects actual tidal displacements. For sites in inland Europe where computed OTL displacements are less than 10mm with inter-model differences of less than 0.2mm, residual harmonic amplitudes are also at the 0.3-0.7mm level, confirming that solid Earth tides are modelled to at least this accuracy. For GPS stations located in South-West England and Brittany, onshore of the continental shelf, anomalous residual tidal signals were detected of about 2-3mm magnitude for the vertical M2 OTL constituent (10% of the expected signal). In contrast, sites in the Iberian Peninsula, with similar expected OTL magnitudes, have residuals at the expected 0.3-0.7mm level. Sites near to the Bay of Biscay show transitional behaviour between these regimes. 
Therefore at these locations, the different modern ocean tide models that agree very well must all either be systematically in error, or the difference in behaviour may be caused by errors in the displacement Green's functions applicable to loads on the nearby continental shelf.
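The harmonic estimation used in strategies (1) and (2) reduces, for a single constituent, to a linear least-squares fit of in-phase and quadrature terms at the known tidal frequency; the M2 example below uses a synthetic displacement series, not the GPS coordinate time series:

```python
import numpy as np

M2_PERIOD_H = 12.4206012           # M2 period in hours
omega = 2 * np.pi / M2_PERIOD_H    # rad/hour

rng = np.random.default_rng(7)
t = np.arange(0, 24 * 365, 1.0)    # hourly samples over one year
amp_true, phase_true = 3.0, 0.8    # mm, rad (illustrative residual signal)
u = amp_true * np.cos(omega * t - phase_true) + rng.normal(0, 1.0, t.size)

# Fit u(t) = C*cos(wt) + S*sin(wt); amplitude and phase follow from (C, S)
G = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
C, S = np.linalg.lstsq(G, u, rcond=None)[0]
amp, phase = np.hypot(C, S), np.arctan2(S, C)
print(f"amplitude = {amp:.2f} mm, phase = {phase:.2f} rad")
```

With a year of hourly data the sub-millimetre recovery seen here is consistent with the 0.3-0.7mm agreement levels quoted above; resolving close constituents (e.g. K2 from S2) additionally requires a sufficiently long time span.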
An adaptive tracking observer for failure-detection systems
NASA Technical Reports Server (NTRS)
Sidar, M.
1982-01-01
The design problem of adaptive observers applied to linear multi-input, multi-output systems with constant or time-varying parameters is considered. It is shown that, in order to keep the observer's (or Kalman filter's) false-alarm rate (FAR) under a certain specified value, it is necessary to have acceptable matching between the observer (or KF) model and the system parameters. An adaptive observer algorithm is introduced in order to maintain the desired system-observer model matching, despite initial mismatching and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator, sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence of the adaptive process were obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate and fast parameter identification, in both the deterministic and stochastic cases.
A Priori Analyses of Three Subgrid-Scale Models for One-Parameter Families of Filters
NASA Technical Reports Server (NTRS)
Pruett, C. David; Adams, Nikolaus A.
1998-01-01
The decay of isotropic turbulence in a compressible flow is examined by direct numerical simulation (DNS). A priori analyses of the DNS data are then performed to evaluate three subgrid-scale (SGS) models for large-eddy simulation (LES): a generalized Smagorinsky model (M1), a stress-similarity model (M2), and a gradient model (M3). The models exploit one-parameter second- or fourth-order filters of Padé type, which permit the cutoff wavenumber k(sub c) to be tuned independently of the grid increment Δx. The modeled (M) and exact (E) SGS stresses are compared component-wise by correlation coefficients of the form C(E,M) computed over the entire three-dimensional fields. In general, M1 correlates poorly against exact stresses (C < 0.2), M3 correlates moderately well (C ≈ 0.6), and M2 correlates remarkably well (0.8 < C < 1.0). Specifically, correlations C(E,M2) are high provided the grid and test filters are of the same order. Moreover, the highest correlations (C ≈ 1.0) result whenever the grid and test filters are identical (in both order and cutoff). Finally, present results reveal the exact SGS stresses obtained by grid filters of differing orders to be only moderately well correlated. Thus, in LES the model should not be specified independently of the filter.
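The a priori metric C(E, M) is simply the field-wide correlation coefficient between an exact and a modelled stress component; a minimal sketch, with random fields standing in for the DNS data:

```python
import numpy as np

def correlation(E, M):
    """Correlation coefficient C(E, M) between exact and modelled SGS stress
    components, computed over the entire three-dimensional field."""
    e = E - E.mean()
    m = M - M.mean()
    return (e * m).sum() / np.sqrt((e * e).sum() * (m * m).sum())

# Illustrative check: a "model" equal to the exact field plus an error of the
# same variance should correlate at about 1/sqrt(2) ~ 0.71.
rng = np.random.default_rng(8)
E = rng.standard_normal((32, 32, 32))
M = E + rng.standard_normal(E.shape)
c = correlation(E, M)
print(f"C(E, M) = {c:.2f}")
```

In the actual analysis E and M come from filtering the DNS fields, and the statistic is computed per stress component; values near 0.2, 0.6, and above 0.8 then discriminate the three models as summarized above.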
NASA Astrophysics Data System (ADS)
Aleardi, Mattia
2018-01-01
We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well-log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In the synthetic and field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well-log information, confirm the applicability of the inversion approach and the suitability of the two RPMs for reservoir characterization in the investigated area.
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
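Flack and Schultz-type closures express the effective roughness as the surface-height standard deviation modulated by skewness, with two empirical coefficients. A minimal sketch of that functional form follows; the coefficient values are placeholders, not the fitted constants of the cited paper, and the abstract's finding is precisely that the skewness coefficient is not in fact constant.

```python
def roughness_length(sigma, skewness, a=1.0, b=1.0):
    """Flack & Schultz (2010)-style closure: roughness scales with the
    surface-height standard deviation sigma, modulated by skewness.
    Coefficients a and b are empirical; values here are illustrative."""
    return a * sigma * (1.0 + skewness) ** b

# Rougher (higher-variance) and more positively skewed surfaces both
# increase the effective roughness length.
z0_low = roughness_length(0.1, 0.2)
z0_high = roughness_length(0.3, 0.2)
print(z0_high > z0_low)  # → True
```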
NASA Astrophysics Data System (ADS)
Roy, C.; Romanowicz, B. A.
2017-12-01
Monte Carlo methods are powerful approaches to solving nonlinear problems and are becoming very popular in Earth sciences. One reason is that, at first glance, no constraints or explicit regularization of model parameters are required. At second glance, however, one realizes that regularization is done through a prior. The choice of this prior is subjective, and with it, unintended or undesired extra information can be injected into the problem. The principal criticism of Bayesian methods is that the prior can be "tuned" in order to get the expected solution. Consequently, detractors of the Bayesian method could easily argue that the solution is influenced by the form of the prior distribution, whose choice is subjective. Hence, models obtained with Monte Carlo methods are still highly debated. Here we investigate the influence of a priori constraints (i.e., fixed crustal discontinuities) on the posterior probability distributions of estimated parameters, that is, vertically polarized shear velocity VSV and radial anisotropy ξ, in a transdimensional Bayesian inversion for continental lithospheric structure. We follow upon the work of Calò et al. (2016), who jointly inverted converted phases (P to S) without deconvolution and surface wave dispersion data to obtain 1-D radially anisotropic shear wave velocity profiles in the North American craton. We aim at verifying whether the strong lithospheric layering found in the stable part of the craton is robust with respect to artifacts that might be caused by the methodology used. We test the hypothesis that the observed midlithospheric discontinuities result from (1) fixed crustal discontinuities in the reference model and (2) a fixed Vp/Vs ratio. The synthetic tests on two Earth models show that a fixed Vp/Vs ratio does not introduce artificial layering, even if the assumed value is slightly wrong. This is an important finding for real data inversion, where the true value is not always available or accurate. 
However, fixing crustal discontinuities can lead to the introduction of spurious layering and is therefore not recommended. Additionally, allowing the Vp/Vs ratio to vary does not help prevent this. Applying the modified approach resulting from these tests to two stations (FRB and FCC) in the North American craton, we confirm the presence of at least one midlithospheric low-velocity layer. We also confirm the difficulty of consistently detecting the lithosphere-asthenosphere boundary in the craton.
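The prior's influence on the posterior, central to the criticism discussed above, can be made concrete with the simplest conjugate Gaussian example (a deliberately reduced illustration, not the transdimensional inversion itself; all numbers are invented): with identical data, two different priors pull the posterior mean in different directions.

```python
def gaussian_posterior(mu_prior, var_prior, data_mean, var_obs, n):
    """Conjugate Gaussian update: posterior mean and variance for an unknown
    parameter given n observations with sample mean data_mean and
    per-observation variance var_obs."""
    w = n / var_obs
    var_post = 1.0 / (1.0 / var_prior + w)
    mu_post = var_post * (mu_prior / var_prior + w * data_mean)
    return mu_post, var_post

# Same data, two different priors: the posterior mean moves with the prior,
# illustrating why the choice of prior must be reported and tested.
m1, _ = gaussian_posterior(mu_prior=0.0, var_prior=0.5, data_mean=4.0, var_obs=4.0, n=5)
m2, _ = gaussian_posterior(mu_prior=4.0, var_prior=0.5, data_mean=4.0, var_obs=4.0, n=5)
print(m1 < m2)  # → True
```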
A method to combine spaceborne radar and radiometric observations of precipitation
NASA Astrophysics Data System (ADS)
Munchak, Stephen Joseph
This dissertation describes the development and application of a combined radar-radiometer rainfall retrieval algorithm for the Tropical Rainfall Measuring Mission (TRMM) satellite. A retrieval framework based upon optimal estimation theory is proposed wherein three parameters describing the raindrop size distribution (DSD), ice particle size distribution (PSD), and cloud water path (cLWP) are retrieved for each radar profile. The retrieved rainfall rate is found to be strongly sensitive to the a priori constraints in DSD and cLWP; thus, these parameters are tuned to match polarimetric radar estimates of rainfall near Kwajalein, Republic of the Marshall Islands. An independent validation against gauge-tuned radar rainfall estimates at Melbourne, FL shows agreement within 2%, which exceeds previous algorithms' ability to match rainfall at these two sites. The algorithm is then applied to two years of TRMM data over oceans to determine the sources of DSD variability. Three correlated sets of variables representing storm dynamics, background environment, and cloud microphysics are found to account for approximately 50% of the variability in the absolute and reflectivity-normalized median drop size. Structures of radar reflectivity are also identified and related to drop size, with these relationships being confirmed by ground-based polarimetric radar data from the North American Monsoon Experiment (NAME). Regional patterns of DSD and the sources of variability identified herein are also shown to be consistent with previous work documenting regional DSD properties. In particular, mid-latitude regions and tropical regions near land tend to have larger drops for a given reflectivity, whereas the smallest drops are found in the eastern Pacific Intertropical Convergence Zone. 
Due to properties of the DSD and rain water/cloud water partitioning that change with column water vapor, it is shown that increases in water vapor in a global warming scenario could lead to slight (1%) underestimates of rainfall trends by radar but larger overestimates (5%) by radiometer algorithms. Further analyses are performed to compare tropical oceanic mean rainfall rates between the combined algorithm and other sources. The combined algorithm is 15% higher than version 6 of the 2A25 radar-only algorithm and 6.6% higher than the Global Precipitation Climatology Project (GPCP) estimate for the same time-space domain. Despite being higher than these two sources, the combined total is not inconsistent with estimates of the other components of the energy budget given their uncertainties.
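The optimal-estimation framework underlying the retrieval combines observations with Gaussian a priori constraints. For a linear forward model y = Kx, the standard minimum-variance update can be sketched as below; the 3-element state, forward matrix, and covariances are illustrative stand-ins, not the algorithm's actual DSD/PSD/cLWP configuration.

```python
import numpy as np

def optimal_estimate(y, K, x_a, S_a, S_e):
    """Minimum-variance estimate of state x given measurements y = K x + noise,
    a priori state x_a with covariance S_a, and measurement covariance S_e."""
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)  # posterior mean
    return x_hat, S_hat

# Illustrative 3-parameter state observed through 3 measurements.
K = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 1.0]])
x_t = np.array([1.0, 2.0, 0.5])   # "true" state
y = K @ x_t                        # noise-free observations
x_a = np.zeros(3)
# Weak prior, precise measurements: the estimate recovers the truth.
x_hat, S_hat = optimal_estimate(y, K, x_a, np.eye(3) * 1e4, np.eye(3) * 1e-6)
print(np.allclose(x_hat, x_t, atol=1e-2))  # → True
```

Tightening S_a instead (a strong a priori constraint) would pull the estimate toward x_a, which is exactly why the retrieved rainfall rate is sensitive to the DSD and cLWP priors.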
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimates in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions are also given for the asymptotic variances of the probability of correct classification and the proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of thresholds on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect-label identification scheme will result in a wrong decision and are used in computing the thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least-squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudoinverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field-of-view (IFOV) scene-type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudoinverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
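At its core, the pseudoinverse spectral filter is a least-squares unmixing of the three channel readings into shortwave and longwave radiance components. The sketch below uses an assumed channel-response matrix with invented values, not ERBE's actual calibrated responses.

```python
import numpy as np

# Assumed channel-response matrix A: rows = (shortwave, longwave, total)
# channels, columns = (shortwave, longwave) radiance components.
# All values are illustrative placeholders.
A = np.array([[0.95, 0.05],
              [0.04, 0.92],
              [0.90, 0.88]])

def estimate_radiances(measurements):
    """Least-squares (pseudoinverse) estimate of the SW/LW radiance
    components from the three channel measurements."""
    return np.linalg.pinv(A) @ measurements

true_radiance = np.array([120.0, 80.0])  # illustrative W m^-2 sr^-1
y = A @ true_radiance                     # noise-free channel readings
print(np.allclose(estimate_radiances(y), true_radiance))  # → True
```

With three measurements constraining two unknowns, the overdetermined system averages down channel noise, which is one reason the filter tolerates modeling errors.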
Theoretical study of production of unique glasses in space
NASA Technical Reports Server (NTRS)
Larsen, D. C.
1974-01-01
Analytical functional relationships describing homogeneous nucleation and crystallization in various supercooled liquids were developed. The time- and temperature-dependent relationships of nucleation and crystallization (intrinsic properties) are being used to relate glass-forming tendency to extrinsic parameters such as cooling rate through computer simulation. Single-oxide systems are being studied initially to aid in developing workable kinetic models and to indicate the primary materials parameters affecting glass formation. The theory and analytical expressions developed for simple systems are then extended to complex oxide systems. A thorough understanding of nucleation and crystallization kinetics of glass-forming systems provides a priori knowledge of the ability of a given system to form a glass.
A pilot modeling technique for handling-qualities research
NASA Technical Reports Server (NTRS)
Hess, R. A.
1980-01-01
A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
Localizing Ground Penetrating RADAR: A Step Towards Robust Autonomous Ground Vehicle Localization
2016-07-14
localization designed to complement existing approaches with a low sensitivity to failure modes of LIDAR, camera, and GPS/INS sensors due to its low...the detailed design and results from highway testing, which uses a simple heuristic for fusing LGPR estimates with a GPS/INS system. Cross-track... designed to enable a priori map-based localization. LGPR offers complementary capabilities to traditional optics-based approaches to map-based
NASA Astrophysics Data System (ADS)
Kashiwabara, Takahito
Strong solutions of the non-stationary Navier-Stokes equations under non-linearized slip or leak boundary conditions are investigated. We show that the problems are formulated by a variational inequality of parabolic type, for which uniqueness is established. Using Galerkin's method and deriving a priori estimates, we prove global and local existence for 2D and 3D slip problems, respectively. For leak problems, under a no-leak assumption at t=0, we prove local existence in the 2D and 3D cases. Compatibility conditions for initial states play a significant role in the estimates.
Invariant polarimetric contrast parameters of light with Gaussian fluctuations in three dimensions.
Réfrégier, Philippe; Roche, Muriel; Goudail, François
2006-01-01
We propose a rigorous definition of the minimal set of parameters that characterize the difference between two partially polarized states of light whose electric fields vary in three dimensions with Gaussian fluctuations. Although two such states are a priori defined by eighteen parameters, we demonstrate that the performance of processing tasks such as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by three scalar functions of these parameters. These functions define a "polarimetric contrast" that simplifies the analysis and the specification of processing techniques on polarimetric signals and images. This result can also be used to analyze the definition of the degree of polarization of a three-dimensional state of light with Gaussian fluctuations by comparing it, in terms of its polarimetric contrast parameters, with totally depolarized light. We show that these contrast parameters are a simple function of the degrees of polarization previously proposed by Barakat [Opt. Acta 30, 1171 (1983)] and Setälä et al. [Phys. Rev. Lett. 88, 123902 (2002)]. Finally, we analyze the dimension of the set of contrast parameters in different particular situations.
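The Setälä et al. three-dimensional degree of polarization cited above is computed from the 3×3 coherency matrix Φ as P² = (3/2)[tr(Φ²)/tr(Φ)² − 1/3]. A minimal sketch, with illustrative field vectors:

```python
import math
import numpy as np

def degree_of_polarization_3d(phi):
    """Setala et al. (2002) 3-D degree of polarization from the 3x3
    coherency matrix phi: P^2 = (3/2)*(tr(phi@phi)/tr(phi)**2 - 1/3)."""
    t1 = np.trace(phi @ phi).real
    t2 = np.trace(phi).real ** 2
    val = 1.5 * (t1 / t2 - 1.0 / 3.0)
    return math.sqrt(max(val, 0.0))  # clamp tiny negative round-off

# Fully polarized field (rank-1 coherency matrix) -> P = 1.
e = np.array([1.0, 1.0j, 0.0])
phi_pol = np.outer(e, e.conj())
# Fully unpolarized 3-D field -> P = 0.
phi_unpol = np.eye(3) / 3.0

print(round(degree_of_polarization_3d(phi_pol), 6))    # → 1.0
print(round(degree_of_polarization_3d(phi_unpol), 6))  # → 0.0
```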
Tropospheric products of the second GOP European GNSS reprocessing (1996-2014)
NASA Astrophysics Data System (ADS)
Dousa, Jan; Vaclavovic, Pavel; Elias, Michal
2017-09-01
In this paper, we present results of the second reprocessing of all data from 1996 to 2014 from all stations in the International Association of Geodesy (IAG) Reference Frame Sub-Commission for Europe (EUREF) Permanent Network (EPN), as performed at the Geodetic Observatory Pecný (GOP). While the original goal of this research was to ultimately contribute to the realization of a new European Terrestrial Reference System (ETRS), we also aim to provide a new set of GNSS (Global Navigation Satellite System) tropospheric parameter time series with possible applications to climate research. To achieve these goals, we improved a strategy to guarantee the continuity of these tropospheric parameters and we prepared several variants of troposphere modelling. We then assessed all solutions in terms of the repeatability of coordinates, as an internal evaluation of applied models and strategies, and by comparing zenith tropospheric delays (ZTDs) and horizontal gradients with those of the ERA-Interim numerical weather model (NWM) reanalysis. When compared to the GOP Repro1 (first EUREF reprocessing) solution, the results of the GOP Repro2 (second EUREF reprocessing) yielded improvements of approximately 50 and 25 % in the repeatability of the horizontal and vertical components, respectively, and of approximately 9 % in tropospheric parameters. Vertical repeatability was reduced from 4.14 to 3.73 mm when using the VMF1 mapping function, a priori ZHD (zenith hydrostatic delay), and non-tidal atmospheric loading corrections from actual weather data. Raising the elevation cut-off angle from 3 to 7° and then to 10° increased the RMS of coordinate repeatability, which was then confirmed by independently comparing GNSS tropospheric parameters with the NWM reanalysis. The assessment of tropospheric horizontal gradients with respect to the ERA-Interim reanalysis revealed a strong sensitivity of estimated gradients to the quality of GNSS antenna tracking performance. 
This impact was demonstrated at the Mallorca station, where gradients systematically grew up to 5 mm during the period between 2003 and 2008, before this behaviour disappeared when the antenna at the station was changed. The impact of processing variants on long-term ZTD trend estimates was assessed at 172 EUREF stations with time series longer than 10 years. The most significant site-specific impact was due to the non-tidal atmospheric loading followed by the impact of changing the elevation cut-off angle from 3 to 10°. The other processing strategy had a very small or negligible impact on estimated trends.
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Williams, M.; Fisher, R. A.; Oleson, K. W.
2014-05-01
The empirical Ball-Berry stomatal conductance model is commonly used in Earth system models to simulate biotic regulation of evapotranspiration. However, the dependence of stomatal conductance (gs) on vapor pressure deficit (Ds) and soil moisture must both be empirically parameterized. We evaluated the Ball-Berry model used in the Community Land Model version 4.5 (CLM4.5) and an alternative stomatal conductance model that links leaf gas exchange, plant hydraulic constraints, and the soil-plant-atmosphere continuum (SPA) to numerically optimize photosynthetic carbon gain per unit water loss while preventing leaf water potential from dropping below a critical minimum level. We evaluated two alternative optimization algorithms: intrinsic water-use efficiency (ΔAn/Δgs, the marginal carbon gain of stomatal opening) and water-use efficiency (ΔAn/ΔEl, the marginal carbon gain of water loss). We implemented the stomatal models in a multi-layer plant canopy model to resolve profiles of gas exchange, leaf water potential, and plant hydraulics within the canopy, and evaluated the simulations using: (1) leaf analyses; (2) canopy net radiation, sensible heat flux, latent heat flux, and gross primary production at six AmeriFlux sites spanning 51 site-years; and (3) parameter sensitivity analyses. Without soil moisture stress, the performance of the SPA stomatal conductance model was generally comparable to or somewhat better than the Ball-Berry model in flux tower simulations, but was significantly better than the Ball-Berry model when there was soil moisture stress. Functional dependence of gs on soil moisture emerged from the physiological theory linking leaf water-use efficiency and water flow to and from the leaf along the soil-to-leaf pathway, rather than being imposed a priori as in the Ball-Berry model. Similar functional dependence of gs on Ds emerged from the water-use efficiency optimization. 
Sensitivity analyses showed that two parameters (stomatal efficiency and root hydraulic conductivity) minimized errors with the SPA stomatal conductance model. The critical stomatal efficiency for optimization (ι) was estimated from leaf trait datasets and is related to the slope parameter (g1) of the Ball-Berry model. The optimized parameter value was consistent with this estimate. Optimized root hydraulic conductivity was consistent with estimates from literature surveys. The two central concepts embodied in the stomatal model, that plants account for both water-use efficiency and for hydraulic safety in regulating stomatal conductance, imply a notion of optimal plant strategies and provide testable model hypotheses, rather than empirical descriptions of plant behavior.
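The empirical Ball-Berry closure being compared takes the standard form gs = g0 + g1·An·hs/cs (minimum conductance plus a slope term in net assimilation, leaf-surface humidity, and CO2 mole fraction). A minimal sketch, with illustrative default parameter values rather than CLM4.5's calibrated ones:

```python
def ball_berry(an, hs, cs, g0=0.01, g1=9.0):
    """Ball-Berry stomatal conductance: minimum conductance g0 plus slope g1
    times net assimilation an, leaf-surface relative humidity hs (0-1),
    divided by leaf-surface CO2 mole fraction cs.
    Parameter values are illustrative defaults, not CLM4.5's."""
    return g0 + g1 * an * hs / cs

# Stomata open with higher assimilation and humidity, close as humidity drops.
gs_humid = ball_berry(an=10.0, hs=0.7, cs=400.0)
gs_dry = ball_berry(an=10.0, hs=0.4, cs=400.0)
print(gs_humid > gs_dry)  # → True
```

Note the empirical character criticized in the abstract: any soil-moisture effect must be bolted on by scaling g1 or An, whereas in the SPA model it emerges from plant hydraulics.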
Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation
NASA Technical Reports Server (NTRS)
Rakoczy, John M.; Herren, Kenneth A.
2008-01-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation
NASA Technical Reports Server (NTRS)
Rakoczy, John; Herren, Kenneth
2007-01-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
NASA Astrophysics Data System (ADS)
Zhang, H. F.; Chen, B. Z.; Machida, T.; Matsueda, H.; Sawa, Y.; Fukuyama, Y.; Langenfelds, R.; van der Schoot, M.; Xu, G.; Yan, J. W.; Cheng, M. L.; Zhou, L. X.; Tans, P. P.; Peters, W.
2014-06-01
Current estimates of the terrestrial carbon fluxes in Asia show large uncertainties, particularly in the boreal and mid-latitudes and in China. In this paper, we present an updated carbon flux estimate for Asia ("Asia" refers to lands as far west as the Urals and is divided into boreal Eurasia, temperate Eurasia and tropical Asia based on TransCom regions) by introducing aircraft CO2 measurements from the CONTRAIL (Comprehensive Observation Network for Trace gases by Airline) program into an inversion modeling system based on the CarbonTracker framework. We estimated that the average annual total Asian terrestrial CO2 sink was about -1.56 Pg C yr-1 over the period 2006-2010, which offsets about one-third of the fossil fuel emissions from Asia (+4.15 Pg C yr-1). The uncertainty of the terrestrial uptake estimate was derived from a set of sensitivity tests and ranged from -1.07 to -1.80 Pg C yr-1, comparable to the formal Gaussian error of ±1.18 Pg C yr-1 (1-sigma). The largest sink was found in forests, predominantly in coniferous forests (-0.64 ± 0.70 Pg C yr-1) and mixed forests (-0.14 ± 0.27 Pg C yr-1); the second- and third-largest carbon sinks were found in grass/shrub lands and croplands, accounting for -0.44 ± 0.48 Pg C yr-1 and -0.20 ± 0.48 Pg C yr-1, respectively. The carbon fluxes per ecosystem type have large a priori Gaussian uncertainties, and the reduction of uncertainty based on assimilation of sparse observations over Asia is modest (8.7-25.5%) for most individual ecosystems. The ecosystem flux adjustments follow the detailed a priori spatial patterns by design, which further increases the reliance on the a priori biosphere exchange model. The peak-to-peak amplitude of inter-annual variability (IAV) was 0.57 Pg C yr-1, ranging from -1.71 Pg C yr-1 to -2.28 Pg C yr-1. 
The IAV analysis reveals that the Asian CO2 sink was sensitive to climate variations, with the lowest uptake in 2010 concurrent with a summer flood and autumn drought and the largest CO2 sink in 2009 owing to favorable temperature and plentiful precipitation conditions. We also found the inclusion of the CONTRAIL data in the inversion modeling system reduced the uncertainty by 11% over the whole Asian region, with a large reduction in the southeast of boreal Eurasia, southeast of temperate Eurasia and most tropical Asian areas.
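The uncertainty-reduction figures quoted above compare posterior to a priori Gaussian uncertainty after assimilating observations. For a single scalar flux the mechanics reduce to a one-line Bayesian update, sketched below; the observation noise value is an invented illustration, not a CarbonTracker setting.

```python
import math

def posterior_sigma(sigma_prior, sigma_obs, h=1.0):
    """Posterior standard deviation for a scalar flux x observed through
    y = h*x + noise, given a Gaussian prior sigma_prior and observation
    noise sigma_obs (precisions add)."""
    var = 1.0 / (1.0 / sigma_prior**2 + h**2 / sigma_obs**2)
    return math.sqrt(var)

sigma_prior = 1.18  # Pg C yr-1, the 1-sigma formal prior error in the text
sigma_post = posterior_sigma(sigma_prior, sigma_obs=3.0)  # illustrative noise
reduction = 1.0 - sigma_post / sigma_prior  # fractional uncertainty reduction
print(0.0 < reduction < 1.0)  # → True
```

Sparse or noisy observations (large sigma_obs) leave the posterior close to the prior, which is exactly the modest 8.7-25.5% per-ecosystem reduction reported above.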
NASA Astrophysics Data System (ADS)
Nijzink, Remko; Hutton, Christopher; Pechlivanidis, Ilias; Capell, René; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; McGuire, Kevin; Savenije, Hubert; Hrachowitz, Markus
2016-12-01
The core component of many hydrological systems, the moisture storage capacity available to vegetation, is impossible to observe directly at the catchment scale and is typically treated as a calibration parameter or obtained from a priori available soil characteristics combined with estimates of rooting depth. Often this parameter is considered to remain constant in time. Using long-term data (30-40 years) from three experimental catchments that underwent significant land cover change, we tested the hypotheses that: (1) the root-zone storage capacity significantly changes after deforestation, (2) changes in the root-zone storage capacity can to a large extent explain post-treatment changes to the hydrological regimes and that (3) a time-dynamic formulation of the root-zone storage can improve the performance of a hydrological model. A recently introduced method to estimate catchment-scale root-zone storage capacities based on climate data (i.e. observed rainfall and an estimate of transpiration) was used to reproduce the temporal evolution of root-zone storage capacity under change. Briefly, the maximum deficit that arises from the difference between cumulative daily precipitation and transpiration can be considered as a proxy for root-zone storage capacity. This value was compared to the value obtained from four different conceptual hydrological models that were calibrated for consecutive 2-year windows. It was found that water-balance-derived root-zone storage capacities were similar to the values obtained from calibration of the hydrological models. A sharp decline in root-zone storage capacity was observed after deforestation, followed by a gradual recovery, for two of the three catchments. Trend analysis suggested hydrological recovery periods between 5 and 13 years after deforestation. 
In a proof-of-concept analysis, one of the hydrological models was adapted to allow dynamically changing root-zone storage capacities, following the observed changes due to deforestation. Although the overall performance of the modified model did not considerably change, in 51 % of all the evaluated hydrological signatures, considering all three catchments, improvements were observed when adding a time-variant representation of the root-zone storage to the model. In summary, it is shown that root-zone moisture storage capacities can be highly affected by deforestation and climatic influences and that a simple method exclusively based on climate data can not only provide robust, catchment-scale estimates of this critical parameter, but also reflect its time-dynamic behaviour after deforestation.
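The climate-based estimate described above, the maximum deficit accumulated between daily precipitation and transpiration, can be sketched as a running water-balance deficit. The daily series below are invented for illustration.

```python
def root_zone_storage_capacity(precip, transpiration):
    """Root-zone storage capacity proxy: the maximum cumulative deficit that
    develops when daily transpiration exceeds daily precipitation.
    The deficit shrinks toward zero whenever rainfall replenishes it."""
    deficit, max_deficit = 0.0, 0.0
    for p, t in zip(precip, transpiration):
        deficit = max(0.0, deficit + t - p)  # deficit grows in dry spells
        max_deficit = max(max_deficit, deficit)
    return max_deficit

# Illustrative mm/day series: a wet spell followed by a four-day dry-down.
p = [5.0, 4.0, 0.0, 0.0, 0.0, 0.0, 6.0]
t = [2.0, 2.0, 3.0, 3.0, 3.0, 3.0, 2.0]
print(root_zone_storage_capacity(p, t))  # → 12.0
```

After deforestation, transpiration drops, so the maximum deficit, and hence the inferred storage capacity, declines, consistent with the post-treatment decline reported above.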
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
Prediction of composites behavior undergoing an ATP process through data-mining
NASA Astrophysics Data System (ADS)
Martin, Clara Argerich; Collado, Angel Leon; Pinillo, Rubén Ibañez; Barasinski, Anaïs; Abisset-Chavanne, Emmanuelle; Chinesta, Francisco
2018-05-01
The need to characterize composite surfaces for distinct mechanical or physical processes leads to different ways of evaluating the state of the surface. During many manufacturing processes deformation occurs, thus hindering composite classification for fabrication processes. In this work we focus on the challenge of identifying the surface behavior a priori in order to optimize manufacturing. We propose and validate the curvature of the surface as a reliable parameter and develop a tool that allows the prediction of the surface behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next-generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting the electrochemical performance and the thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification (including under aging), and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
Development of Genuine Neural Network Prototype Chip
1991-01-28
priori distribution is equivalent, and more readily visualized with a rank curve. The sonar signal data consisted of approximately 85% class Target and...15% class Clutter. For this reason, the rank curves for the class Clutter were used for device parameter analysis. R & D STATUS REPORT 1/28/91 N00014...the signal CLASSLD#. Four 10-bit class probabilities are available on the output bus (C0-C9, C16-C25, C32-C41 and C48-C57) at each clock cycle. A
Assessing the Importance of Prior Biospheric Fluxes on Inverse Model Estimates of CO2
NASA Astrophysics Data System (ADS)
Philip, S.; Johnson, M. S.; Potter, C. S.; Genovese, V. B.
2017-12-01
Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric sources/sinks. The processes controlling terrestrial biosphere-atmosphere carbon exchange are currently not fully understood, resulting in models having significant differences in the quantification of biospheric CO2 fluxes. Currently, atmospheric chemical transport models (CTM) and global climate models (GCM) use multiple different biospheric CO2 flux models, resulting in large differences in simulating the global carbon cycle. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission was designed to allow for improved understanding of the processes involved in the exchange of carbon between terrestrial ecosystems and the atmosphere, therefore allowing for more accurate assessment of the seasonal/inter-annual variability of CO2. OCO-2 provides much-needed CO2 observations in data-limited regions, allowing for the evaluation of model simulations of greenhouse gases (GHG) and facilitating global/regional estimates of "top-down" CO2 fluxes. We conduct a 4-D Variational (4D-Var) data assimilation with the GEOS-Chem (Goddard Earth Observing System-Chemistry) CTM using 1) OCO-2 land nadir and land glint retrievals and 2) global in situ surface flask observations to constrain biospheric CO2 fluxes. We apply different state-of-the-science year-specific CO2 flux models (e.g., NASA-CASA (NASA-Carnegie Ames Stanford Approach), CASA-GFED (Global Fire Emissions Database), Simple Biosphere Model version 4 (SiB-4), and LPJ (Lund-Potsdam-Jena)) to assess the impact of "a priori" flux predictions on "a posteriori" estimates. We will present the "top-down" CO2 flux estimates for the year 2015 using OCO-2 and in situ observations, and a complete indirect evaluation of the a priori and a posteriori flux estimates using independent in situ observations. 
We will also present our assessment of the variability of "top-down" CO2 flux estimates when using different biospheric CO2 flux models. This work will improve our understanding of the global carbon cycle, specifically, how OCO-2 observations can be used to constrain biospheric CO2 flux model estimates.
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Ginn, T. R.; Rousseau-Gueutin, P.; Leray, S.
2015-06-01
While central to groundwater resources and contaminant fate, Transit Time Distributions (TTDs) are never directly accessible from field measurements but are always deduced from a combination of tracer data and more or less involved models. We evaluate the predictive capabilities of approximate distributions (Lumped Parameter Models, abbreviated as LPMs) instead of fully developed aquifer models. We develop a generic assessment methodology based on synthetic aquifer models to establish references for observable quantities, such as tracer concentrations, and prediction targets, such as groundwater renewal times. Candidate LPMs are calibrated on the observable tracer concentrations and used to infer renewal time predictions, which are compared with the reference ones. This methodology is applied to the pumped crystalline aquifer of Plœmeur (Brittany, France), where flows leak through a micaschist aquitard to reach a sloping aquifer in which they radially converge to the producing well, yielding broad rather than multi-modal TTDs. One-, two- and three-parameter LPMs were calibrated to a corresponding number of simulated reference anthropogenic tracer concentrations (CFC-11, 85Kr and SF6). Extensive statistical analysis over the aquifer shows that a good fit of the anthropogenic tracer concentrations is neither a necessary nor a sufficient condition for acceptable predictive capability. Prediction accuracy is, however, strongly conditioned by the use of a priori relevant LPMs. Only adequate LPM shapes yield unbiased estimations. In the case of Plœmeur, relevant LPMs should have two parameters, to capture the mean and the standard deviation of the residence times, and cover the first few decades [0; 50 years]. Inverse Gaussian and shifted exponential models performed equally well over the wide variety of reference TTDs, from strongly peaked in recharge zones where flows are diverging to broadly distributed in more convergent zones. 
When using two sufficiently different atmospheric tracers such as CFC-11 and 85Kr, groundwater renewal time predictions are accurate to 1-5 years when estimating mean transit times of a few decades (10-50 years). One-parameter LPMs calibrated on a single atmospheric tracer lead to substantially larger errors, of the order of 10 years, while three-parameter LPMs calibrated with a third atmospheric tracer (SF6) do not improve the predictive capability. Although based on a specific site, this study highlights the high predictive capacity of two atmospheric tracers covering the same time range with sufficiently different atmospheric concentration histories.
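The LPM calibration loop described above — convolve an atmospheric input chronicle with a candidate TTD, then adjust the TTD parameter until modeled concentrations match observations — can be sketched as follows. An exponential TTD and invented chronicles stand in for the inverse Gaussian / shifted exponential models and the real CFC-11 or 85Kr records.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of lumped-parameter-model (LPM) calibration: a candidate TTD is
# convolved with atmospheric tracer input chronicles and its parameter
# adjusted until modeled concentrations match the observations. The TTD
# shape and chronicles below are illustrative, not the paper's models/data.

tau = np.arange(0.0, 100.0, 1.0)              # transit times (years)

def modeled_conc(mean_tt, chronicle):
    """Concentration at the well: input history weighted by the TTD."""
    g = np.exp(-tau / mean_tt) / mean_tt      # exponential TTD
    g = g / g.sum()                           # discrete normalization
    return float(np.sum(chronicle * g))

# Two synthetic atmospheric chronicles (value at lag tau years)
chron_a = 100.0 * np.exp(-tau / 30.0)         # decays with age, CFC-like
chron_b = 50.0 + 0.5 * tau                    # a contrasting trend

true_mtt = 25.0                               # "true" mean transit time
obs = np.array([modeled_conc(true_mtt, chron_a),
                modeled_conc(true_mtt, chron_b)])

# Calibrate the single LPM parameter to the two tracer concentrations
fit = least_squares(
    lambda p: np.array([modeled_conc(p[0], chron_a),
                        modeled_conc(p[0], chron_b)]) - obs,
    x0=[10.0], bounds=(1.0, 90.0))
mtt_hat = fit.x[0]
```

Using two tracers with contrasting chronicles is what makes the single parameter well constrained here, echoing the paper's finding that two sufficiently different tracers are the key ingredient.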
The constitutive a priori and the distinction between mathematical and physical possibility
NASA Astrophysics Data System (ADS)
Everett, Jonathan
2015-11-01
This paper is concerned with Friedman's recent revival of the notion of the relativized a priori. It is particularly concerned with addressing the question as to how Friedman's understanding of the constitutive function of the a priori has changed since his defence of the idea in his Dynamics of Reason. Friedman's understanding of the a priori remains influenced by Reichenbach's initial defence of the idea; I argue that this notion of the a priori does not naturally lend itself to describing the historical development of space-time physics. Friedman's analysis of the role of the rotating frame thought experiment in the development of general relativity - which he suggests made the mathematical possibility of four-dimensional space-time a genuine physical possibility - has a central role in his argument. I analyse this thought experiment and argue that it is better understood by following Cassirer and placing emphasis on regulative principles. Furthermore, I argue that Cassirer's Kantian framework enables us to capture Friedman's key insights into the nature of the constitutive a priori.
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance under real-world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by, e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis of the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition that guarantees state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than to a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. 
Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object and learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound. Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
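The core ICL idea — replace state-derivative measurements with integrals of the dynamics over recorded windows — can be shown on a toy scalar system. Everything below (the system, gains, window length) is an invented illustration, not the pose estimator developed in the dissertation.

```python
import numpy as np

# Toy illustration of integral concurrent learning (ICL): for dynamics
#   x_dot = theta * f(x),   here with f(x) = -x,
# integrating over recorded windows removes the need for derivative
# estimates:  x(t+T) - x(t) = theta * integral of f(x) over [t, t+T].

theta_true = 0.8
dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = np.empty_like(t)
x[0] = 2.0
for k in range(len(t) - 1):                  # simulate (explicit Euler)
    x[k + 1] = x[k] + dt * theta_true * (-x[k])

# Build the ICL "history stack" from non-overlapping 0.5 s windows
win = 500
Y, dX = [], []
for k in range(0, len(t) - win, win):
    Y.append(dt * np.sum(-x[k:k + win]))     # integral of f(x) over window
    dX.append(x[k + win] - x[k])             # measured state change
Y, dX = np.array(Y), np.array(dX)

# Gradient update driven purely by the recorded data, no derivatives
theta_hat, gamma = 0.0, 0.5
for _ in range(200):
    theta_hat += gamma * np.sum(Y * (dX - Y * theta_hat))
```

The stacked pairs (Y, dX) play the role of the richness condition: once they span the parameter space, the update converges without ongoing excitation.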
F18 EF5 PET/CT Imaging in Patients with Brain Metastases from Breast Cancer
2012-07-01
been demonstrated to improve local control and survival in select patients after WBRT. At present we do not have any method of determining a priori...relapse after WBRT would represent a significant step forward in the management of patients with brain metastases from breast cancer. We propose to...use a noninvasive imaging method to detect residual tumor hypoxia in patients receiving WBRT. Body: Task 1. To estimate the degree of hypoxia
Pose estimation of industrial objects towards robot operation
NASA Astrophysics Data System (ADS)
Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu
2017-10-01
With the advantages of wide range, non-contact operation and high flexibility, visual pose estimation has been widely applied in modern industry, robot guidance and other engineering practice. However, due to the influence of complicated industrial environments, outside interference factors, lack of object features, camera restrictions and other limitations, visual pose estimation still faces many challenges. Focusing on these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape features of objects against an a priori 3D model database of targets, the method realizes recognition of the target. A pose estimate of the object can then be determined from the monocular vision measurement model. The experimental results show that this method can estimate the pose of rigid objects from poor image information, and provides a guiding basis for the operation of industrial robots.
NASA Astrophysics Data System (ADS)
Shimada, M.; Sato, C.; Hoshi, Y.; Yamada, Y.
2009-08-01
Our newly developed method using spatially and time-resolved reflectances can easily estimate the absorption coefficients of each layer in a two-layered medium if the thickness of the upper layer and the reduced scattering coefficients of the two layers are known a priori. We experimentally validated this method using phantoms and examined the feasibility of estimating the absorption coefficients of tissues in human heads. In the case of a homogeneous plastic phantom (polyacetal block), the absorption coefficient estimated by our method agreed well with that obtained by a conventional method. Also, in the case of two-layered phantoms, our method successfully estimated the absorption coefficients of the two layers. Furthermore, the absorption coefficients of the extracerebral and cerebral tissues inside human foreheads were estimated under the assumption that the human heads were two-layered media. It was found that the absorption coefficients of the cerebral tissues were larger than those of the extracerebral tissues.
MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim
2018-01-01
A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, an a priori estimate of the channel is obtained by block orthogonal matching pursuit; afterward, that estimated channel is used to calculate the linear minimum mean square error estimate of the received pilots. Finally, block compressive sampling matching pursuit uses the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.
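The first real-time step — block orthogonal matching pursuit on a block-sparse channel — can be sketched generically. The sensing (pilot) matrix, dimensions, and tap blocks below are invented; the paper's coherence-optimized pilot design is not reproduced.

```python
import numpy as np

# Generic block orthogonal matching pursuit (BOMP) sketch for a channel
# whose taps cluster in a few contiguous blocks. Sizes and values are
# illustrative; noise-free recovery is assumed for simplicity.

def bomp(A, y, block_size, n_active):
    """Greedy recovery of a block-sparse x from y = A x (noise-free)."""
    n = A.shape[1]
    blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
    residual = y.copy()
    chosen = []
    for _ in range(n_active):
        # pick the block whose columns correlate most with the residual
        scores = [np.linalg.norm(A[:, idx].T @ residual) for idx in blocks]
        best = int(np.argmax(scores))
        if best not in chosen:
            chosen.append(best)
        support = np.concatenate([blocks[b] for b in chosen])
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(n)
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(1)
m, n, bs = 56, 64, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in pilot matrix
x_true = np.zeros(n)
x_true[8:12] = 1.0                              # first active tap block
x_true[40:44] = -1.0                            # second active tap block
y = A @ x_true

x_hat = bomp(A, y, block_size=bs, n_active=2)
```

Exploiting the block structure, rather than treating each tap independently, is what lets far fewer pilots than unknowns suffice here.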
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Sheng; Li, Hongyi; Huang, Maoyi
2014-07-21
Subsurface stormflow is an important component of the rainfall–runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of physical basis of these common parameterizations precludes a priori estimation of the stormflow (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage–discharge relationship relating to subsurface stormflow from a top-down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage–discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law form storage–discharge relationship is closely related to the catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field-scale soil hydraulic properties as suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale. At a fundamental level these results point to the need for more detailed exploration of the co-dependence of soil, vegetation and topography with climate.
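The recession-curve analysis underlying these regressions can be sketched in a few lines: synthesize a recession limb obeying a power-law storage-discharge relation, then recover the coefficient and exponent by log-log regression. The parameter values are arbitrary illustrations, not values from the 50 catchments.

```python
import numpy as np

# Sketch of recession-curve analysis for a power-law relation
#   -dQ/dt = a * Q^b.
# A recession limb is synthesized by explicit Euler (so the left-endpoint
# discharge makes the discrete relation exact), then (a, b) are recovered
# by ordinary least squares in log space.

a_true, b_true = 0.05, 1.5
dt = 0.1                                  # time step (days)
q = [10.0]                                # initial discharge
for _ in range(600):
    q.append(q[-1] - dt * a_true * q[-1] ** b_true)
q = np.array(q)

neg_dqdt = -np.diff(q) / dt               # -dQ/dt per step (positive)
q_left = q[:-1]                           # discharge at the step start

# log(-dQ/dt) = log a + b * log Q  ->  ordinary least squares
X = np.column_stack([np.ones_like(q_left), np.log(q_left)])
coef, *_ = np.linalg.lstsq(X, np.log(neg_dqdt), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

In practice the fitted coefficient and exponent, not known truths as here, are the quantities regressed against climate, soil, and topographic descriptors.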
Automated determination of arterial input function for DCE-MRI of the prostate
NASA Astrophysics Data System (ADS)
Zhu, Yingxuan; Chang, Ming-Ching; Gupta, Sandeep
2011-03-01
Prostate cancer is one of the commonest cancers in the world. Dynamic contrast enhanced MRI (DCE-MRI) provides an opportunity for non-invasive diagnosis, staging, and treatment monitoring. Quantitative analysis of DCE-MRI relies on determination of an accurate arterial input function (AIF). Although several methods for automated AIF detection have been proposed in the literature, none are optimized for use in prostate DCE-MRI, which is particularly challenging due to large spatial signal inhomogeneity. In this paper, we propose a fully automated method for determining the AIF from prostate DCE-MRI. Our method is based on modeling pixel uptake curves as gamma variate functions (GVFs). First, we analytically compute bounds on the GVF parameters for more robust fitting. Next, we approximate a GVF for each pixel based on local time-domain information, and eliminate pixels with falsely estimated AIFs using the deduced upper and lower bounds. This makes the algorithm robust to signal inhomogeneity. After that, using spatial information such as similarity and distance between pixels, we formulate the global AIF selection as an energy minimization problem and solve it with a message passing algorithm to further rule out weak pixels and optimize the detected AIF. Our method is fully automated, without training or a priori setting of parameters. Experimental results on clinical data show that our method obtained promising detection accuracy (all detected pixels inside major arteries) and a very good match with expert-traced manual AIFs.
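The first stage of the pipeline — fitting each pixel's uptake curve with a gamma variate function — can be sketched as below. The bounds shown are illustrative box constraints, not the analytically derived bounds of the paper, and the "data" are a noise-free synthetic curve.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting a gamma variate function (GVF) to a contrast-uptake
# curve, the per-pixel step of automated AIF detection. All parameter
# values, bounds, and the initial guess are illustrative.

def gvf(t, t0, k, alpha, beta):
    """Zero before bolus arrival t0, then k*(t-t0)^alpha*exp(-(t-t0)/beta)."""
    s = np.clip(t - t0, 0.0, None)
    return k * s ** alpha * np.exp(-s / beta)

t = np.linspace(0.0, 60.0, 120)                    # seconds
signal = gvf(t, 8.0, 5.0, 2.0, 4.0)                # synthetic uptake curve

popt, _ = curve_fit(
    gvf, t, signal,
    p0=[7.0, 3.0, 1.8, 3.5],                       # rough initial guess
    bounds=([0.0, 0.0, 0.5, 0.5], [20.0, 50.0, 5.0, 20.0]))
```

Pixels whose fitted parameters fall outside physiologically plausible bounds would then be discarded before the spatial (energy-minimization) selection stage.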
ECHO: A reference-free short-read error correction algorithm
Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.
2011-01-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads without the need for a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. It is also shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
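The intuition behind reference-free correction — overlapping reads vote on each base, so an isolated disagreement is likely a sequencing error — can be shown with a toy majority-vote scheme. This is NOT the probabilistic model used by ECHO (which learns error characteristics per run and handles unaligned, heterozygous data); it assumes same-length, pre-aligned reads purely for illustration.

```python
from collections import Counter

# Toy reference-free error correction by neighborhood consensus:
# pre-aligned reads vote column-by-column, and a base disagreeing with
# a strong majority is replaced. Purely illustrative; not ECHO's model.

def correct_reads(reads, min_votes=3):
    """Correct same-length, pre-aligned reads by per-column majority."""
    length = len(reads[0])
    corrected = []
    for read in reads:
        fixed = []
        for i in range(length):
            votes = Counter(r[i] for r in reads)
            base, count = votes.most_common(1)[0]
            # overwrite only when the consensus is strong enough
            fixed.append(base if count >= min_votes else read[i])
        corrected.append("".join(fixed))
    return corrected

reads = ["ACGTACGT",
         "ACGTACGT",
         "ACCTACGT",   # simulated base-call error at position 2
         "ACGTACGT"]
fixed = correct_reads(reads)
```

ECHO replaces the hard vote threshold with a learned probabilistic model, which is what lets it distinguish true heterozygous sites from errors.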
Pose-free structure from motion using depth from motion constraints.
Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G
2011-10-01
Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimating the geometry of a scene is to track scene features over several images and reconstruct their positions in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving far fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point position coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than minimizing the total reprojection error, as done in the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Burton, S. P.; Liu, X.; Chemyakin, E.; Hostetler, C. A.; Stamnes, S.; Moore, R.; Sawamura, P.; Ferrare, R. A.; Knobelspiesse, K. D.
2015-12-01
There is considerable interest in retrieving aerosol effective radius, number concentration and refractive index from lidar measurements of extinction and backscatter at several wavelengths. The 3 backscatter + 2 extinction (3β+2α) combination is particularly important since the planned NASA Aerosol-Clouds-Ecosystem (ACE) mission recommends this combination of measurements. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β+2α measurements since 2012. Here we develop a deeper understanding of the information content and sensitivities of the 3β+2α system in terms of the aerosol microphysical parameters of interest. We determine best-case results using a retrieval-free methodology: we calculate information content and uncertainty metrics from Optimal Estimation techniques using only a simplified forward-model look-up table, with no explicit inversion. Simplifications include spherical particles, mono-modal log-normal size distributions, and wavelength-independent refractive indices. Since we use only the forward model with no retrieval, our results are applicable as a best case for all existing retrievals. Retrieval-dependent errors due to mismatch between the assumptions and true atmospheric aerosols are not included. The sensitivity metrics allow for identifying (1) the information content of the measurements versus a priori information; (2) best-case error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors, wherein different retrieval parameters are not independently captured by the measurements. These results suggest that even in the best case, this retrieval system is underdetermined. Recommendations are given for addressing cross-talk between effective radius and number concentration. A potential solution to the under-determination problem is a combined active (lidar) and passive (polarimeter) retrieval, which is the subject of a newly funded NASA project by our team.
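The Optimal Estimation metrics named above (information content versus the prior, best-case error bars, cross-talk) all follow from a forward-model Jacobian and two covariance matrices. The sketch below uses an invented 5x4 Jacobian in place of the 3β+2α look-up table; the covariances are likewise illustrative.

```python
import numpy as np

# Rodgers-style information-content metrics: given a forward-model
# Jacobian K, measurement-error covariance S_e, and a priori covariance
# S_a, compute the posterior covariance, averaging kernel, and degrees
# of freedom for signal. All matrices here are invented stand-ins.

rng = np.random.default_rng(3)
n_meas, n_state = 5, 4          # e.g. 5 lidar channels, 4 microphysical params
K = rng.standard_normal((n_meas, n_state))   # stand-in Jacobian
S_e = np.diag(np.full(n_meas, 0.05 ** 2))    # measurement-error covariance
S_a = np.diag(np.full(n_state, 1.0))         # a priori covariance

S_e_inv = np.linalg.inv(S_e)
S_a_inv = np.linalg.inv(S_a)

# Posterior ("retrieval") covariance and averaging kernel
S_post = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
A = S_post @ K.T @ S_e_inv @ K               # averaging kernel matrix
dofs = np.trace(A)                           # degrees of freedom for signal

# Best-case error bars are the square roots of the posterior variances;
# off-diagonal structure in A flags cross-talk between parameters.
post_sigma = np.sqrt(np.diag(S_post))
```

A trace of A well below the state dimension is exactly the "underdetermined even in the best case" signature the study reports.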
GPS Water Vapor Tomography: First results from the ESCOMPTE Field Experiment
NASA Astrophysics Data System (ADS)
Masson, F.; Champollion, C.; Bouin, M.-N.; Walpersdorf, A.; van Baelen, J.; Doerflinger, E.; Bock, O.
2003-04-01
We have developed tomographic software to model the spatial distribution of tropospheric water vapor from GPS data. First we present simulations based on a real GPS station distribution and simple tropospheric models, which demonstrate the potential of the method. Second, we apply the software to the ESCOMPTE data. During the ESCOMPTE field experiment, a dense network of 17 dual-frequency GPS receivers was operated for two weeks within a 20 km x 20 km area around Marseille (Southern France). The network extends from sea level to the top of the Etoile chain (~700 m high). The input data are the slant delay values obtained by combining the estimated zenith delay values with the horizontal gradients. The effects of the initial tropospheric water vapor model, the number and thickness of the model layers, the a priori model and data covariances, and some other parameters will be discussed. Simultaneously, a water vapor radiometer, solar spectrometer, Raman lidar and radiosondes have been deployed to obtain a data set usable for comparison with the tomographic inversion results and validation of the method. Comparisons with meteorological models (MesoNH - Meteo-France) will be shown.
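The inversion at the heart of such tomography treats each slant delay as a line integral of wet refractivity over voxels and regularizes it with an a priori model and covariances. The sketch below uses an invented ray geometry and covariances, not the ESCOMPTE network configuration.

```python
import numpy as np

# Sketch of the regularized least-squares step of GPS water vapor
# tomography: slant delays d = G x, where G holds the path length of
# each ray through each voxel and x is wet refractivity per voxel.
# Geometry, covariances, and the prior are invented stand-ins.

rng = np.random.default_rng(2)
n_rays, n_vox = 30, 9                   # rays and voxels (e.g. a small grid)
G = rng.random((n_rays, n_vox))         # path length of each ray per voxel
x_true = np.linspace(30.0, 5.0, n_vox)  # refractivity decreasing with height
d = G @ x_true                          # synthetic slant wet delays

x_prior = np.full(n_vox, 20.0)          # a priori refractivity model
C_d_inv = np.eye(n_rays) / 0.01         # inverse data covariance
C_m_inv = np.eye(n_vox) / 100.0         # inverse a priori model covariance

# Bayesian / Tikhonov solution of
#   min (Gx - d)^T C_d^-1 (Gx - d) + (x - x0)^T C_m^-1 (x - x0)
lhs = G.T @ C_d_inv @ G + C_m_inv
rhs = G.T @ C_d_inv @ d + C_m_inv @ x_prior
x_hat = np.linalg.solve(lhs, rhs)
```

The relative weighting of the two covariance terms is exactly the "a priori model and data covariance" sensitivity the abstract says will be discussed: a tighter model covariance pulls the solution toward the prior where rays are sparse.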
NASA Astrophysics Data System (ADS)
Ong, P. E.; Huong, Audrey K. C.
2017-08-01
This paper presents the use of a point spectroscopy system to determine one's transcutaneous bilirubin level using a modified Beer-Lambert model and a purpose-developed fitting routine. The technique requires a priori knowledge of the extinction coefficients of bilirubin and hemoglobin in the wavelength range of 440-500 nm for prediction of the required parameter value. This work was conducted on different skin sites of six healthy Asians, namely the thenar region of the palm of the hand, the back of the hand, and the posterior and anterior forearm. The results revealed the lowest mean transcutaneous bilirubin concentration of 0.44±0.3 g/l for the palm site, while the highest bilirubin level of 0.98±0.2 g/l was estimated for the posterior forearm. These values were also compared with those presented in the literature. The study found considerably good consistency in the values predicted for different subjects, especially at the thenar region of the palm. This work concluded that the proposed system and technique may serve as an alternative means of noncontact and noninvasive measurement of one's transcutaneous bilirubin level at the palm site.
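In its linear form, the modified Beer-Lambert decomposition over 440-500 nm reduces to a least-squares unmixing of known extinction spectra plus a flat loss term. The extinction curves below are smooth invented shapes standing in for tabulated bilirubin and hemoglobin coefficients, and the concentrations are arbitrary.

```python
import numpy as np

# Sketch of a modified Beer-Lambert fit: measured attenuation A(wl) is
# modeled as a linear mix of bilirubin and hemoglobin extinction
# spectra plus a wavelength-independent scattering-loss term G, solved
# by least squares. Extinction shapes and "true" values are invented.

wl = np.linspace(440.0, 500.0, 31)                 # wavelengths (nm)
eps_bili = np.exp(-((wl - 460.0) / 25.0) ** 2)     # stand-in extinction shape
eps_hb = 0.5 + 0.004 * (wl - 440.0)                # stand-in extinction shape

# A(wl) = eps_b*c_b + eps_h*c_h + G  (path length folded into c's)
c_bili, c_hb, G_true = 0.8, 1.5, 0.2               # "true" values (arb. units)
A = eps_bili * c_bili + eps_hb * c_hb + G_true     # synthetic attenuation

M = np.column_stack([eps_bili, eps_hb, np.ones_like(wl)])
coef, *_ = np.linalg.lstsq(M, A, rcond=None)
c_bili_hat, c_hb_hat, G_hat = coef
```

With noise-free synthetic data the unmixing is exact; on real spectra the fitting routine must additionally cope with noise and spectral overlap between the chromophores.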