RECURSIVE PARAMETER ESTIMATION OF HYDROLOGIC MODELS
Proposed is a nonlinear filtering approach to recursive parameter estimation of conceptual watershed response models in state-space form. The conceptual model state is augmented by the vector of free parameters, which are to be estimated from input-output data, and the extended Kal...
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimal training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
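The bound-narrowing idea described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (a toy quadratic misfit, made-up target parameters, uniform sampling), not the authors' LOBARE implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_misfit(theta):
    # Hypothetical stand-in for a model-vs-observation misfit function.
    target = np.array([0.3, -1.2, 2.5])
    return np.sum((theta - target) ** 2, axis=1)

def narrow_bounds(lower, upper, n_samples=200, keep_frac=0.2, n_iter=10):
    """Iteratively shrink parameter bounds around the fittest samples,
    in the spirit of a localized Bayesian recursive search."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    for _ in range(n_iter):
        # sample candidate parameter sets uniformly inside current bounds
        samples = rng.uniform(lower, upper, size=(n_samples, lower.size))
        fitness = toy_misfit(samples)
        # keep the best fraction and let their extent define the new bounds
        best = samples[np.argsort(fitness)[: int(keep_frac * n_samples)]]
        lower, upper = best.min(axis=0), best.max(axis=0)
    return lower, upper

lo, hi = narrow_bounds([-5, -5, -5], [5, 5, 5])
```

After a few iterations the bounds contract around the best-fitting region, which is the behavior the abstract describes for the "parent" bounds.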
NASA Technical Reports Server (NTRS)
Choudhury, A. K.; Djalali, M.
1975-01-01
In the recursive method proposed here, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation but from certain other equations. These differential equations are of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Numerical experience with the initial value formulation showed some computational savings, and the method is less vulnerable to loss of positive definiteness of the covariance matrix.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
NASA Astrophysics Data System (ADS)
Duong, Van-Huan; Bastawrous, Hany Ayad; Lim, KaiChin; See, Khay Wai; Zhang, Peng; Dou, Shi Xue
2015-11-01
This paper deals with the contradiction between simplicity and accuracy of LiFePO4 battery state estimation in the electric vehicle (EV) battery management system (BMS). State of charge (SOC) and state of health (SOH) are normally obtained by estimating the open circuit voltage (OCV) and the internal resistance of the equivalent electrical circuit model of the battery, respectively. The difficulties of parameter estimation arise from the parameters' complicated variations and different dynamics, which require sophisticated algorithms to estimate multiple parameters simultaneously. This, however, demands heavy computational resources. In this paper, we propose a novel technique which employs a simplified model and multiple adaptive forgetting factors recursive least-squares (MAFF-RLS) estimation to accurately capture the real-time variations and the different dynamics of the parameters while retaining computational simplicity. The validity of the proposed method is verified through two standard driving cycles, namely the Urban Dynamometer Driving Schedule and the New European Driving Cycle. The proposed method yields experimental results that not only estimate the SOC with an absolute error of less than 2.8% but also characterize the battery model parameters accurately.
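The recursion behind forgetting-factor RLS, which the abstract extends to multiple adaptive forgetting factors, can be sketched on a toy battery model. All numbers below (OCV, resistance drift, noise level) are illustrative assumptions, and a single fixed forgetting factor is used instead of the paper's adaptive, per-parameter ones:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.98                     # single fixed forgetting factor (the paper
                               # adapts one per parameter)
theta = np.array([3.0, 0.05])  # initial guess [OCV (V), resistance (ohm)]
P = np.eye(2) * 1e2

ocv_true = 3.3
for t in range(1000):
    r_true = 0.02 + 0.01 * (t / 1000)          # slowly drifting resistance
    i = rng.uniform(0.5, 5.0)                  # discharge current (A)
    v = ocv_true - r_true * i + 1e-3 * rng.normal()  # terminal voltage
    phi = np.array([1.0, -i])                  # regressor: v = OCV - R*i
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)              # gain
    theta = theta + k * (v - phi @ theta)      # parameter update
    P = (P - np.outer(k, Pphi)) / lam          # covariance update
```

With lam < 1 the estimator discounts old data, so theta[1] tracks the drifting resistance instead of averaging over its whole history.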
NASA Astrophysics Data System (ADS)
Xu, Zheyao; Qi, Naiming; Chen, Yukun
2015-12-01
Spacecraft simulators are widely used to study the dynamics, guidance, navigation, and control of a spacecraft on the ground. A spacecraft simulator can have three rotational degrees of freedom by using a spherical air bearing to simulate a frictionless, micro-gravity space environment. The moment of inertia and center of mass are essential for control system design of ground-based three-axis spacecraft simulators; unfortunately, they cannot be known precisely. This paper presents two approaches to estimate the inertia parameters: a recursive least-squares (RLS) approach with a tracking differentiator (TD), and an extended Kalman filter (EKF) method. The TD filters the noise coupled with the measured signals and generates derivatives of the measured signals. A combination of two TD filters in series yields the angular accelerations required by RLS (TD-TD-RLS). Another method, which avoids estimating the angular accelerations, uses the integrated form of the dynamics equation; an extended TD (ETD) filter, which can also generate the integral of a function of the signals, is presented for this purpose (denoted ETD-RLS). States and inertia parameters are estimated simultaneously using the EKF. The observability is analyzed. All proposed methods are illustrated by simulations and experiments.
Recursive Bayesian electromagnetic refractivity estimation from radar sea clutter
NASA Astrophysics Data System (ADS)
Vasudevan, Sathyanarayanan; Anderson, Richard H.; Kraut, Shawn; Gerstoft, Peter; Rogers, L. Ted; Krolik, Jeffrey L.
2007-04-01
Estimation of the range- and height-dependent index of refraction over the sea surface facilitates prediction of ducted microwave propagation loss. In this paper, refractivity estimation from radar clutter returns is performed using a Markov state space model for microwave propagation. Specifically, the parabolic approximation for numerical solution of the wave equation is used to formulate the refractivity from clutter (RFC) problem within a nonlinear recursive Bayesian state estimation framework. RFC under this nonlinear state space formulation is more efficient than global fitting of refractivity parameters when the total number of range-varying parameters exceeds the number of basis functions required to represent the height-dependent field at a given range. Moreover, the range-recursive nature of the estimator can be easily adapted to situations where the refractivity modeling changes at discrete ranges, such as at a shoreline. A fast range-recursive solution for obtaining range-varying refractivity is achieved by using sequential importance sampling extensions to state estimation techniques, namely, the forward and Viterbi algorithms. Simulation and real data results from radar clutter collected off Wallops Island, Virginia, are presented which demonstrate the ability of this method to produce propagation loss estimates that compare favorably with ground truth refractivity measurements.
Recursive least square vehicle mass estimation based on acceleration partition
NASA Astrophysics Data System (ADS)
Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong
2014-05-01
Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal and removes the road grade from the longitudinal dynamics of the vehicle. Two recursive least squares method (RLSM) schemes are then proposed to estimate the driving resistance and the mass independently, based on an acceleration partition under different conditions. A 6-DOF dynamic model of a four in-wheel-motor vehicle is built to assist in the design of the algorithm and in the setting of the parameters. The acceleration limits are determined not only to reduce the estimation error but also to ensure enough data for the resistance estimation and the mass estimation in some critical situations. A modification of the algorithm to improve the mass estimate is also discussed. Experimental data on asphalt road, plastic runway, gravel road, and sloping roads are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the estimation error to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of road surface and gradient changes and that it may be valuable in engineering applications.
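The acceleration-partition idea can be illustrated with a deliberately loose sketch: low-acceleration samples mostly inform the resistance estimate, while high-acceleration samples mostly inform the mass. The thresholds, gains, and vehicle numbers below are invented for illustration, and simple exponential updates stand in for the paper's RLSM schemes:

```python
import numpy as np

rng = np.random.default_rng(4)
m_true, f_res_true = 1500.0, 400.0   # kg, N (illustrative values)

m_hat, f_hat = 1000.0, 0.0           # initial guesses [mass, resistance]
a_lo, a_hi = 0.2, 0.8                # acceleration partition thresholds
alpha = 0.05                         # scalar update gain

for _ in range(3000):
    a = rng.uniform(0.0, 2.0)        # longitudinal acceleration (m/s^2)
    F = m_true * a + f_res_true + 10.0 * rng.normal()  # measured drive force
    if a < a_lo:
        # low acceleration: force is dominated by the driving resistance
        f_hat += alpha * ((F - m_hat * a) - f_hat)
    elif a > a_hi:
        # high acceleration: inertia dominates, solve for mass
        m_hat += alpha * ((F - f_hat) / a - m_hat)
    # samples in between are ignored, mimicking the partition
```

The two estimators are coupled through each other's current estimate, but the coupling is a contraction here, so both converge to the true values.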
COMPARISON OF RECURSIVE ESTIMATION TECHNIQUES FOR POSITION TRACKING RADIOACTIVE SOURCES
K. MUSKE; J. HOWSE
2000-09-01
This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.
Robust recursive impedance estimation for automotive lithium-ion batteries
NASA Astrophysics Data System (ADS)
Fridholm, Björn; Wik, Torsten; Nilsson, Magnus
2016-02-01
Recursive algorithms, such as recursive least squares (RLS) or Kalman filters, are commonly used in battery management systems to estimate the electrical impedance of the battery cell. However, these algorithms can in some cases run into problems with bias and even divergence of the estimates. This article illuminates problems that can arise in online estimation using recursive methods, and lists modifications to handle these issues. An algorithm is also proposed that estimates the impedance by separating the problem into two parts: one estimating the ohmic resistance with an RLS approach, and another in which the dynamic effects are estimated using an adaptive Kalman filter (AKF), novel in the battery field. The algorithm produces robust estimates of the ohmic resistance and time constant of the battery cell in closed loop with SoC estimation, as demonstrated both in simulations and with experimental data from a lithium-ion battery cell.
Recursive bias estimation for high dimensional smoothers
Hengartner, Nicolas W; Matzner-Lober, Eric; Cornillon, Pierre-Andre
2008-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. We apply our method to simulated and real data and show that it compares favorably with existing procedures.
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even today, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or with position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both; they will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF and will be made available to interested users.
Vision-based recursive estimation of rotorcraft obstacle locations
NASA Technical Reports Server (NTRS)
Leblanc, D. J.; Mcclamroch, N. H.
1992-01-01
The authors address vision-based passive ranging during nap-of-the-earth (NOE) rotorcraft flight. They consider the problem of estimating the relative location of identifiable features on nearby obstacles, assuming a sequence of noisy camera images and imperfect measurements of the camera's translation and rotation. An iterated extended Kalman filter is used to provide recursive range estimation. The correspondence problem is simplified by predicting and tracking each feature's image within the Kalman filter framework. Simulation results are presented which show convergent estimates and generally successful feature point tracking. Estimation performance degrades for features near the optical axis and for accelerating motions. Image tracking is also sensitive to angular rate.
A Precision Recursive Estimate for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B.
1980-01-01
A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite to satellite tracking types as well as satellite altimetry. It was tested on simulated data which contained significant modeling errors and the results clearly demonstrate the superiority of the program compared to batch estimation.
Recursive Estimation for the Tracking of Radioactive Sources
Howse, J.W.; Muske, K.R.; Ticknor, L.O.
1999-02-01
This paper describes a recursive estimation algorithm used for tracking the physical location of radioactive sources in real time as they are moved around in a facility. The algorithm is a nonlinear least squares estimation that simultaneously minimizes the change in the source location and the deviation between measurements and model predictions. The measurements used to estimate position consist of four count rates reported by four different gamma ray detectors. There is an uncertainty in the source location due to the variance of the detected count rate. This work represents part of a suite of tools which will partially automate security and safety assessments, allow some assessments to be done remotely, and provide additional sensor modalities with which to make assessments.
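A nonlinear least squares position estimate from count rates can be sketched with an inverse-square measurement model and Gauss-Newton iterations. The detector layout, source strength, and noise-free counts below are assumptions for illustration, not the authors' setup:

```python
import numpy as np

# Hypothetical detector positions in a 10 m x 10 m room and a source at (3, 4)
detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
S = 1000.0                                   # source strength (counts * m^2)
src = np.array([3.0, 4.0])
counts = S / np.sum((detectors - src) ** 2, axis=1)  # inverse-square model

# Gauss-Newton iterations on the nonlinear least-squares residuals
p = np.array([5.0, 5.0])                     # initial position guess
for _ in range(20):
    d2 = np.sum((detectors - p) ** 2, axis=1)
    r = counts - S / d2                      # residuals: measured - predicted
    J = 2.0 * S * (p - detectors) / d2[:, None] ** 2  # d r_i / d p
    p = p - np.linalg.solve(J.T @ J, J.T @ r)         # Gauss-Newton step
```

With noise-free counts and an interior starting guess the iteration converges to the source position; with real count statistics one would weight the residuals by the count-rate variance, as the abstract implies.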
NASA Astrophysics Data System (ADS)
Ni, Zhiyu; Mu, Ruinan; Xun, Guangbin; Wu, Zhigang
2016-01-01
The rotation of a spacecraft's flexible appendage may cause changes in modal parameters. For this time-varying system, the computational cost of the frequently used singular value decomposition (SVD) identification method is high. Some control problems, such as self-adaptive control, need the latest modal parameters to update the controller parameters in time. In this paper, the projection approximation subspace tracking (PAST) recursive algorithm is applied as an alternative method to identify the time-varying modal parameters. This method avoids the SVD by signal subspace projection and improves the computational efficiency. To verify the ability of this recursive algorithm in spacecraft modal parameter identification, a spacecraft model with a rapidly rotating appendage, the Soil Moisture Active/Passive (SMAP) satellite, is established, and the time-varying modal parameters of the satellite are identified recursively by designing the input and output signals. The results illustrate that this recursive algorithm can obtain the modal parameters at high signal-to-noise ratio (SNR) and has better computational efficiency than the SVD method. Moreover, to improve the identification precision of this recursive algorithm at low SNR, wavelet de-noising is used to decrease the effect of noise.
NASA Astrophysics Data System (ADS)
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to a multicollinearity problem.
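The equivalence described here (RLS as a Kalman filter for a constant state) can be checked numerically. Below is a minimal sketch with identity dynamics and zero process noise, whose recursive estimate is compared against batch least squares on the same data; the regression problem itself is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Kalman filter for a constant state: identity dynamics, no process noise.
# Each scalar observation y_t = x_t . theta + noise updates the estimate;
# this recursion is exactly recursive least squares.
n = 3
theta_true = np.array([1.0, -2.0, 0.5])
theta = np.zeros(n)
P = np.eye(n) * 1e3            # large initial covariance ~ diffuse prior
R = 0.1 ** 2                   # measurement noise variance

X, Y = [], []
for _ in range(100):
    x = rng.normal(size=n)
    y = x @ theta_true + 0.1 * rng.normal()
    # Kalman update (the predict step is trivial: theta and P are unchanged)
    k = P @ x / (x @ P @ x + R)          # Kalman gain
    theta = theta + k * (y - x @ theta)  # innovation update
    P = P - np.outer(k, x) @ P           # covariance update
    X.append(x); Y.append(y)

# Batch least squares on the same data for comparison
theta_batch = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0]
```

Up to the (negligible) effect of the diffuse prior, the recursive and batch estimates coincide, which is the point of the derivation.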
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness. PMID:25576579
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2009-12-01
The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, keeping their neck, hip, and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD, and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD, and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
Recursive starlight and bias estimation for high-contrast imaging with an extended Kalman filter
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.
2016-01-01
For imaging faint exoplanets and disks, a coronagraph-equipped observatory needs focal plane wavefront correction to recover high contrast. The most efficient correction methods iteratively estimate the stellar electric field and suppress it with active optics. The estimation requires several images from the science camera per iteration. To maximize the science yield, it is desirable both to have fast wavefront correction and to utilize all the correction images for science target detection. Exoplanets and disks are incoherent with their stars, so a nonlinear estimator is required to estimate both the incoherent intensity and the stellar electric field. Such techniques assume a high level of stability found only on space-based observatories and possibly ground-based telescopes with extreme adaptive optics. In this paper, we implement a nonlinear estimator, the iterated extended Kalman filter (IEKF), to enable fast wavefront correction and a recursive, nearly-optimal estimate of the incoherent light. In Princeton's High Contrast Imaging Laboratory, we demonstrate that the IEKF allows wavefront correction at least as fast as with a Kalman filter and provides the most accurate detection of a faint companion. The nonlinear IEKF formalism allows us to pursue other strategies such as parameter estimation to improve wavefront correction.
Recursive bias estimation for high dimensional regression smoothers
Hengartner, Nicolas W; Cornillon, Pierre-Andre; Matzner-Lober, Eric
2009-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to the Sobolev space H(ν) with ν > d/2. We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
Attitude estimation of earth orbiting satellites by decomposed linear recursive filters
NASA Technical Reports Server (NTRS)
Kou, S. R.
1975-01-01
Attitude estimation of earth orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noise was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time and discrete-time systems are derived. Using this accurate estimate of the spacecraft attitude, a state-variable feedback controller may be designed to satisfy high system performance requirements.
Recursive estimation techniques for detection of small objects in infrared image data
NASA Astrophysics Data System (ADS)
Zeidler, J. R.; Soni, T.; Ku, W. H.
1992-04-01
This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
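The matched filter for colored noise mentioned above has the classic whitened form t(x) = sᵀR⁻¹x. A one-dimensional sketch with AR(1) ("colored") noise and an assumed Gaussian-blob target illustrates the detection statistic; the noise model, target shape, and amplitude are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
N, rho = 64, 0.9
s = np.exp(-0.5 * ((np.arange(N) - 32) / 2.0) ** 2)  # small Gaussian-blob target

# Autocorrelation matrix of stationary AR(1) noise with coefficient rho
idx = np.arange(N)
R = rho ** np.abs(idx[:, None] - idx[None, :])

def ar1_noise():
    """Draw one realization of unit-variance AR(1) noise."""
    x = np.zeros(N)
    x[0] = rng.normal()
    for t in range(1, N):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return x

w_mf = np.linalg.solve(R, s)       # whitened matched-filter weights R^{-1} s
w_mf /= np.sqrt(s @ w_mf)          # normalize to unit output variance on noise

stat_noise = w_mf @ ar1_noise()                  # noise-only statistic
stat_target = w_mf @ (ar1_noise() + 5.0 * s)     # target-present statistic
```

On noise alone the statistic is zero-mean with unit variance; when the target is present its mean shifts by the amplitude times the whitened signal energy, which is what a threshold test exploits.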
NASA Technical Reports Server (NTRS)
Sidar, M.
1976-01-01
The problem of identifying constant and variable parameters in multi-input, multi-output, linear and nonlinear systems is considered, using the maximum likelihood approach. An iterative algorithm, leading to recursive identification and tracking of the unknown parameters and the noise covariance matrix, is developed. Agile tracking and accurate, unbiased parameter identification are obtained. Necessary conditions for a globally asymptotically stable identification process are provided; the conditions proved to be useful and efficient. Among the different cases studied, the stability derivatives of an aircraft were identified, and some of the results are shown as examples.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
Parameter estimating state reconstruction
NASA Technical Reports Server (NTRS)
George, E. B.
1976-01-01
Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
NASA Astrophysics Data System (ADS)
Calcagnile, Lucio M.; Galatolo, Stefano; Menconi, Giulia
2010-12-01
We numerically test the method of non-sequential recursive pair substitutions to estimate the entropy of an ergodic source. We compare its performance with other classical methods to estimate the entropy (empirical frequencies, return times, and Lyapunov exponent). We have considered as a benchmark for the methods several systems with different statistical properties: renewal processes, dynamical systems provided and not provided with a Markov partition, and slow or fast decay of correlations. Most experiments are supported by rigorous mathematical results, which are explained in the paper.
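One of the classical benchmark methods the abstract compares against, entropy estimation from empirical block frequencies, is easy to sketch for an i.i.d. binary source. The source and parameters below are illustrative, and this is the empirical-frequency baseline, not the pair-substitution method itself:

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.3
x = (rng.random(200000) < p).astype(np.uint8)  # Bernoulli(p) binary source

def block_entropy_rate(seq, k):
    """Empirical entropy of k-blocks, divided by k (bits per symbol)."""
    # pack each overlapping k-block into an integer code
    codes = np.zeros(len(seq) - k + 1, dtype=np.int64)
    for j in range(k):
        codes = codes * 2 + seq[j : len(seq) - k + 1 + j]
    counts = np.bincount(codes, minlength=2 ** k)
    q = counts[counts > 0] / counts.sum()
    return -(q * np.log2(q)).sum() / k

h_true = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # true entropy rate
h_est = block_entropy_rate(x, 8)
```

For an i.i.d. source the k-block entropy over k converges to the entropy rate; for sources with long-range correlations this convergence is slow, which is one motivation for the alternatives the paper studies.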
NASA Astrophysics Data System (ADS)
Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong
2016-07-01
This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems subject to randomly occurring deception attacks. The stochastic nonlinearity, described by statistical means, covers several classes of well-studied nonlinearities as special cases. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The purpose of the addressed state estimation problem is to design an estimator that minimizes the upper bound on the estimation error covariance at each sampling instant. Such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme takes the recursive form of two Riccati-like difference equations. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.
Parameter estimation through ignorance.
Du, Hailiang; Smith, Leonard A
2012-07-01
Dynamical modeling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A relatively simple method of parameter estimation for nonlinear systems is introduced, based on variations in the accuracy of probability forecasts. It is illustrated on the logistic map, the Henon map, and the 12-dimensional Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This approach is easier to implement in practice than alternative nonlinear methods based on the geometry of attractors or the ability of the model to shadow the observations. Direct measures of inadequacy in the model, the "implied ignorance," and the information deficit are introduced. PMID:23005513
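A minimal sketch of the approach: select the logistic-map parameter by minimizing the mean ignorance, -log2 of the forecast density at the outcome, of kernel-dressed ensemble forecasts. The noise level, ensemble size, and kernel dressing are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def logistic(x, a):
    return a * x * (1.0 - x)

def mean_ignorance(a, obs, noise=0.01, members=200, rng=None):
    """Average ignorance score, -log2 p(outcome), of one-step ensemble
    forecasts made with candidate parameter value a."""
    rng = np.random.default_rng(0) if rng is None else rng
    scores = []
    for x_prev, x_next in zip(obs[:-1], obs[1:]):
        # ensemble forecast from perturbed analyses of the previous observation
        ens = logistic(x_prev + noise * rng.standard_normal(members), a)
        # Gaussian kernel estimate of the forecast density at the verification
        h = np.sqrt(ens.var() + noise ** 2)
        dens = np.mean(np.exp(-0.5 * ((x_next - ens) / h) ** 2)
                       / (h * np.sqrt(2 * np.pi)))
        scores.append(-np.log2(max(dens, 1e-300)))
    return float(np.mean(scores))

# synthetic truth from the logistic map with a = 3.9, observed with small noise
rng = np.random.default_rng(1)
x, truth = 0.3, []
for _ in range(100):
    x = logistic(x, 3.9)
    truth.append(x)
obs = np.array(truth) + 0.01 * rng.standard_normal(100)

grid = np.linspace(3.7, 4.0, 31)
best = min(grid, key=lambda a: mean_ignorance(a, obs, rng=np.random.default_rng(2)))
print(round(best, 2))  # the score's minimum sits near the true value 3.9
```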
Phenological Parameters Estimation Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.
2010-01-01
The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE or an equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.
NASA Astrophysics Data System (ADS)
Li, Lei; Yang, Kecheng; Li, Wei; Wang, Wanyan; Guo, Wenping; Xia, Min
2016-07-01
Conventional regularization methods have been widely used for estimating particle size distribution (PSD) in single-angle dynamic light scattering, but they cannot be used directly in multiangle dynamic light scattering (MDLS) measurements for lack of accurate angular weighting coefficients, which greatly affect PSD determination; moreover, none of the regularization methods performs well for both unimodal and multimodal distributions. In this paper, we propose a recursive regularization method, the Recursion Nonnegative Tikhonov-Phillips-Twomey (RNNT-PT) algorithm, for estimating the weighting coefficients and PSD from MDLS data. This is a self-adaptive algorithm that distinguishes characteristics of PSDs and chooses the optimal inversion method from the Nonnegative Tikhonov (NNT) and Nonnegative Phillips-Twomey (NNPT) regularization algorithms efficiently and automatically. In simulations, the proposed algorithm was able to estimate the PSDs more accurately than the classical regularization methods, was stable against random noise, and was adaptable to both unimodal and multimodal distributions. Furthermore, we found that the six-angle analysis in the 30-130° range is an optimal angle set for both unimodal and multimodal PSDs.
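The nonnegative Tikhonov building block can be sketched with a projected-gradient solver; the exponential kernel and the particle-size axis below are illustrative toys, not the actual MDLS physics or the paper's RNNT-PT recursion.

```python
import numpy as np

def nonneg_tikhonov(A, g, lam, iters=5000):
    """Nonnegative Tikhonov inversion: min ||A f - g||^2 + lam ||f||^2
    subject to f >= 0, via projected gradient descent."""
    AtA = A.T @ A + lam * np.eye(A.shape[1])
    Atg = A.T @ g
    step = 1.0 / np.linalg.norm(AtA, 2)      # 1/L step size
    f = np.zeros(A.shape[1])
    for _ in range(iters):
        f = np.maximum(0.0, f - step * (AtA @ f - Atg))  # gradient + projection
    return f

# toy DLS-like kernel: exponential decays mixing a size distribution
sizes = np.linspace(50, 500, 40)                 # hypothetical diameters (nm)
taus = np.linspace(1e-5, 1e-3, 60)               # lag times (s)
A = np.exp(-np.outer(taus, 1.0 / sizes) * 5e4)   # illustrative kernel only
f_true = np.exp(-0.5 * ((sizes - 200) / 30) ** 2)  # unimodal PSD
g = A @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(len(taus))

f_est = nonneg_tikhonov(A, g, lam=1e-3)
print(f_est.min() >= 0.0)   # the estimate respects nonnegativity
```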
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions, we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
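The classic scale-up estimator underlying this work is simple enough to sketch directly: the hidden-population size is roughly the total population times the ratio of hidden-group alters reported to total network size. The weighted variant below is an assumed illustrative form, not the paper's new estimator or its trimming procedure.

```python
def nsum_estimate(m, c, population):
    """Classic network scale-up estimator: hidden size ~ N * sum(m) / sum(c),
    where m[i] = hidden-group members respondent i knows and
    c[i] = respondent i's estimated personal network size."""
    return population * sum(m) / sum(c)

def nsum_weighted(m, c, w, population):
    """Sample-weighted variant (assumed form): weight each respondent's
    contribution to both sums by the survey weight w[i]."""
    num = sum(wi * mi for wi, mi in zip(w, m))
    den = sum(wi * ci for wi, ci in zip(w, c))
    return population * num / den

m = [2, 0, 1, 3, 0]             # hidden-group alters known by each respondent
c = [300, 250, 400, 500, 150]   # estimated personal network sizes
print(nsum_estimate(m, c, population=1_900_000))  # 1.9e6 * 6/1600 = 7125.0
```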
Parameter estimation of hydrologic models using data assimilation
NASA Astrophysics Data System (ADS)
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
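The bound-narrowing idea can be sketched generically: sample parameter sets within the current bounds, keep the fittest fraction, and shrink the "parent" bounds to enclose the survivors. This is a simplified sketch in the spirit of the approach, with a toy quadratic loss standing in for a hydrologic model; it is not the LoBaRE algorithm itself.

```python
import numpy as np

def narrow_bounds(loss, lower, upper, rounds=10, samples=200, keep=0.1, rng=None):
    """Iterative bound narrowing: sample uniformly within the current bounds,
    keep the fittest fraction, and update the bounds to enclose the survivors."""
    rng = np.random.default_rng(0) if rng is None else rng
    lower, upper = np.array(lower, float), np.array(upper, float)
    for _ in range(rounds):
        pts = rng.uniform(lower, upper, size=(samples, len(lower)))
        fit = np.array([loss(p) for p in pts])
        best = pts[np.argsort(fit)[:max(1, int(keep * samples))]]
        lower, upper = best.min(axis=0), best.max(axis=0)
    return lower, upper

# toy "model": quadratic loss with optimum at (2.0, -1.0)
loss = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
lo, hi = narrow_bounds(loss, [-10, -10], [10, 10])
print(lo, hi)  # bounds contract tightly around (2, -1)
```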
Recursive Bayesian filtering framework for lithium-ion cell state estimation
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Gambhire, Priya; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang
2016-02-01
A robust battery management system is critical for safe and reliable electric vehicle operation. One of the most important functions of the battery management system is to accurately estimate the battery state using minimal on-board instrumentation. This paper presents a recursive Bayesian filtering framework for on-board battery state estimation by assimilating measurables like cell voltage, current and temperature with physics-based reduced order model (ROM) predictions. The paper proposes an improved particle filtering algorithm for implementation of the framework, and compares its performance against the unscented Kalman filter. Functionality of the proposed framework is demonstrated for a commercial NCA/C cell state estimation at different operating conditions including constant current discharge at room and low temperatures, hybrid power pulse characterization (HPPC) and urban driving schedule (UDDS) protocols. In addition to accurate voltage prediction, the electrochemical nature of the ROM enables drawing of physical insights into the cell behavior. Advantages of using electrode concentrations over conventional Coulomb counting for accessible capacity estimation are discussed. In addition to the mean state estimation, the framework also provides estimation of the associated confidence bounds that are used to establish predictive capability of the proposed framework.
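A bootstrap particle filter, the basic recursive Bayesian scheme the paper builds on, can be sketched for a scalar state. The linear "state-of-charge-like" process and voltage-like measurement below are toy assumptions, not the electrochemical ROM; note how the particle cloud yields both a mean estimate and confidence bounds.

```python
import numpy as np

def particle_filter(y, f, h, q, r, n=500, rng=None):
    """Bootstrap particle filter: propagate particles through the process
    model f, weight by the Gaussian measurement likelihood around h, resample."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.uniform(0.0, 1.0, n)                     # initial particle cloud
    means, lo95, hi95 = [], [], []
    for yk in y:
        x = f(x) + q * rng.standard_normal(n)        # predict
        w = np.exp(-0.5 * ((yk - h(x)) / r) ** 2)    # weight by likelihood
        w /= w.sum()
        x = rng.choice(x, size=n, p=w)               # resample
        means.append(x.mean())
        lo95.append(np.percentile(x, 2.5))
        hi95.append(np.percentile(x, 97.5))
    return np.array(means), np.array(lo95), np.array(hi95)

# toy truth: a level draining linearly; voltage-like measurement h(x) = 3 + x
truth = 1.0 - 0.01 * np.arange(50)
rng = np.random.default_rng(1)
y = 3.0 + truth + 0.02 * rng.standard_normal(50)
m, lo, hi = particle_filter(y, f=lambda x: x - 0.01, h=lambda x: 3.0 + x,
                            q=0.005, r=0.02)
print(round(m[-1], 3))  # mean estimate tracks the true final level, 0.51
```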
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented aboard an aircraft in real time.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1999-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight for indirect adaptive or reconfigurable control. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle (HARV) were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode's natural frequency, using control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements, and could be implemented aboard an aircraft in real time.
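The frequency-domain equation-error idea in the two abstracts above can be sketched for a scalar system: accumulate a running Fourier transform of input and state at a few analysis frequencies, then solve the least-squares problem jωX = aX + bU. The first-order system and frequency set are toy assumptions, not an aircraft model.

```python
import numpy as np

def recursive_dft(signal, freqs, dt):
    """Running Fourier transform: at each sample, add x[n]*exp(-j*2*pi*f*t)
    to the accumulator for every tracked frequency (the recursion)."""
    acc = np.zeros(len(freqs), complex)
    t = 0.0
    for xn in signal:
        acc += xn * np.exp(-1j * 2 * np.pi * freqs * t)
        t += dt
    return acc * dt

# simulate a scalar linear system xdot = a*x + b*u with true a = -2, b = 1
dt, n = 0.005, 4000
t_axis = np.arange(n) * dt
u = np.sin(2 * np.pi * 0.5 * t_axis) + np.sin(2 * np.pi * 1.5 * t_axis)
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (-2.0 * x[k] + 1.0 * u[k])

freqs = np.array([0.5, 1.0, 1.5])        # analysis band (Hz)
X = recursive_dft(x, freqs, dt)
U = recursive_dft(u, freqs, dt)
# equation error in the frequency domain: jw X = a X + b U, solved by LS
reg = np.column_stack([X, U])
jwX = 1j * 2 * np.pi * freqs * X
theta = np.linalg.lstsq(np.vstack([reg.real, reg.imag]),
                        np.concatenate([jwX.real, jwX.imag]), rcond=None)[0]
print(theta)  # close to the true values a = -2, b = 1
```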
NASA Technical Reports Server (NTRS)
Hocking, W. K.
1989-01-01
The objective of any radar experiment is to determine as much as possible about the entities which scatter the radiation. This review discusses many of the various parameters which can be deduced in a radar experiment, and also critically examines the procedures used to deduce them. Methods for determining the mean wind velocity, the RMS fluctuating velocities, turbulence parameters, and the shapes of the scatterers are considered. Complications with these determinations are discussed. It is seen throughout that a detailed understanding of the shape and cause of the scatterers is important in order to make better determinations of these various quantities. Finally, some other parameters, which are less easily acquired, are considered. For example, it is noted that momentum fluxes due to buoyancy waves and turbulence can be determined, and on occasions radars can be used to determine stratospheric diffusion coefficients and even temperature profiles in the atmosphere.
On the structural limitations of recursive digital filters for base flow estimation
NASA Astrophysics Data System (ADS)
Su, Chun-Hsu; Costelloe, Justin F.; Peterson, Tim J.; Western, Andrew W.
2016-06-01
Recursive digital filters (RDFs) are widely used for estimating base flow from streamflow hydrographs, and various forms of RDFs have been developed based on different physical models. Numerical experiments have been used to objectively evaluate their performance, but they have not been sufficiently comprehensive to assess a wide range of RDFs. This paper extends these studies to understand the limitations of a generalized RDF method as a pathway for future field calibration. Two formalisms are presented to generalize most existing RDFs, allowing systematic tuning of their complexity. The RDFs with variable complexity are evaluated collectively in a synthetic setting, using modeled daily base flow produced by Li et al. (2014) from a range of synthetic catchments simulated with HydroGeoSphere. Our evaluation reveals that there are optimal RDF complexities in reproducing base flow simulations but shows that there is an inherent physical inconsistency within the RDF construction. Even under the idealized setting where true base flow data are available to calibrate the RDFs, there is persistent disagreement between true and estimated base flow over catchments with small base flow components, low saturated hydraulic conductivity of the soil and larger surface runoff. The simplest explanation is that low base flow "signal" in the streamflow data is hard to distinguish, although more complex RDFs can improve upon the simpler Eckhardt filter at these catchments.
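One widely used member of the RDF family discussed above, the two-parameter Eckhardt filter, is short enough to sketch; the synthetic hydrograph and parameter values are illustrative assumptions.

```python
def eckhardt_filter(q, alpha=0.98, bfi_max=0.8):
    """Eckhardt two-parameter recursive digital filter:
    b[k] = ((1-BFImax)*alpha*b[k-1] + (1-alpha)*BFImax*q[k]) / (1-alpha*BFImax),
    constrained so base flow never exceeds streamflow."""
    b = [q[0] * bfi_max]   # simple initialization assumption
    for qk in q[1:]:
        bk = ((1 - bfi_max) * alpha * b[-1] + (1 - alpha) * bfi_max * qk) \
             / (1 - alpha * bfi_max)
        b.append(min(bk, qk))
    return b

# synthetic hydrograph: slow recession plus two storm pulses
q = [10 * 0.99 ** k for k in range(60)]
for k in (10, 30):
    for j in range(8):
        q[k + j] += 20 * 0.5 ** j

bf = eckhardt_filter(q)
print(all(b <= qk for b, qk in zip(bf, q)))  # True: base flow <= streamflow
```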
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS update employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.
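The differencing idea can be illustrated on a one-dimensional problem with a known solution: estimate the sensitivity of the optimizer to a problem parameter by re-solving at perturbed parameter values and central-differencing. The crude gradient-descent solver below is a stand-in for RQP, purely for illustration.

```python
def minimize_1d(f, x0, iters=200, step=0.01):
    """Crude gradient descent with a numerical derivative (stand-in for RQP)."""
    x = x0
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

def optimum_sensitivity(f, p, dp=1e-4):
    """Sensitivity dx*/dp of the optimal point, by central-differencing
    two re-solves at p +/- dp."""
    xp = minimize_1d(lambda x: f(x, p + dp), 0.0)
    xm = minimize_1d(lambda x: f(x, p - dp), 0.0)
    return (xp - xm) / (2 * dp)

# test problem with known solution x*(p) = p/2, so dx*/dp = 0.5
f = lambda x, p: (2 * x - p) ** 2
print(round(optimum_sensitivity(f, p=1.0), 3))  # 0.5
```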
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem-computing y(t) given known parameters-has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem-estimation of parameters given measured y(t)-is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature. PMID:23297775
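The scaled sensitivity coefficients mentioned above, X'_p = p * dy/dp, are easy to compute by finite differences; comparable magnitudes and non-proportional shapes across parameters indicate identifiability. The first-order survival model below is a standard food-science form used purely as an example.

```python
import numpy as np

def scaled_sensitivity(model, params, name, t, rel=1e-4):
    """Scaled sensitivity coefficient X'_p = p * dy/dp, by central differences
    on the named parameter."""
    p = dict(params)
    dp = rel * p[name]
    p[name] += dp
    y_plus = model(t, **p)
    p[name] -= 2 * dp
    y_minus = model(t, **p)
    return params[name] * (y_plus - y_minus) / (2 * dp)

# first-order microbial survival: log10(N/N0) = -t / D (D = decimal reduction time)
survival = lambda t, D: -t / D
t = np.linspace(0, 10, 5)
X_D = scaled_sensitivity(survival, {"D": 2.0}, "D", t)
print(X_D)  # analytically D * (t/D**2) = t/D, i.e. [0, 1.25, 2.5, 3.75, 5]
```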
User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B. P.
1982-01-01
PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.
NASA Astrophysics Data System (ADS)
Li, Qiang; Xing, Zisheng; Danielescu, Serban; Li, Sheng; Jiang, Yefang; Meng, Fan-Rui
2014-04-01
Estimation of baseflow and groundwater recharge rates is important for hydrological analysis and modelling. A new approach that combines a recursive digital filter (RDF) model with the conductivity mass balance (CMB) method is considered reliable for baseflow separation because the combined method takes advantage of the reduced data requirement of the RDF method and the reliability of the CMB method. However, it is not clear what the minimum data requirements are for producing acceptable estimates of the RDF model parameters. In this study, a 19-year record of stream discharge and water conductivity collected from the Black Brook Watershed (BBW), NB, Canada was used to test the combined baseflow separation method and assess the variability of the model parameters over seasons. The data requirements and potential bias in the estimated baseflow index (BFI) were evaluated using conductivity data for different seasons and/or resampled data segments at various sampling durations. Results indicated that data collected during the ground-frozen season are more suitable for estimating baseflow conductivity (Cbf) and data from the snow-melting period are more suitable for estimating runoff conductivity (Cro). Relative errors of baseflow estimation were inversely proportional to the number of conductivity data records. A minimum of six months of discharge and conductivity data is required to obtain reliable parameters for the current method with acceptable errors. We further found that the average annual recharge rate for the BBW was 322 mm over the past twenty years.
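The CMB side of the combined method has a simple closed form: the base flow fraction at each time step is (C - Cro) / (Cbf - Cro), clipped to the physical range. The short record and end-member conductivities below are hypothetical values for illustration only.

```python
def cmb_baseflow(q, c, c_bf, c_ro):
    """Conductivity mass balance separation: base flow = Q * (C - Cro)/(Cbf - Cro),
    with the fraction clipped to [0, 1]."""
    bf = []
    for qk, ck in zip(q, c):
        frac = (ck - c_ro) / (c_bf - c_ro)
        bf.append(min(max(frac, 0.0), 1.0) * qk)
    return bf

# hypothetical record: high conductivity in recession, diluted during a storm
q = [2.0, 2.0, 10.0, 6.0, 3.0, 2.2]      # discharge
c = [480, 470, 150, 260, 400, 460]       # stream conductivity (uS/cm)
bf = cmb_baseflow(q, c, c_bf=500.0, c_ro=100.0)
print([round(b, 2) for b in bf])  # [1.9, 1.85, 1.25, 2.4, 2.25, 1.98]
```

In the combined approach, such CMB estimates serve as the reference against which the RDF parameters are calibrated.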
Parameter Estimation Using VLA Data
NASA Astrophysics Data System (ADS)
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. For example, the estimated values for the temperature parameters are either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window, and in the second part a similar smoothing is applied to the estimated parameters.
A landscape-based cluster analysis using recursive search instead of a threshold parameter.
Gladwin, Thomas E; Vink, Matthijs; Mars, Roger B
2016-01-01
Cluster-based analysis methods in neuroimaging provide control of whole-brain false positive rates without the need to conservatively correct for the number of voxels and the associated false negative results. The current method defines clusters based purely on shapes in the landscape of activation, instead of requiring the choice of a statistical threshold that may strongly affect results. Statistical significance is determined using permutation testing, combining both size and height of activation. A method is proposed for dealing with relatively small local peaks. Simulations confirm the method controls the false positive rate and correctly identifies regions of activation. The method is also illustrated using real data. •A landscape-based method to define clusters in neuroimaging data avoids the need to pre-specify a threshold to define clusters.•The implementation of the method works as expected, based on simulated and real data.•The recursive method used for defining clusters, the method used for combining clusters, and the definition of the "value" of a cluster may be of interest for future variations. PMID:27489780
Estimation of pharmacokinetic model parameters.
Timcenko, A; Reich, D L; Trunfio, G
1995-01-01
This paper addresses the problem of estimating the depth of anesthesia in clinical practice where many drugs are used in combination. The aim of the project is to use pharmacokinetically-derived data to predict episodes of light anesthesia. The weighted linear combination of anesthetic drug concentrations was computed using a stochastic pharmacokinetic model. The clinical definition of light anesthesia was based on the hemodynamic consequences of autonomic nervous system responses to surgical stimuli. A rule-based expert system was used to review anesthesia records to determine instances of light anesthesia using hemodynamic criteria. It was assumed that light anesthesia was a direct consequence of the weighted linear combination of drug concentrations in the patient's body that decreased below a certain threshold. We augmented traditional two-compartment models with a stochastic component of anesthetics' concentrations to compensate for interpatient pharmacokinetic and pharmacodynamic variability. A cohort of 532 clinical anesthesia cases was examined and parameters of two-compartment pharmacokinetic models for 6 intravenously administered anesthetic drugs (fentanyl, thiopental, morphine, propofol, midazolam, ketamine) were estimated, as well as the parameters for 2 inhalational anesthetics (N2O and isoflurane). These parameters were then prospectively applied to 22 cases that were not used for parameter estimation, and the predictive ability of the pharmacokinetic model was determined. The goal of the study is the development of a pharmacokinetic model that will be useful in predicting light anesthesia in the clinically relevant circumstance where many drugs are used concurrently. PMID:8563327
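The deterministic core of a two-compartment model can be sketched with a simple forward simulation of the mass-balance ODEs; the rate constants and volume below are illustrative placeholders, not fitted values for any drug in the study.

```python
def two_compartment(dose, k10, k12, k21, v1, dt, steps):
    """Two-compartment pharmacokinetic model (bolus into central compartment):
    dA1/dt = -(k10 + k12) A1 + k21 A2
    dA2/dt =  k12 A1 - k21 A2
    Returns the central (plasma) concentration A1/V1 over time (Euler steps)."""
    a1, a2 = dose, 0.0
    conc = []
    for _ in range(steps):
        da1 = -(k10 + k12) * a1 + k21 * a2
        da2 = k12 * a1 - k21 * a2
        a1 += dt * da1
        a2 += dt * da2
        conc.append(a1 / v1)
    return conc

# illustrative constants only, not fitted values for any real drug
c = two_compartment(dose=100.0, k10=0.1, k12=0.05, k21=0.03,
                    v1=10.0, dt=0.1, steps=600)
print(c[0] > c[-1] > 0)  # True: concentration declines toward zero
```

Parameter estimation then amounts to fitting k10, k12, k21, and V1 so the simulated concentrations match measured ones.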
NASA Technical Reports Server (NTRS)
Harman, Richard R.
2006-01-01
The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.
Parameter estimation for transformer modeling
NASA Astrophysics Data System (ADS)
Cho, Sung Don
Large Power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. Steady-state excitation, and de-energization and re-energization transients
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) value of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. PMID:26871742
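The Levinson-Durbin recursion at the heart of this estimator solves the autoregressive normal equations from autocorrelation lags in O(order^2) time; a plain-Python sketch (applied here to a known AR(1) case rather than image data):

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: fit AR(order) coefficients from
    autocorrelation lags r[0..order]; returns (coefficients, prediction error)."""
    a = [0.0] * order
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                      # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1.0 - k * k)               # updated prediction-error power
    return a, err

# AR(1) with coefficient 0.9 has autocorrelation r[k] = 0.9**k
r = [0.9 ** k for k in range(3)]
a, err = levinson_durbin(r, order=2)
print([round(x, 6) for x in a], round(err, 6))  # [0.9, 0.0] 0.19
```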
Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation
NASA Astrophysics Data System (ADS)
Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao
2015-12-01
Spatial join among Web Feature Services (WFS) is time-consuming because many non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. Recursive partitioning is used to exploit the data skew of sub-areas and reduce data transmission via spatial semi-joins. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.
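The filtering-rate decision can be sketched with minimum bounding rectangles: the rate is the fraction of objects on one side whose MBR touches no MBR on the other side, and a high rate suggests a semi-join will pay off. The profitability threshold below is an assumed illustrative value.

```python
def mbr_intersects(a, b):
    """Axis-aligned MBR overlap test; rectangles are (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def filtering_rate(left_mbrs, right_mbrs):
    """Fraction of left objects whose MBR misses every right MBR; a high rate
    means a spatial semi-join would avoid transferring many objects."""
    misses = sum(1 for a in left_mbrs
                 if not any(mbr_intersects(a, b) for b in right_mbrs))
    return misses / len(left_mbrs)

left = [(0, 0, 1, 1), (5, 5, 6, 6), (9, 9, 10, 10)]
right = [(0.5, 0.5, 2, 2)]
rate = filtering_rate(left, right)
print(rate)                  # 2 of 3 left MBRs miss every right MBR
use_semi_join = rate > 0.5   # assumed profitability threshold
print(use_semi_join)
```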
ERIC Educational Resources Information Center
Olson, Alton T.
1989-01-01
Discusses the use of the recursive method for permutations of n objects and for a problem of making c cents in change using pennies and nickels when order is important. Presents a LOGO program for the examples. (YP)
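The two recursions described (a LOGO program in the original) translate directly; here is a Python rendering: count ordered ways to make c cents from pennies and nickels by recursing on the first coin chosen, and build permutations by recursing on the leading element.

```python
def ordered_change(c, coins=(1, 5)):
    """Count ordered ways (order matters) to make c cents from the given coins:
    recurse on the first coin chosen."""
    if c == 0:
        return 1
    return sum(ordered_change(c - coin, coins) for coin in coins if coin <= c)

def permutations(items):
    """All permutations of a list by recursion on the leading element."""
    if len(items) <= 1:
        return [items]
    return [[x] + rest
            for i, x in enumerate(items)
            for rest in permutations(items[:i] + items[i + 1:])]

print(ordered_change(7))             # 4 coin sequences sum to 7 cents
print(len(permutations([1, 2, 3])))  # 3! = 6
```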
Adaptable Iterative and Recursive Kalman Filter Schemes
NASA Technical Reports Server (NTRS)
Zanetti, Renato
2014-01-01
Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the go; a similar technique can be used for the IKF as well.
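The recursive-update idea can be sketched in simplified form: apply the measurement N times with the noise inflated to N*R, relinearizing the measurement model about the running estimate each pass. The nonlinear range measurement and fixed N below are toy assumptions, not the paper's adaptable scheme.

```python
import numpy as np

def recursive_update(x, P, z, h, H_jac, R, N=10):
    """Simplified recursive-update sketch: N small measurement updates with
    inflated noise N*R, relinearizing h about the running estimate."""
    for _ in range(N):
        H = H_jac(x)
        S = H @ P @ H.T + N * R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - h(x))
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# nonlinear range measurement of a 2D position (assumed toy problem)
h = lambda x: np.array([np.hypot(x[0], x[1])])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])

x0, P0 = np.array([1.0, 1.0]), np.eye(2) * 0.5
z = np.array([2.0])               # observed range
R = np.array([[0.01]])
x, P = recursive_update(x0, P0, z, h, H_jac, R, N=20)
print(round(float(np.hypot(x[0], x[1])), 2))  # estimated range pulled near 2.0
```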
SURFACE VOLUME ESTIMATES FOR INFILTRATION PARAMETER ESTIMATION
Technology Transfer Automated Retrieval System (TEKTRAN)
Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. These calculations are often performed by estimating upstream depth with a normal depth formula. That assumption can result in significant volume estimation errors when upstream flow d...
Watumull, Jeffrey; Hauser, Marc D; Roberts, Ian G; Hornstein, Norbert
2014-01-01
It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language-the faculty of language in the narrow sense (FLN)-is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of such output, whether infinite or finite, embedded or unembedded-existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled-and potentially unbounded-expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive. PMID:24409164
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
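The RQP machinery of the abstract is not reproduced here, but the differencing idea alone can be sketched on a hypothetical unconstrained problem whose optimum is known in closed form; the test function and step size are illustrative assumptions.

```python
def xstar(p):
    # Minimizer of f(x; p) = (x - p**2)**2 + 0.1*x**2, found by Newton's
    # method; the problem is quadratic in x, so this converges exactly.
    x = 0.0
    for _ in range(5):
        grad = 2.0 * (x - p**2) + 0.2 * x
        x -= grad / 2.2              # f'' = 2.2 (constant)
    return x

def sensitivity(p, h=1e-4):
    # central-difference estimate of dx*/dp, re-solving at p +/- h
    return (xstar(p + h) - xstar(p - h)) / (2.0 * h)

# analytic check: x*(p) = p**2 / 1.1, so dx*/dp at p = 1 is 2/1.1
s = sensitivity(1.0)
```

The practical caveat raised by the abstract applies here too: if perturbing p changed the active constraint set of a constrained problem, x*(p) would have a kink and the central difference would straddle it, which is exactly the case their deflection algorithm is designed to handle.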
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
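The core trick, treating unknown parameters as extra states of the filter, can be shown on a toy problem. The decay model, tuning values, and noise-free data below are illustrative assumptions, not the paper's heat-shock or gene-regulation models.

```python
import numpy as np

# Joint state/parameter EKF sketch: estimate the decay rate k of
# x' = -k*x from observations of x, by augmenting the state to [x, k].
dt, k_true = 0.1, 0.8
xs = [2.0]
for _ in range(150):
    xs.append(xs[-1] * (1.0 - k_true * dt))   # Euler-discretized decay
ys = xs[1:]                                    # noise-free data (sketch)

s = np.array([2.0, 0.3])           # augmented state [x, k]; poor k guess
P = np.diag([0.1, 1.0])
Q = np.diag([1e-10, 1e-10])        # k modeled as (nearly) constant
R = 1e-6
H = np.array([[1.0, 0.0]])         # only x is observed
for y in ys:
    # predict: x <- x*(1 - k*dt), k unchanged; F is the Jacobian
    F = np.array([[1.0 - s[1] * dt, -s[0] * dt],
                  [0.0,              1.0]])
    s = np.array([s[0] * (1.0 - s[1] * dt), s[1]])
    P = F @ P @ F.T + Q
    # update
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    s = s + (K * (y - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
k_hat = s[1]
```

The cross-covariance between x and k, built up by the Jacobian F, is what lets innovations in the observed state correct the unobserved parameter.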
Method for estimating solubility parameter
NASA Technical Reports Server (NTRS)
Lawson, D. D.; Ingham, J. D.
1973-01-01
Semiempirical correlations have been developed between solubility parameters and refractive indices for series of model hydrocarbon compounds and organic polymers. Measurement of intermolecular forces is useful for assessment of material compatibility, glass-transition temperature, and transport properties.
Parameter estimation by genetic algorithms
Reese, G.M.
1993-11-01
Test/analysis correlation, or structural identification, is a process of reconciling differences between the structural dynamic models constructed analytically (using the finite element (FE) method) and experimentally (from modal test). This is a methodology for assessing the reliability of the computational model, and is very important in building models of high integrity, which may be used as predictive tools in design. Both the analytic and experimental models evaluate the same quantities: the natural frequencies (or eigenvalues, ωᵢ) and the mode shapes (or eigenvectors, φᵢ). In this paper, selected frequencies are reconciled in the two models by modifying physical parameters in the FE model. A variety of parameters may be modified, such as the stiffness of a joint member or the thickness of a plate. Engineering judgement is required to identify important frequencies, and to characterize the uncertainty of the model design parameters.
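A genetic algorithm for this kind of frequency matching can be sketched on a one-parameter toy problem: tune a stiffness k so that the model frequency sqrt(k/m) matches a "measured" one. The problem and the GA settings (elitist selection, blend crossover, decaying mutation) are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(1)
m, f_meas = 2.0, 5.0                        # mass, "measured" frequency
fitness = lambda k: -((k / m) ** 0.5 - f_meas) ** 2

pop = [random.uniform(1.0, 200.0) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)               # blend crossover
        child += random.gauss(0.0, 2.0) * 0.95 ** gen  # decaying mutation
        child = max(child, 0.0)             # keep stiffness physical
        children.append(child)
    pop = parents + children
k_best = max(pop, key=fitness)              # true value is m * f**2 = 50
```

Because the elite parents are carried over unchanged, the best candidate never degrades between generations; the decaying mutation trades early exploration for late refinement.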
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, M. W.
1978-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. A library of FORTRAN subroutines was developed to facilitate analyses of a variety of estimation problems. An easy-to-use, multi-purpose set of algorithms that are reasonably efficient and use a minimal amount of computer storage is presented. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
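The claim about normal equations has a standard numerical basis: orthogonal factorizations see the condition number of A once, while forming AᵀA squares it. A small numpy illustration (not Bierman's FORTRAN routines; the Vandermonde test matrix is an assumption):

```python
import numpy as np

# Ill-conditioned least-squares fit: orthogonal-factorization solve
# (lstsq, QR/SVD based) vs. forming the normal equations A^T A x = A^T b
t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 8)                # monomial basis: badly conditioned
x_true = np.ones(8)
b = A @ x_true                     # consistent right-hand side

x_qr, *_ = np.linalg.lstsq(A, b, rcond=None)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

err_qr = np.linalg.norm(x_qr - x_true)   # error grows with cond(A)
err_ne = np.linalg.norm(x_ne - x_true)   # error grows with cond(A)**2
```

This conditioning argument is the same one behind square-root information filters, which carry a triangular factor instead of a covariance or information matrix.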
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles
Nam, Kanghyun
2015-01-01
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246
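The recursive least squares scheme the abstract mentions can be sketched generically. The regression model below (a line y = θ₁u + θ₂ with noise-free data) is an illustrative assumption standing in for the paper's tire-force model.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One recursive least-squares step with forgetting factor lam:
    # phi is the regressor vector, y the scalar measurement.
    K = P @ phi / (lam + phi @ P @ phi)        # gain
    theta = theta + K * (y - phi @ theta)      # correct by the residual
    P = (P - np.outer(K, phi) @ P) / lam       # update inverse-Gram proxy
    return theta, P

# Toy identification of y = 2*u + 0.5 (noise-free for the sketch)
rng = np.random.default_rng(3)
theta, P = np.zeros(2), np.eye(2) * 1000.0     # vague prior via large P
for _ in range(50):
    u = rng.uniform(-1.0, 1.0)
    theta, P = rls_update(theta, P, np.array([u, 1.0]), 2.0 * u + 0.5)
```

With lam < 1 older samples are exponentially discounted, which is what makes RLS usable for slowly varying quantities such as cornering stiffness.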
Estimators for overdetermined linear Stokes parameters
NASA Astrophysics Data System (ADS)
Furey, John
2016-05-01
The mathematics of estimating overdetermined polarization parameters is worked out within the context of the inverse modeling of linearly polarized light, and as the primary new result the general solution is presented for estimators of the linear Stokes parameters from any number of measurements. The utility of the general solution is explored in several illustrative examples including the canonical case of two orthogonal pairs. In addition to the actual utility of these estimators in Stokes analysis, the pedagogical discussion illustrates many of the considerations involved in solving the ill-posed problem of overdetermined parameter estimation. Finally, suggestions are made for using a rapidly rotating polarizer for continuously updating polarization estimates.
Estimation of ground motion parameters
Boore, David M.; Joyner, W.B.; Oliver, A.A.; Page, R.A.
1978-01-01
Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. A subset of the data (from the San Fernando earthquake) is used to assess the effects of structural size and of geologic site conditions on peak motions recorded at the base of structures. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. The peak acceleration tends to be less and the peak velocity and displacement tend to be greater on the average at the base of large structures than at the base of small structures. In the distance range used in the regression analysis (15-100 km) the values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Some consideration is given to the prediction of ground motions at close distances where there are insufficient recorded data points. As might be expected from the lack of data, published relations for predicting peak horizontal acceleration give widely divergent estimates at close distances (three well known relations predict accelerations from 0.33 g to slightly over 1 g at a distance of 5 km from a magnitude 6.5 earthquake). After considering the physics of the faulting process, the few available data close to faults, and the modifying effects of surface topography, at the present time it would be difficult to accept estimates less than about 0.8 g, 110 cm/s, and 40 cm, respectively, for the mean values of peak acceleration, velocity, and displacement at rock sites.
ESTIM: A parameter estimation computer program: Final report
Hills, R.G.
1987-08-01
The computer code ESTIM enables subroutine versions of existing simulation codes to be used to estimate model parameters. Nonlinear least squares techniques are used to find the parameter values that result in a best fit between measurements made in the simulation domain and the simulation code's prediction of these measurements. ESTIM utilizes the nonlinear least squares code DQED (Hanson and Krogh (1982)) to handle the optimization aspects of the estimation problem. In addition to providing weighted least squares estimates, ESTIM provides a propagation of variance analysis. A subroutine version of COYOTE (Gartling (1982)) is provided. The use of ESTIM with COYOTE allows one to estimate the thermal property model parameters that result in the best agreement (in a least squares sense) between internal temperature measurements and COYOTE's predictions of these internal temperature measurements. We demonstrate the use of ESTIM through several example problems which utilize the subroutine version of COYOTE.
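Neither DQED nor COYOTE is reproduced here; the two ingredients the abstract names, a nonlinear least squares fit plus a propagation-of-variance estimate, can be sketched on an invented thermal-style decay model (all numbers and the model are assumptions).

```python
import numpy as np

# Gauss-Newton fit of the rate k in T(t) = T0*exp(-k*t) to noisy data,
# followed by a first-order propagation-of-variance estimate for k.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 30)
T0, k_true = 10.0, 0.7
y = T0 * np.exp(-k_true * t) + rng.normal(0.0, 0.05, t.size)

k = 0.3                                    # deliberately poor start
for _ in range(30):                        # Gauss-Newton iterations
    m = T0 * np.exp(-k * t)
    r = y - m                              # residuals
    Jk = -T0 * t * np.exp(-k * t)          # dm/dk (model Jacobian)
    k += np.sum(Jk * r) / np.sum(Jk * Jk)  # normal-equation step (1 param)

s2 = np.sum(r ** 2) / (t.size - 1)         # residual variance
var_k = s2 / np.sum(Jk * Jk)               # linearized variance of k_hat
```

The variance formula is the one-parameter case of cov = s² (JᵀJ)⁻¹, the usual linearized propagation-of-variance result for least squares.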
Estimation of ground motion parameters
Boore, David M.; Oliver, Adolph A., III; Page, Robert A.; Joyner, William B.
1978-01-01
Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. Data from the San Fernando earthquake are examined to assess the effects of associated structures and of geologic site conditions on peak recorded motions. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity, and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. Values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Three recently published relationships for predicting peak horizontal acceleration are compared and discussed. Considerations are reviewed relevant to ground motion predictions at close distances where there are insufficient recorded data points.
Estimation for large non-centrality parameters
NASA Astrophysics Data System (ADS)
Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco
2016-06-01
We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with an almost scalar matrix, using the study of estimability after validation of these models. In the validation of these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large we obtain good estimators for λ and α, so there is estimability. Thus, we are interested in obtaining a lower bound for the non-centrality parameter. In this context we use, for the statistical inference, inducing pivot variables (see Ferreira et al. 2013) and asymptotic linearity, introduced by Mexia & Oliveira 2011, to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly statistically significant.
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Robust parameter estimation method for bilinear model
NASA Astrophysics Data System (ADS)
Ismail, Mohd Isfahani; Ali, Hazlina; Yahaya, Sharipah Soaad S.
2015-12-01
This paper proposes a method of parameter estimation for the bilinear model, specifically the BL(1,0,1,1) model, without and with the presence of an additive outlier (AO). In this study, the parameters of the BL(1,0,1,1) model are estimated using the nonlinear least squares (LS) method and also through robust approaches. The LS method employs the Newton-Raphson (NR) iterative procedure to estimate the parameters of the bilinear model, but LS estimation can be affected by the occurrence of outliers. As a solution, this study proposes robust approaches to deal with the problem of outliers, specifically AOs, in the BL(1,0,1,1) model. In the robust estimation method, we propose to modify the NR procedure with robust scale estimators. We introduce two robust scale estimators, namely the normalized median absolute deviation (MADn) and Tn, in the linear autoregressive model AR(1), which are adequate and suitable for the bilinear BL(1,0,1,1) model. The estimated parameter value from the AR(1) model is used as an initial value in estimating the parameter values of the BL(1,0,1,1) model. The performance of the LS and robust estimation methods in estimating the coefficients of the BL(1,0,1,1) model is investigated through a simulation study and assessed in terms of bias. Numerical results show that the robust estimation method performs better than the LS method in estimating the parameters, both without and with the presence of an AO.
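Of the two scale estimators named, MADn is standard and easy to sketch (the Tn estimator is omitted here). The example data are made up; the 1.4826 factor makes MADn consistent for the standard deviation under normality.

```python
import statistics

def madn(xs):
    # Normalized median absolute deviation: a robust scale estimate,
    # consistent for the standard deviation of Gaussian data (1.4826).
    med = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - med) for x in xs)

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
with_outlier = clean + [1000.0]        # one additive outlier

sd = statistics.stdev(with_outlier)    # classical scale: blown up
robust = madn(with_outlier)            # barely moves
```

This insensitivity to a single gross error is exactly why substituting a robust scale into an iterative procedure like Newton-Raphson can keep an AO from dominating the fit.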
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J.; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data. PMID:24363476
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
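The motion-estimation core of the structure tensor approach can be sketched on a synthetic two-frame sequence; the pattern, grid size, and the uniform-flow assumption below are all illustrative, not the study's dust data.

```python
import numpy as np

# Two synthetic frames: a grating translating 1 pixel/frame in x, plus a
# static vertical grating so both gradient directions carry information.
x = np.arange(64, dtype=float)
frame = lambda t: (np.sin(0.3 * (x[None, :] - t))
                   + np.sin(0.25 * x[:, None]))
I0, I1 = frame(0.0), frame(1.0)

Ix = np.gradient(I0, axis=1)           # spatial gradients
Iy = np.gradient(I0, axis=0)
It = I1 - I0                           # temporal difference

# Structure-tensor (least-squares) solve for a uniform flow (u, v):
# minimize sum over the frame of (Ix*u + Iy*v + It)**2
J = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
rhs = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
u, v = np.linalg.solve(J, rhs)         # expect u near 1, v near 0
```

Replacing the brightness-constancy term with a transport-plus-decay model is the kind of extension the abstract describes for diffusion constants and decay rates.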
A Scale-Invariant Treatment for Recursive Path Models.
ERIC Educational Resources Information Center
McDonald, Roderick P.; And Others
1993-01-01
A reparameterization is formulated that yields estimates of scale-invariant parameters in recursive path models with latent variables, and (asymptotically) correct standard errors, without the use of constrained optimization. The method is based on the logical structure of the reticular action model. (Author)
Parameter estimation for distributed parameter models of complex, flexible structures
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr.
1991-01-01
Distributed parameter modeling of structural dynamics has been limited to simple spacecraft configurations because of the difficulty of handling several distributed parameter systems linked at their boundaries. Although other computer software is able to generate such models of complex, flexible spacecraft, it is unfortunately not suitable for parameter estimation. Because of this limitation, the computer software PDEMOD is being developed for the express purposes of modeling, control system analysis, parameter estimation and structure optimization. PDEMOD is capable of modeling complex, flexible spacecraft which consist of a three-dimensional network of flexible beams and rigid bodies. Each beam has bending (Bernoulli-Euler or Timoshenko) in two directions, torsion, and elongation degrees of freedom. The rigid bodies can be attached to the beam ends at any angle or body location. PDEMOD is also capable of performing parameter estimation based on matching experimental modal frequencies and static deflection test data. The underlying formulation and the results of using this approach for test data of the Mini-MAST truss will be discussed. The resulting accuracy of the parameter estimates when using such limited data can impact significantly the instrumentation requirements for on-orbit tests.
DEB parameters estimation for Mytilus edulis
NASA Astrophysics Data System (ADS)
Saraiva, S.; van der Meer, J.; Kooijman, S. A. L. M.; Sousa, T.
2011-11-01
The potential of DEB theory to simulate an organism's life cycle has been demonstrated on numerous occasions. However, its applicability requires parameter estimates that are not easily obtained by direct observation. During the last years various attempts were made to estimate the main DEB parameters for bivalve species. The estimation procedure was until now, however, rather ad hoc and based on additional assumptions that were not always consistent with DEB theory principles. A new approach has now been developed - the covariation method - based on simultaneous minimization of the weighted sum of squared deviations between data sets and model predictions in one single procedure. This paper presents the implementation of this method to estimate the DEB parameters for the blue mussel Mytilus edulis, using several data sets from the literature. After comparison with previous trials we conclude that the parameter set obtained by the covariation method leads to a better fit between model and observations, with potentially more consistency and robustness.
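The "one single procedure" idea can be illustrated with two toy data sets that share parameters: instead of fitting each set separately, all residuals enter one weighted least-squares problem. The linear models, weights, and noise-free data below are assumptions for the sketch, not DEB equations.

```python
import numpy as np

# Two hypothetical data sets sharing parameters (a, b), fitted jointly
# by minimizing a single weighted sum of squared deviations.
t = np.linspace(0.0, 10.0, 21)
a_true, b_true = 1.5, 0.4
y1 = a_true * t                       # data set 1: depends on a only
y2 = a_true + b_true * t              # data set 2: shares a, adds b

w1, w2 = 1.0 / y1.var(), 1.0 / y2.var()    # weight each set by its spread
A = np.vstack([np.column_stack([t, np.zeros_like(t)]) * np.sqrt(w1),
               np.column_stack([np.ones_like(t), t]) * np.sqrt(w2)])
y = np.concatenate([y1 * np.sqrt(w1), y2 * np.sqrt(w2)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
```

The joint fit lets every data set constrain every shared parameter, which is the consistency advantage claimed for the covariation method over sequential ad hoc estimation.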
Effects of Structural Errors on Parameter Estimates
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Paper introduces concept of near equivalence in probability between different parameters or mathematical models of physical system. One in series of papers, each establishes different part of rigorous theory of mathematical modeling based on concepts of structural error, identifiability, and equivalence. This installment focuses upon effects of additive structural errors on degree of bias in parameter estimates.
MODFLOW-Style parameters in underdetermined parameter estimation.
D'Oria, Marco; Fienen, Michael N
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes. PMID:21352210
Reionization history and CMB parameter estimation
Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y.
2013-05-01
We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.
GEODYN- ORBITAL AND GEODETIC PARAMETER ESTIMATION
NASA Technical Reports Server (NTRS)
Putney, B.
1994-01-01
The Orbital and Geodetic Parameter Estimation program, GEODYN, possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fits the entire set of estimation parameters. The estimation problem can be divided into two parts: the orbit prediction problem, and the parameter estimation problem. GEODYN solves these two problems by employing Cowell's method for integrating the orbit and a Bayesian least squares statistical estimation procedure for parameter estimation. GEODYN has found a wide range of applications including determination of definitive orbits, tracking instrumentation calibration, satellite operational predictions, and geodetic parameter estimation, such as the estimations for global networks of tracking stations. The orbit prediction problem may be briefly described as calculating for some later epoch the new conditions of state for the satellite, given a set of initial conditions of state for some epoch, and the disturbing forces affecting the motion of the satellite. The user is required to supply only the initial conditions of state and GEODYN will provide the forcing function and integrate the equations of motion of the satellite. Additionally, GEODYN performs time and coordinate transformations to insure the continuity of operations. Cowell's method of numerical integration is used to solve the satellite equations of motion and the variational partials for force model parameters which are to be adjusted. This method uses predictor-corrector formulas for the equations of motion and corrector formulas only for the variational partials. The parameter estimation problem is divided into three separate parts: 1) instrument measurement modeling and partial derivative computation, 2) data error correction, and 3) statistical estimation of the parameters. Since all of the measurements modeled by
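The predict-then-correct structure of a Cowell-type integrator can be illustrated on a toy problem. This sketch uses a 2-step Adams-Bashforth predictor with an Adams-Moulton (trapezoidal) corrector on x'' = -x, whose exact solution is cos(t); GEODYN's actual formulas are much higher order, so only the structure carries over.

```python
import numpy as np

def deriv(state):
    # toy "equations of motion": x'' = -x as a first-order system
    x, v = state
    return np.array([v, -x])

h = 0.01
steps = int(2 * np.pi / h)            # integrate over one full period
states = [np.array([1.0, 0.0])]
states.append(states[0] + h * deriv(states[0]))   # bootstrap the multistep scheme

for n in range(1, steps):
    f_n, f_nm1 = deriv(states[n]), deriv(states[n - 1])
    pred = states[n] + h * (1.5 * f_n - 0.5 * f_nm1)   # AB2 predictor
    corr = states[n] + h * 0.5 * (deriv(pred) + f_n)   # AM2 corrector
    states.append(corr)

x_final = states[-1][0]               # should be close to cos(2*pi) = 1
```

In GEODYN the same pattern is applied to the satellite equations of motion (predictor-corrector) and, corrector-only, to the variational partials.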
Estimation of Damage Preference From Strike Parameters
Canavan, G.H.
1998-09-11
Estimation of an opponent's damage preference is illustrated by discussing the sensitivity of stability indices and strike parameters to it and inverting the results to study the sensitivity of estimates to uncertainties in strikes. Costs and stability indices do not generally have the monotonicity and sensitivity needed to support accurate estimation. First and second strikes do. Second strikes also have proportionality, although they are not unambiguously interpretable. First strikes are observable and have the greatest overall power for estimation, whether linear or numerical solutions are used.
Estimation of saxophone reed parameters during playing.
Muñoz Arancón, Alberto; Gazengel, Bruno; Dalmont, Jean-Pierre; Conan, Ewen
2016-05-01
An approach for the estimation of single reed parameters during playing, using an instrumented mouthpiece and an iterative method, is presented. Different physical models describing the reed tip movement are tested in the estimation method. The uncertainties of the sensors installed on the mouthpiece and the limits of the estimation method are studied. A tenor saxophone reed is mounted on this mouthpiece connected to a cylinder, played by a musician, and characterized at different dynamic levels. Results show that the method can be used to estimate the reed parameters with a small error for low and medium sound levels (piano and mezzoforte dynamic levels). The analysis reveals that the complexity of the physical model describing the reed behavior must increase with dynamic levels. For medium level dynamics, the most relevant physical model assumes that the reed is an oscillator with non-linear stiffness and damping, the effect of mass (inertia) being very small. PMID:27250168
Parameter inference with estimated covariance matrices
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2016-02-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
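The modified likelihood described above has a simple closed form: with a covariance estimated from n_s simulations, the Gaussian is replaced by a multivariate t-like expression L(x|mu) proportional to |S|^(-1/2) (1 + chi2/(n_s-1))^(-n_s/2), where chi2 uses the estimated covariance. A minimal sketch (normalization constants independent of mu are dropped in both log-likelihoods):

```python
import numpy as np

def log_like_modified(x, mu, S, n_s):
    # Sellentin-Heavens-style likelihood for an S estimated from n_s simulations
    d = x - mu
    chi2 = d @ np.linalg.solve(S, d)
    _, logdetS = np.linalg.slogdet(S)
    return -0.5 * logdetS - 0.5 * n_s * np.log1p(chi2 / (n_s - 1))

def log_like_gauss(x, mu, S):
    # standard Gaussian log-likelihood for comparison
    d = x - mu
    chi2 = d @ np.linalg.solve(S, d)
    _, logdetS = np.linalg.slogdet(S)
    return -0.5 * logdetS - 0.5 * chi2

rng = np.random.default_rng(0)
p = 3
S = np.eye(p)                        # stand-in for an estimated covariance
x = rng.normal(size=p)
ll = log_like_modified(x, np.zeros(p), S, n_s=50)
```

As n_s grows the modified likelihood recovers the Gaussian one, and for finite n_s its heavier tails encode the extra uncertainty from estimating S.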
LISA Parameter Estimation using Numerical Merger Waveforms
NASA Technical Reports Server (NTRS)
Thorpe, J. I.; McWilliams, S.; Baker, J.
2008-01-01
Coalescing supermassive black holes are expected to provide the strongest sources for gravitational radiation detected by LISA. Recent advances in numerical relativity provide a detailed description of the waveforms of such signals. We present a preliminary study of LISA's sensitivity to waveform parameters using a hybrid numerical/analytic waveform describing the coalescence of two equal-mass, nonspinning black holes. The Synthetic LISA software package is used to simulate the instrument response and the Fisher information matrix method is used to estimate errors in the waveform parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with a total redshifted mass of 10^6 solar masses at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two when the merger was included.
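The Fisher-matrix forecast used above follows a standard recipe that is easy to sketch: numerical derivatives of the model signal with respect to each parameter give F_ij = sum_k (dh_k/dp_i)(dh_k/dp_j)/sigma^2, and 1-sigma parameter errors follow from the square root of the diagonal of F^-1. The "waveform" here is a toy sinusoid, not a numerical-relativity merger waveform.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400)
sigma_noise = 0.1                     # assumed per-sample noise level

def model(amp, freq):
    # toy signal standing in for the waveform model
    return amp * np.sin(2 * np.pi * freq * t)

def fisher(params, eps=1e-6):
    p = np.asarray(params, dtype=float)
    derivs = []
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        # central finite difference of the model w.r.t. parameter i
        derivs.append((model(*(p + dp)) - model(*(p - dp))) / (2 * eps))
    D = np.array(derivs)
    return (D @ D.T) / sigma_noise ** 2

F = fisher([1.0, 5.0])                # amplitude 1.0, frequency 5.0
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma parameter errors
```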
Bayesian parameter estimation for effective field theories
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Klco, Natalie; Furnstahl, Richard; Phillips, Daniel; Thapaliya, Arbin
2015-10-01
We present a procedure based on Bayesian statistics for effective field theory (EFT) parameter estimation from experimental or lattice data. The extraction of low-energy constants (LECs) is guided by physical principles such as naturalness in a quantifiable way and various sources of uncertainty are included by the specification of Bayesian priors. Special issues for EFT parameter estimation are demonstrated using representative model problems, and a set of diagnostics is developed to isolate and resolve these issues. We apply the framework to the extraction of the LECs of the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
A novel multistage estimation of signal parameters
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1990-01-01
A multistage estimation scheme is presented for estimating the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc. Such a situation arises, for example, in the case of the Global Positioning System (GPS). In the proposed scheme, the first-stage estimator operates as a coarse estimator of the frequency and its derivatives, resulting in higher rms estimation errors but with a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency (an event termed cycle slip). The second stage of the estimator operates on the error signal available from the first stage, refining the overall estimates, and in the process also reduces the number of cycle slips. The first-stage algorithm is a modified least-squares algorithm operating on the differential signal model and referred to as differential least squares (DLS). The second-stage algorithm is an extended Kalman filter, which yields the estimate of the phase as well as refining the frequency estimate. A major advantage of the proposed scheme is a reduction in the threshold for the received carrier power-to-noise power spectral density ratio (CNR) as compared with the threshold achievable by either of the algorithms alone.
ZASPE: Zonal Atmospheric Stellar Parameters Estimator
NASA Astrophysics Data System (ADS)
Brahm, Rafael; Jordan, Andres; Hartman, Joel; Bakos, Gaspar
2016-07-01
ZASPE (Zonal Atmospheric Stellar Parameters Estimator) computes the atmospheric stellar parameters (Teff, log(g), [Fe/H] and vsin(i)) from echelle spectra via least squares minimization with a pre-computed library of synthetic spectra. The minimization is performed only in the spectral zones most sensitive to changes in the atmospheric parameters. The uncertainties and covariances computed by ZASPE assume that the principal source of error is the systematic mismatch between the observed spectrum and the synthetic one that produces the best fit. ZASPE requires a grid of synthetic spectra and can use any pre-computed library with minor modifications.
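The zonal least-squares idea can be sketched as a grid search: compare an observed spectrum against a pre-computed synthetic library, but only at the pixels most sensitive to the parameter being estimated. The library, spectra, and sensitivity mask below are all synthetic placeholders, not ZASPE's actual grids.

```python
import numpy as np

wave = np.linspace(5000.0, 5100.0, 500)        # toy wavelength grid
teff_grid = np.arange(5000, 6600, 100)

def synthetic(teff):
    # placeholder "spectrum": one line whose depth scales with temperature
    return 1.0 - (teff / 6000.0) * np.exp(-0.5 * ((wave - 5050.0) / 2.0) ** 2)

library = {t: synthetic(t) for t in teff_grid}
observed = synthetic(5800)

# sensitivity mask: keep only the pixels where the model actually changes
sens = np.abs(synthetic(6500) - synthetic(5000))
mask = sens > 0.1 * sens.max()

chi2 = {t: np.sum((observed[mask] - spec[mask]) ** 2)
        for t, spec in library.items()}
best_teff = min(chi2, key=chi2.get)
```

Restricting the comparison to the masked zones is what keeps insensitive continuum pixels from diluting the fit.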
New approaches to estimation of magnetotelluric parameters
Egbert, G.D.
1991-01-01
Fully efficient robust data processing procedures were developed and tested for single station and remote reference magnetotelluric (MT) data. Substantial progress was made on development, testing and comparison of optimal procedures for single station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combined the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1-0.01 Hz). The methods, and the results of comparisons of various single station estimators, are described in detail. Progress was made on developing methods for estimating static distortion parameters, and for testing hypotheses about the underlying dimensionality of the geological section.
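The regression M-estimate at the heart of such robust processing is typically computed by iteratively reweighted least squares with Huber weights. The sketch below applies it to a scalar linear relation contaminated by heavy-tailed noise bursts; real MT processing works on complex-valued spectra, so this only shows the estimator's skeleton.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.5 * x + 0.1 * rng.normal(size=200)   # "true" transfer function is 2.5
y[:10] += 15 * rng.normal(size=10)         # heavy-tailed "noise bursts"

def huber_irls(x, y, k=1.5, iters=20):
    b = np.sum(x * y) / np.sum(x * x)      # ordinary LS starting value
    for _ in range(iters):
        r = y - b * x
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / scale
        w = np.where(u <= k, 1.0, k / u)   # Huber weights: downweight outliers
        b = np.sum(w * x * y) / np.sum(w * x * x)
    return b

b_robust = huber_irls(x, y)
```

Note the bias mechanism the abstract warns about: robust weighting protects against output noise, but noise in the input x itself still biases the estimate, which is why coherence presorting is added.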
Helbig, Marko; Schwab, Karin; Leistritz, Lutz; Eiselt, Michael; Witte, Herbert
2006-10-15
The quantification of transient quadratic phase couplings (QPC) by means of time-variant bispectral analysis is a useful approach to explain several interrelations between signal components. A generalized recursive estimation approach for 3rd-order time-frequency distributions (3rd-order TFD) is introduced. Based on 3rd-order TFD, time-variant estimations of biamplitude (BA), bicoherence (BC) and phase bicoherence (PBC) can be derived. Different smoothing windows and local moment functions for an optimization of the estimation properties are investigated and compared. The methods are applied to signal simulations and EEG signals, and it can be shown that the new time-variant bispectral analysis results in a reliable quantification of QPC in the tracé alternant EEG of healthy neonates. PMID:16737739
Mariño, Inés P; Míguez, Joaquín
2005-11-01
We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods. PMID:16383795
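The synchronization idea can be sketched as follows: a secondary Lorenz system is driven by the observed x(t) of the primary through a coupling term, and the candidate parameter minimizing the synchronization error is taken as the estimate. The cited method adjusts the parameter recursively; a grid scan over sigma keeps this illustration short, and all settings (coupling gain, step size) are illustrative.

```python
import numpy as np

def step(state, sigma, x_drive=None, K=0.0, rho=28.0, beta=8.0 / 3.0, dt=0.002):
    # one Euler step of the Lorenz system, optionally driven by an observed x
    x, y, z = state
    dx = sigma * (y - x) + (K * (x_drive - x) if x_drive is not None else 0.0)
    return np.array([x + dt * dx,
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

# primary system (true sigma = 10), partially observed through x(t)
s, xs = np.array([1.0, 1.0, 1.0]), []
for _ in range(5000):
    xs.append(s[0])
    s = step(s, 10.0)

def sync_error(sigma_c):
    # run the coupled secondary system and accumulate the sync error
    r, err = np.array([0.0, 0.0, 0.0]), 0.0
    for k, x_obs in enumerate(xs):
        if k > 2500:                  # discard the synchronization transient
            err += (r[0] - x_obs) ** 2
        r = step(r, sigma_c, x_drive=x_obs, K=50.0)
    return err

grid = np.arange(6.0, 14.5, 0.5)
sigma_hat = grid[np.argmin([sync_error(sg) for sg in grid])]
```

Only when the candidate parameter matches the primary's does the coupled system synchronize identically, which is why the error is minimized at the true value.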
Estimating physiological skin parameters from hyperspectral signatures.
Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe
2013-05-01
We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers. PMID:23722495
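The forward-model-plus-learned-inverse approach can be shown schematically: a toy "forward map" from two tissue parameters to a reflectance spectrum is sampled on a grid, and a nearest-neighbour lookup (standing in for the paper's machine-learning regressor) inverts new spectra. The forward model is a smooth placeholder, not Kubelka-Munk theory.

```python
import numpy as np

bands = np.linspace(450.0, 900.0, 30)      # wavelengths, nm

def forward(melanin, blood):
    # placeholder reflectance: melanin-like spectral slope times a blood-like dip
    base = np.exp(-melanin * (bands / 500.0) ** -3)
    dip = 1.0 - blood * np.exp(-((bands - 575.0) / 40.0) ** 2)
    return base * dip

# sample the forward model on a parameter grid to build the "training" library
grid = [(m, b) for m in np.linspace(0.1, 2.0, 40)
               for b in np.linspace(0.0, 0.5, 40)]
library = np.array([forward(m, b) for m, b in grid])

def estimate(spectrum):
    # nearest-neighbour inverse map: closest library spectrum wins
    i = np.argmin(np.sum((library - spectrum) ** 2, axis=1))
    return grid[i]

m_hat, b_hat = estimate(forward(0.7, 0.25))
```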
Aquifer parameter estimation from surface resistivity data.
Niwas, Sri; de Lima, Olivar A L
2003-01-01
This paper is devoted to the additional use, other than ground water exploration, of surface geoelectrical sounding data for aquifer hydraulic parameter estimation. In a mesoscopic framework, approximated analytical equations are developed separately for saline and for fresh water saturations. A few existing useful aquifer models, both for clean and shaley sandstones, are discussed in terms of their electrical and hydraulic effects, along with the linkage between the two. These equations are derived for insight and physical understanding of the phenomenon. In a macroscopic scale, a general aquifer model is proposed and analytical relations are derived for meaningful estimation, with a higher level of confidence, of hydraulic parameter from electrical parameters. The physical reasons for two different equations at the macroscopic level are explicitly explained to avoid confusion. Numerical examples from existing literature are reproduced to buttress our viewpoint. PMID:12533080
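The electrical quantities such relations build on are the Dar-Zarrouk parameters of a layer, which geoelectrical sounding can constrain; Niwas-type relations then connect transmissivity linearly to transverse resistance (or to longitudinal conductance, depending on the hydrogeological setting) through a locally calibrated constant. All numbers below are invented for illustration.

```python
def dar_zarrouk(thickness_m, resistivity_ohm_m):
    # Dar-Zarrouk parameters of a single layer
    T_r = thickness_m * resistivity_ohm_m      # transverse resistance (ohm.m^2)
    S_l = thickness_m / resistivity_ohm_m      # longitudinal conductance (S)
    return T_r, S_l

# a 25 m thick fresh-water sand of 120 ohm.m
T_r, S_l = dar_zarrouk(25.0, 120.0)

# hypothetical borehole calibration: transmissivity proportional to T_r
alpha = 8.0e-4                                 # illustrative site constant
transmissivity = alpha * T_r
```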
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
Blind estimation of compartmental model parameters.
Di Bella, E V; Clackdoyle, R; Gullberg, G T
1999-03-01
Computation of physiologically relevant kinetic parameters from dynamic PET or SPECT imaging requires knowledge of the blood input function. This work is concerned with developing methods to accurately estimate these kinetic parameters blindly; that is, without use of a directly measured blood input function. Instead, only measurements of the output functions--the tissue time-activity curves--are used. The blind estimation method employed here minimizes a set of cross-relation equations, from which the blood term has been factored out, to determine compartmental model parameters. The method was tested with simulated data appropriate for dynamic SPECT cardiac perfusion imaging with 99mTc-teboroxime and for dynamic PET cerebral blood flow imaging with 15O water. The simulations did not model the tomographic process. Noise levels typical of the respective modalities were employed. From three to eight different regions were simulated, each with different time-activity curves. The time-activity curve (24 or 70 time points) for each region was simulated with a compartment model. The simulation used a biexponential blood input function and washin rates between 0.2 and 1.3 min(-1) and washout rates between 0.2 and 1.0 min(-1). The system of equations was solved numerically and included constraints to bound the range of possible solutions. From the cardiac simulations, washin was determined to within a scale factor of the true washin parameters with less than 6% bias and 12% variability. 99mTc-teboroxime washout results had less than 5% bias, but variability ranged from 14% to 43%. The cerebral blood flow washin parameters were determined with less than 5% bias and 4% variability. The washout parameters were determined with less than 4% bias, but had 15-30% variability. Since washin is often the parameter of most use in clinical studies, the blind estimation approach may eliminate the current necessity of measuring the input function when performing certain dynamic studies
Cosmological parameter estimation: impact of CMB aberration
Catena, Riccardo; Notari, Alessio
2013-04-01
The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm's via a mixing matrix at any l. The common lore when performing a CMB based cosmological parameter estimation is to consider that Doppler affects only the l = 1 multipole, and neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed with a MCMC in a Bayesian framework the maps with and without aberration and Doppler effects in order to assess the ability of reconstructing the parameters of the fiducial model. We find that, depending on the specific realization of the simulated data, the parameters can be biased up to one standard deviation for WMAP and almost two standard deviations for Planck. Therefore we conclude that in general it is not a solid assumption to neglect aberration in a CMB based cosmological parameter estimation.
Estimation of Seismicity Parameters Using a Computer
NASA Astrophysics Data System (ADS)
Veneziano, Daniele
The book is a translation from an original in Russian, published in 1972. After 15 years, the book appears dated, its emphasis being the use of computers as an innovative technology for seismicity parameter estimation. The book is divided into two parts. Part I (29 pages) reviews the literature for quantitative measures of seismicity and for earthquake recurrence models, and describes previous uses of the computer to determine seismicity parameters. The literature reviewed is mainly that of the 1960s, with prevalence of Russian and European titles. This part of the book may retain some interest for the historical perspective it gives on the subject.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests
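The null-space Monte Carlo idea has a clean linear illustration: the SVD of the model Jacobian splits parameter space into a solution space and a null space; the calibrated solution-space component is held fixed while null-space components are sampled, so every sampled parameter set reproduces the (linear) model fit exactly. Dimensions here are illustrative, not those of the Culebra model.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(5, 12))          # 5 observations, 12 parameters: underdetermined
p_cal = rng.normal(size=12)           # a "calibrated" parameter set
obs = J @ p_cal

U, sv, Vt = np.linalg.svd(J)
n_sol = int(np.sum(sv > 1e-10 * sv[0]))    # solution-space dimension (rank)
V_null = Vt[n_sol:].T                      # null-space basis, shape (12, 7)

# calibration-constrained ensemble: fixed solution-space part, random null part
ensemble = [p_cal + V_null @ rng.normal(size=V_null.shape[1])
            for _ in range(100)]
mismatch = max(np.max(np.abs(J @ p - obs)) for p in ensemble)
```

In a nonlinear groundwater model the null-space perturbations do change the fit slightly, which is why NSMC re-checks (and if needed re-adjusts) each sampled field; the bias discussed above arises because all samples share one calibrated solution-space component.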
Bayesian parameter estimation for effective field theories
NASA Astrophysics Data System (ADS)
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
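A naturalness prior has a particularly transparent effect in a linear toy version of this setup: for an expansion y = a0 + a1*x + a2*x^2 whose coefficients are expected to be O(1), a Gaussian prior a_i ~ N(0, abar^2) combines with Gaussian data errors into a closed-form posterior. The "LECs" and data below are synthetic, not chiral perturbation theory.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.05, 0.5, 20)
a_true = np.array([1.2, -0.8, 0.5])        # natural-sized coefficients
X = np.vander(x, 3, increasing=True)       # columns: 1, x, x^2
sigma = 0.01                               # data error
y = X @ a_true + sigma * rng.normal(size=x.size)

abar = 1.0                                 # naturalness scale of the prior
# conjugate linear-Gaussian posterior: precision = data term + prior term
post_cov = np.linalg.inv(X.T @ X / sigma ** 2 + np.eye(3) / abar ** 2)
post_mean = post_cov @ (X.T @ y) / sigma ** 2
```

The prior term np.eye(3) / abar**2 regularizes exactly the directions the data barely constrain; this is the sense in which a natural-sized prior reduces overfitting.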
Renal parameter estimates in unrestrained dogs
NASA Technical Reports Server (NTRS)
Rader, R. D.; Stevens, C. M.
1974-01-01
A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and by the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied on unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, to avoidance reaction, and to unrestrained caging.
CosmoSIS: Modular cosmological parameter estimation
Zuntz, J.; Paterno, M.; Jennings, E.; Rudd, D.; Manzotti, A.; Dodelson, S.; Bridle, S.; Sehrish, S.; Kowalkowski, J.
2015-06-09
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore, such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Generalized REGression Package for Nonlinear Parameter Estimation
Energy Science and Technology Software Center (ESTSC)
1995-05-15
GREG computes modal (maximum-posterior-density) and interval estimates of the parameters in a user-provided Fortran subroutine MODEL, using a user-provided vector OBS of single-response observations or matrix OBS of multiresponse observations. GREG can also select the optimal next experiment from a menu of simulated candidates, so as to minimize the volume of the parametric inference region based on the resulting augmented data set.
Parameter estimation and optimal experimental design.
Banga, Julio R; Balsa-Canto, Eva
2008-01-01
Mathematical models are central in systems biology and provide new ways to understand the function of biological systems, helping in the generation of novel and testable hypotheses and supporting a rational framework for possible ways of intervention, e.g. in genetic engineering, drug development or the treatment of diseases. Since the amount and quality of experimental 'omics' data continue to increase rapidly, there is a great need for model-building methods that can handle this complexity. In the present chapter we review two key steps of the model building process, namely parameter estimation (model calibration) and optimal experimental design. Parameter estimation aims to find the unknown parameters of the model which give the best fit to a set of experimental data. Optimal experimental design aims to devise the dynamic experiments which provide the maximum information content for subsequent non-linear model identification, estimation and/or discrimination. We place emphasis on the need for robust global optimization methods for proper solution of these problems, and we present a motivating example considering a cell signalling model. PMID:18793133
Linear parameter estimation of rational biokinetic functions.
Doeswijk, T G; Keesman, K J
2009-01-01
For rational biokinetic functions such as the Michaelis-Menten equation, in general, a nonlinear least-squares method is a good estimator. However, a major drawback of a nonlinear least-squares estimator is that it can end up in a local minimum. Rearranging and linearizing rational biokinetic functions for parameter estimation is common practice (e.g. Lineweaver-Burk linearization). By rearranging, however, the error is distorted. In addition, the rearranged model frequently leads to a so-called 'errors-in-variables' estimation problem. Applying the ordinary least squares (OLS) method to the linearly reparameterized function ensures a global minimum, but its estimates become biased if the regression variables contain errors, and thus bias compensation is needed. Therefore, in this paper, a bias-compensated total least squares (CTLS) method, which like OLS is a direct method, is proposed to solve the estimation problem. The applicability of a general linear reparameterization procedure and the advantages of CTLS over ordinary least squares and nonlinear least squares approaches are shown by two simulation examples. The examples contain Michaelis-Menten kinetics and enzyme kinetics with substrate inhibition. Furthermore, CTLS is demonstrated with real data from an activated sludge experiment. It is concluded that for rational biokinetic models CTLS is a powerful alternative to the existing least-squares methods. PMID:19004464
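The linear reparameterization discussed above can be sketched for Michaelis-Menten kinetics: taking reciprocals gives 1/v = 1/Vmax + (Km/Vmax)(1/s), which is linear in 1/s and fittable by ordinary least squares (the classic Lineweaver-Burk approach). The data below are synthetic and noise-free, so the error distortion and bias the paper warns about, and the CTLS correction it proposes, do not appear in this sketch.

```python
# Lineweaver-Burk linearization of Michaelis-Menten kinetics:
#   v = Vmax*s/(Km + s)  =>  1/v = 1/Vmax + (Km/Vmax)*(1/s),
# so an OLS fit of 1/v against 1/s yields the kinetic parameters.
# Synthetic, noise-free data; with noisy v the OLS estimates are biased.

def ols(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope       # (intercept, slope)

Vmax, Km = 2.0, 0.5                     # "true" parameters (synthetic)
s = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
v = [Vmax * si / (Km + si) for si in s]

intercept, slope = ols([1 / si for si in s], [1 / vi for vi in v])
Vmax_hat = 1 / intercept                # intercept = 1/Vmax
Km_hat = slope * Vmax_hat               # slope = Km/Vmax
```

On clean data the reciprocal fit recovers Vmax and Km exactly; the paper's point is that once measurement error enters v, both regression variables become noisy and a total-least-squares treatment with bias compensation is preferable.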
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during experimental data acquisition.
Parameter estimate of signal transduction pathways
Arisi, Ivan; Cattaneo, Antonino; Rosato, Vittorio
2006-01-01
Background The "inverse" problem is related to the determination of unknown causes on the basis of observations of their effects. This is the opposite of the corresponding "direct" problem, which relates to the prediction of the effects generated by a complete description of some agencies. The solution of an inverse problem entails the construction of a mathematical model and starts from a number of experimental data. In this respect, inverse problems are often ill-conditioned, as the experimental data available are often insufficient to unambiguously solve the mathematical model. Several approaches to solving inverse problems are possible, both computational and experimental, some of which are mentioned in this article. In this work, we describe in detail an attempt to solve an inverse problem which arose in the study of an intracellular signaling pathway. Results Using a genetic algorithm to find a sub-optimal solution to the optimization problem, we have estimated a set of unknown parameters describing a kinetic model of a signaling pathway in the neuronal cell. The model is composed of mass-action ordinary differential equations, whose kinetic parameters describe protein-protein interactions, protein synthesis and degradation. The algorithm has been implemented on a parallel platform. Several potential solutions of the problem have been computed, each solution being a set of model parameters. A subset of parameters has been selected on the basis of their small coefficient of variation across the ensemble of solutions. Conclusion Despite the lack of sufficiently reliable and homogeneous experimental data, the genetic algorithm approach has allowed us to estimate approximate values for a number of model parameters in a kinetic model of a signaling pathway; these parameters have been assessed to be relevant for the reproduction of the available experimental data. PMID:17118160
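A genetic algorithm of the kind described can be sketched on a deliberately tiny problem: estimating a single rate constant of first-order decay from synthetic data. The full pathway ODE system, the parallel implementation, and the ensemble analysis of the paper are omitted; the population size, mutation width, and all other settings below are invented for illustration.

```python
# Minimal genetic-algorithm sketch for kinetic parameter estimation.
# The "model" is first-order decay x(t) = exp(-k*t) and we search for k
# by minimizing the sum-of-squares misfit to synthetic observations.
import math
import random

random.seed(0)
k_true = 0.7
ts = [0.0, 0.5, 1.0, 2.0, 4.0]
data = [math.exp(-k_true * t) for t in ts]       # noise-free observations

def cost(k):
    return sum((math.exp(-k * t) - d) ** 2 for t, d in zip(ts, data))

pop = [random.uniform(0.0, 2.0) for _ in range(40)]   # initial population
for gen in range(60):
    pop.sort(key=cost)
    parents = pop[:10]                   # selection: keep the fittest (elitism)
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)            # crossover: blend two parents
        child += random.gauss(0.0, 0.05) # mutation: small Gaussian kick
        children.append(min(max(child, 0.0), 2.0))
    pop = parents + children
k_hat = min(pop, key=cost)               # best individual found
```

Because the fittest individuals survive each generation, the best candidate converges toward the true rate constant; in the paper's setting the same loop runs over a high-dimensional parameter vector and the spread across repeated runs is used to judge which parameters are well determined.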
Karakus, Mustafa C; Salkever, David S; Slade, Eric P; Ialongo, Nicholas; Stuart, Elizabeth
2012-01-01
The potentially serious adverse impacts of behavior problems during adolescence on employment outcomes in adulthood provide a key economic rationale for early intervention programs. However, the extent to which lower educational attainment accounts for the total impact of adolescent behavior problems on later employment remains unclear. As an initial step in exploring this issue, we specify and estimate a recursive bivariate probit model that (1) relates middle school behavior problems to high school graduation and (2) models later employment in young adulthood as a function of these behavior problems and of high school graduation. Our model thus allows for both a direct effect of behavior problems on later employment as well as an indirect effect that operates via graduation from high school. Our empirical results, based on analysis of data from the NELS, suggest that the direct effects of externalizing behavior problems on later employment are not significant but that these problems have important indirect effects operating through high school graduation. PMID:23576834
Parameter estimation for lithium ion batteries
NASA Astrophysics Data System (ADS)
Santhanagopalan, Shriram
With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of
Thermal Property Parameter Estimation of TPS Materials
NASA Technical Reports Server (NTRS)
Maddren, Jesse
1998-01-01
Accurate knowledge of the thermophysical properties of TPS (thermal protection system) materials is necessary for pre-flight design and post-flight data analysis. Thermal properties, such as thermal conductivity and the volumetric specific heat, can be estimated from transient temperature measurements using non-linear parameter estimation methods. Property values are derived by minimizing a functional of the differences between measured and calculated temperatures. High temperature thermal response testing of TPS materials is usually done in arc-jet or radiant heating facilities which provide a quasi one-dimensional heating environment. Last year, under the NASA-ASEE-Stanford Fellowship Program, my work focused on developing a radiant heating apparatus. This year, I have worked on increasing the fidelity of the experimental measurements, optimizing the experimental procedures and interpreting the data.
Parameter estimation, nonlinearity, and Occam's razor.
Alonso, Leandro M
2015-03-01
Nonlinear systems are capable of displaying complex behavior even if this is the result of a small number of interacting time scales. A widely studied case is when complex dynamics emerges out of a nonlinear system being forced by a simple harmonic function. In order to identify if a recorded time series is the result of a nonlinear system responding to a simpler forcing, we develop a discrete nonlinear transformation for time series based on synchronization techniques. This allows a parameter estimation procedure which simultaneously searches for a good fit of the recorded data, and small complexity of a fluctuating driving parameter. We illustrate this procedure using data from respiratory patterns during birdsong production. PMID:25833426
Parameter Estimation for Viscoplastic Material Modeling
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.
1997-01-01
A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.
Parameter estimation techniques for LTP system identification
NASA Astrophysics Data System (ADS)
Nofrarias Serra, Miquel
LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the millihertz band. The mission is challenging not only in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, unlike on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission is the set of parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods in a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model, and we report on the latest results of these mock data exercises.
Recursive least-squares learning algorithms for neural networks
Lewis, P. S.; Hwang, Jenq-Neng (Dept. of Electrical Engineering)
1990-01-01
This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is of order O(N²), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
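The core RLS recursion the paper builds on can be sketched for a plain linear-in-parameters model; the paper applies the same recursion to a linearization of the network error, and the matrix P below (a running inverse-Hessian estimate) is the object whose N × N size drives the O(N²) cost. The data and settings here are illustrative, not from the paper.

```python
# Classic recursive least squares (RLS) for y = w1*x1 + w2*x2.
# P tracks an inverse-Hessian estimate; lam is the forgetting factor
# (lam = 1 means no forgetting) and delta sets the initial P = delta*I.

def rls(samples, lam=1.0, delta=1000.0):
    w = [0.0, 0.0]
    P = [[delta, 0.0], [0.0, delta]]
    for (x1, x2), y in samples:
        # Gain vector k = P x / (lam + x^T P x).
        px1 = P[0][0] * x1 + P[0][1] * x2
        px2 = P[1][0] * x1 + P[1][1] * x2
        denom = lam + x1 * px1 + x2 * px2
        k = [px1 / denom, px2 / denom]
        # Update weights with the a priori prediction error.
        err = y - (w[0] * x1 + w[1] * x2)
        w = [w[0] + k[0] * err, w[1] + k[1] * err]
        # Covariance update: P <- (P - k (P x)^T) / lam  (P is symmetric).
        P = [[(P[0][0] - k[0] * px1) / lam, (P[0][1] - k[0] * px2) / lam],
             [(P[1][0] - k[1] * px1) / lam, (P[1][1] - k[1] * px2) / lam]]
    return w

samples = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0),
           ((1.0, 1.0), 1.0), ((2.0, 1.0), 3.0)]
w = rls(samples)                      # data generated by w = (2, -1)
```

The block-diagonal approximation mentioned in the abstract amounts to zeroing the off-diagonal blocks of P, so each group of parameters runs its own cheaper recursion.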
Parameter Estimation of Spacecraft Fuel Slosh Model
NASA Technical Reports Server (NTRS)
Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam, Charles
2004-01-01
Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining the model and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
Fast cosmological parameter estimation using neural networks
NASA Astrophysics Data System (ADS)
Auld, T.; Bridges, M.; Hobson, M. P.; Gull, S. F.
2007-03-01
We present a method for accelerating the calculation of cosmic microwave background (CMB) power spectra, matter power spectra and likelihood functions for use in cosmological parameter estimation. The algorithm, called COSMONET, is based on training a multilayer perceptron neural network and shares all the advantages of the recently released PICO algorithm of Fendt & Wandelt, but has several additional benefits in terms of simplicity, computational speed, memory requirements and ease of training. We demonstrate the capabilities of COSMONET by computing CMB power spectra over a box in the parameter space of flat Λ cold dark matter (ΛCDM) models containing the 3σ WMAP 1-year confidence region. We also use COSMONET to compute the WMAP 3-year (WMAP3) likelihood for flat ΛCDM models and show that the marginalized posteriors on the derived parameters are very similar to those obtained using CAMB and the WMAP3 code. We find that the average error in the power spectra is typically 2-3 per cent of cosmic variance, and that COSMONET is ~7 × 10⁴ times faster than CAMB (for flat models) and ~6 × 10⁶ times faster than the official WMAP3 likelihood code. COSMONET and an interface to COSMOMC are publicly available at http://www.mrao.cam.ac.uk/software/cosmonet.
NASA Technical Reports Server (NTRS)
Sunahara, Y.; Kojima, F.
1987-01-01
The purpose of this paper is to establish a method for identifying unknown parameters involved in the boundary state of a class of diffusion systems under noisy observations. A mathematical model of the system dynamics is given by a two-dimensional diffusion equation. Noisy observations are made by sensors allocated on the system boundary. Starting with the mathematical model mentioned above, an online parameter estimation algorithm is proposed within the framework of maximum likelihood estimation. Existence of the optimal solution and related necessary conditions are discussed. By solving a local variation of the cost functional with respect to the perturbation of parameters, the estimation mechanism is proposed in the form of recursive computations. Finally, the feasibility of the estimator proposed here is demonstrated through the results of digital simulation experiments.
Statistical cautions when estimating DEBtox parameters.
Billoir, Elise; Delignette-Muller, Marie Laure; Péry, Alexandre R R; Geffard, Olivier; Charles, Sandrine
2008-09-01
DEBtox (Dynamic Energy Budget in toxicology) models have been designed to analyse various results from classic tests in ecotoxicology. They consist of a set of mechanistic models describing how organisms manage their energy when they are exposed to a contaminant. Until now, such a biology-based modeling approach has not been used within the regulatory context. However, these methods have been promoted and discussed in recent guidance documents on the statistical analysis of ecotoxicity data. Indeed, they help us to understand the underlying mechanisms. In this paper, we focused on the 21-day Daphnia magna reproduction test. We first aimed to clarify and detail the model building process leading to DEBtox models. Equations were rederived step by step, and for some of them we obtained results different from the published ones. Then, we statistically evaluated the quality of the estimation process when using a least-squares approach. Using both experimental and simulated data, our analyses highlighted several statistical issues related to the fitting of DEBtox models to OECD-type reproduction data. In this case, particular attention had to be paid to parameter estimates and the interpretation of their confidence intervals. PMID:18571678
Recursion, Language, and Starlings
ERIC Educational Resources Information Center
Corballis, Michael C.
2007-01-01
It has been claimed that recursion is one of the properties that distinguishes human language from any other form of animal communication. Contrary to this claim, a recent study purports to demonstrate center-embedded recursion in starlings. I show that the performance of the birds in this study can be explained by a counting strategy, without any…
NASA Astrophysics Data System (ADS)
Catena, Riccardo; Notari, Alessio
2013-07-01
The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. the Doppler effect. Both effects distort the temperature multipoles a_lm via a mixing matrix at any l. The common lore when performing a CMB-based cosmological parameter estimation is to consider that Doppler affects only the l = 1 multipole, and to neglect any other corrections. In ref. [1] we checked the validity of this assumption in parameter estimation for a Planck-like angular resolution, both for a full-sky ideal experiment and also when sky cuts are included to model CMB foreground contaminations with a sky fraction similar to the Planck satellite. The result of this analysis was that aberration and Doppler have a sizable impact on a CMB-based parameter estimation. In this erratum we correct an error made in ref. [1] when comparing pseudo angular power spectra computed in the CMB rest frame with the ones measured by a moving observer. Properly comparing the two spectra, we now find that although the corrections to the Cl due to aberration and Doppler are larger than the cosmic variance at l > 1000 and potentially important, the resulting bias on the parameters is negligible for Planck.
Parameter Estimation and Data Management System of Sea Clutter
NASA Astrophysics Data System (ADS)
Cong, Bo; Duan, Qingguang; Qu, Yuanxin
2016-02-01
In this paper, a parameter estimation and data management system for sea clutter is described, which can acquire sea clutter data, perform parameter estimation, and support real-time communications.
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Parameter estimation with Sandage-Loeb test
Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin
2014-12-01
The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of the Lyman-α forest of distant quasars. We discuss the impact of the future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w0waCDM models. To avoid potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ω_m and H_0 in all three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of the SL test could improve the constraints on Ω_m and H_0 by more than 60% for all three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of the SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w_0 and w_a by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of the SL test would help improve the measurement precision of Ω_m, H_0, and w_a by more than 70%, 20%, and 60%, respectively, for the w0waCDM model.
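For context, the redshift drift underlying the SL test is conventionally written, for a Friedmann cosmology, as the following standard relation (supplied here for reference, not quoted from the abstract):

```latex
\dot{z} \;\equiv\; \frac{\mathrm{d}z}{\mathrm{d}t_{\mathrm{obs}}}
\;=\; (1+z)\,H_0 \;-\; H(z),
\qquad
\Delta v \;=\; \frac{c\,\dot{z}}{1+z}\,\Delta t_{\mathrm{obs}} .
```

Over a decades-long observing baseline Δt_obs the induced spectroscopic velocity shift Δv is only of order centimeters per second, which is why the abstract discusses 30-yr observation campaigns.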
Estimation of high altitude Martian dust parameters
NASA Astrophysics Data System (ADS)
Pabari, Jayesh; Bhalodi, Pinali
2016-07-01
Dust devils are known to occur near the Martian surface, mostly during the middle of the southern hemisphere summer, and they play a vital role in deciding the background dust opacity in the atmosphere. A second source of high altitude Martian dust could be secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, and these can easily escape the moons. It is expected that the escaping dust forms dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and to a kind of feedback mechanism existing between the ring or tori and the sources, the dust rings or tori can be sustained over a period of time. Recently, dust at very high altitudes of about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it is a mystery how dust reaches such high altitudes. Estimation of dust parameters beforehand is necessary to design an instrument for the detection of high altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate primarily responsible for the formation of a dust ring or torus, the lifetime of dust particles around Mars, the dust number density, and the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars and help solve the mystery. Further work is underway.
Distinctive signatures of recursion
Martins, Maurício Dias
2012-01-01
Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed. PMID:22688640
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Updated Item Parameter Estimates Using Sparse CAT Data.
ERIC Educational Resources Information Center
Smith, Robert L.; Rizavi, Saba; Paez, Roxanna; Rotou, Ourania
A study was conducted to investigate whether augmenting the calibration of items using computerized adaptive test (CAT) data matrices produced estimates that were unbiased and improved the stability of existing item parameter estimates. Item parameter estimates from four pools of items constructed for operational use were used in the study to…
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and to improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling periods (0.02, 0.1, 0.5, and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and suitability for parameter identification without using the open-circuit voltage.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
Parameter estimation and error analysis in environmental modeling and computation
NASA Technical Reports Server (NTRS)
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
Muscle parameters estimation based on biplanar radiography.
Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W
2016-11-01
The in vivo evaluation of muscle and joint forces is still a challenge. Musculoskeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method for obtaining a subject-specific musculoskeletal model in 30 min, based solely on biplanar X-ray radiography. First, the subject-specific 3D geometry of the bones and skin envelope was reconstructed from biplanar X-rays. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shapes of 21 lower-limb muscles were estimated using a nonlinear transformation between the control points in order to fit the muscle shapes of the reference model to the X-ray model. Twelve musculoskeletal models were reconstructed and compared to their references. Muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimate of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with an SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a fast modeling approach for gait analysis based on biplanar radiography. PMID:27082150
New approaches to estimation of magnetotelluric parameters
Egbert, G. D.; Booker, J. R.
1990-01-01
This document proposed the development and application of new statistical techniques for improving the collection and analysis of wide-band magnetotelluric (MT) data. The principal goal of our work is to develop and implement fully automatic single-station and remote-reference impedance estimation schemes that are robust, unbiased, and statistically efficient. The initial proposal suggested several extensions to the regression M-estimates to better allow for nonstationary and non-Gaussian noise in both electric and magnetic field channels (measured at one or more simultaneous stations). A second goal of the proposal was to develop formal, reliable procedures for estimating undistorted 2-D strike directions, and to develop statistics for assessing the validity of the 2-D assumption that are unaffected by near-surface static distortion effects. To test and validate the methods, it was proposed to work with data selected from a series of over 200 wide-band MT sites. For the current budget period, it is suggested to set up a database and to complete the development and initial testing of the single-station and remote-reference methods outlined in the proposal. 8 refs., 13 figs.
Applications of parameter estimation in the study of spinning airplanes
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.
1982-01-01
Spinning airplanes offer challenges to estimating dynamic parameters because of the nonlinear nature of the dynamics. In this paper, parameter estimation techniques are applied to spin flight test data for estimating the error in measuring post-stall angles of attack, deriving Euler angles from angular velocity data, and estimating nonlinear aerodynamic characteristics. The value of the scale factor for post-stall angles of attack agrees closely with that obtained from special wind-tunnel tests. The independently derived Euler angles are seen to be valid in spite of steep pitch angles. Estimates of flight derived nonlinear aerodynamic parameters are evaluated in terms of the expected fit error.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimations schemes will be contrasted using the NASA Mini-Mast as the focus structure.
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
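The recursive least squares estimator mentioned above is a standard tool; as an illustration only (not the authors' implementation), a scalar RLS sketch for a hypothetical linear force-slip relationship might look like:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares step for the scalar model y = theta * x.

    theta : current parameter estimate (e.g. a friction-related stiffness)
    P     : current estimate covariance
    x     : regressor (e.g. slip ratio)
    y     : measurement (e.g. normalized longitudinal force)
    lam   : forgetting factor (1.0 = ordinary RLS)
    """
    k = P * x / (lam + x * P * x)        # gain
    theta = theta + k * (y - x * theta)  # correct estimate by the innovation
    P = (P - k * x * P) / lam            # covariance update
    return theta, P

# Recover a made-up slope of 0.8 from noisy force/slip samples
rng = np.random.default_rng(0)
theta, P = 0.0, 1e3
for _ in range(200):
    x = rng.uniform(0.01, 0.2)             # slip ratio
    y = 0.8 * x + rng.normal(0.0, 1e-3)    # normalized longitudinal force
    theta, P = rls_update(theta, P, x, y)
```

With a forgetting factor lam < 1, the same update discounts old data and can track a slowly varying friction coefficient, which is the usual motivation for the recursive form in vehicle applications.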
Information Gains in Cosmological Parameter Estimation
NASA Astrophysics Data System (ADS)
Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Paranjape, Aseem; Akeret, Joël
2014-05-01
Combining datasets from different experiments and probes to constrain cosmological models is an important challenge in observational cosmology. We summarize a framework for measuring the constraining power and the consistency of separately or jointly analyzed data within a given model that we proposed in earlier work (Seehars et al. 2014). Applying the Kullback-Leibler divergence to posterior distributions, we can quantify the difference between constraints and distinguish contributions from gains in precision and shifts in parameter space. We show results from applying this technique to a combination of datasets and probes such as the cosmic microwave background or baryon acoustic oscillations.
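For two Gaussian constraints on a single parameter, the Kullback-Leibler divergence used above has a closed form that separates cleanly into a precision (width) term and a parameter-shift term. A minimal sketch with illustrative numbers (not the paper's datasets):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL divergence D(p2 || p1) between 1-D Gaussians, where
    p1 = N(m1, s1^2) is the earlier constraint and p2 = N(m2, s2^2)
    the updated one."""
    return math.log(s1 / s2) + (s2**2 + (m2 - m1)**2) / (2.0 * s1**2) - 0.5

# Pure precision gain: same mean, half the uncertainty
gain = kl_gauss(0.0, 1.0, 0.0, 0.5)    # log 2 + 0.125 - 0.5 ~ 0.318 nats

# Pure parameter shift: same width, mean moved by one sigma
shift = kl_gauss(0.0, 1.0, 1.0, 1.0)   # exactly 0.5 nats
```

The (m2 - m1)^2 term isolates the shift contribution, while the width-dependent terms isolate the precision contribution, which is the decomposition the abstract refers to.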
Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1997-01-01
An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.
Schutter, J. de; Bruyninckx, H.; Dutre, S.; Geeter, J. de; Katupitiya, J.; Demey, S.; Lefebvre, T.
1999-12-01
This paper uses (linearized) Kalman filters to estimate first-order geometric parameters (i.e., orientation of contact normals and location of contact points) that occur in force-controlled compliant motions. The time variance of these parameters is also estimated. In addition, transitions between contact situations can be monitored. The contact between the manipulated object and its environment is general, i.e., multiple contacts can occur at the same time, and both the topology and the geometry of each single contact are arbitrary. The two major theoretical contributions are (1) the integration of the general contact model, developed previously by the authors, into a state-space form suitable for recursive processing; and (2) the use of the reciprocity constraint between ideal contact forces and motion freedoms as the measurement equation of the Kalman filter. The theory is illustrated by full 3-D experiments. The approach of this paper allows a breakthrough in the state of the art dominated by the classical, orthogonal contact models of Mason that can only cope with a limited (albeit important) subset of all possible contact situations.
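The measurement update of a linear(ized) Kalman filter of the kind used here is standard; the sketch below is a generic linear update applied to a made-up 2-D parameter, and does not reproduce the authors' contact model or their reciprocity-based measurement equation:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Linear Kalman measurement update for state x with covariance P,
    given measurement z modeled as z = H x + noise, noise covariance R."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

# Estimate a fixed 2-D direction-like parameter from noisy observations
rng = np.random.default_rng(1)
true = np.array([0.6, 0.8])
x, P = np.zeros(2), np.eye(2) * 10.0     # vague initial estimate
H, R = np.eye(2), np.eye(2) * 0.01       # direct measurement, var 0.01
for _ in range(100):
    z = true + rng.normal(0.0, 0.1, size=2)
    x, P = kf_update(x, P, z, H, R)
```

For a time-varying parameter, a process-noise term would be added in a prediction step between updates, which is how the time variance of the geometric parameters can be tracked.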
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Gravity Field Parameter Estimation Using QR Factorization
NASA Astrophysics Data System (ADS)
Klokocnik, J.; Wagner, C. A.; McAdoo, D.; Kostelecky, J.; Bezdek, A.; Novak, P.; Gruber, C.; Marty, J.; Bruinsma, S. L.; Gratton, S.; Balmino, G.; Baboulin, M.
2007-12-01
This study compares the accuracy of the estimated geopotential coefficients when QR factorization is used instead of the classical method applied at our institute, namely the generation of normal equations that are solved by means of Cholesky decomposition. The objective is to evaluate the gain in numerical precision, which is obtained at considerable extra cost in terms of computer resources. Therefore, a significant increase in precision must be realized in order to justify the additional cost. Numerical simulations were done in order to examine the performance of both solution methods. Reference gravity gradients were simulated, using the EIGEN-GL04C gravity field model to degree and order 300, every 3 seconds along a near-circular, polar orbit at 250 km altitude. The simulation spanned a total of 60 days. A polar orbit was selected in this simulation in order to avoid the 'polar gap' problem, which causes inaccurate estimation of the low-order spherical harmonic coefficients. Regularization is required in that case (e.g., the GOCE mission), which is not the subject of the present study. The simulated gravity gradients, to which white noise was added, were then processed with the GINS software package, applying EIGEN-CG03 as the background gravity field model, followed either by the usual normal equation computation or using the QR approach for incremental linear least squares. The accuracy assessment of the gravity field recovery consists in computing the median error degree-variance spectra, accumulated geoid errors, geoid errors due to individual coefficients, and geoid errors calculated on a global grid. The performance, in terms of memory usage, required disk space, and CPU time, of the QR versus the normal equation approach is also evaluated.
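The numerical contrast between the two solution strategies can be seen on a small ill-conditioned least-squares problem; the sketch below uses a synthetic Vandermonde system as a stand-in, not gravity-gradient data:

```python
import numpy as np

# Synthetic ill-conditioned least-squares problem (illustrative stand-in
# for the gravity-field normal equations; not real gradiometry data)
A = np.vander(np.linspace(0.0, 1.0, 50), 8, increasing=True)
x_true = np.arange(1.0, 9.0)
b = A @ x_true

# Classical route: form normal equations N = A^T A, solve by Cholesky.
# Forming N squares the condition number of the problem.
N = A.T @ A
L = np.linalg.cholesky(N)
x_chol = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))

# QR route: factor A directly, so the condition number is not squared.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

err_chol = np.linalg.norm(x_chol - x_true)
err_qr = np.linalg.norm(x_qr - x_true)
```

The extra accuracy of the QR route comes at a higher cost in floating-point operations and memory, which is precisely the precision-versus-cost trade-off the study sets out to quantify.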
Fuzzy Supernova Templates. II. Parameter Estimation
NASA Astrophysics Data System (ADS)
Rodney, Steven A.; Tonry, John L.
2010-05-01
Wide-field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper we introduced the Supernova Ontology with Fuzzy Templates (SOFT) method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the Sloan Digital Sky Survey (SDSS) and Supernova Legacy Survey (SNLS) as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing an rms scatter in the residuals of rms z = 0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored ΛCDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the rms scatter in Hubble diagram residuals is 0.18 mag for the SDSS data and 0.28 mag for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mag for the combined SDSS and SNLS data set. Using Monte Carlo simulations, we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.
ERIC Educational Resources Information Center
Kolen, Michael J.; Whitney, Douglas R.
The application of latent trait theory to classroom tests necessitates the use of small sample sizes for parameter estimation. Computer generated data were used to assess the accuracy of estimation of the slope and location parameters in the two parameter logistic model with fixed abilities and varying small sample sizes. The maximum likelihood…
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
NASA Astrophysics Data System (ADS)
Lowenthal, Francis
2010-11-01
This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as it is used by logicians is presented. The two opposing approaches to the problem of language development are explained: for many authors, such as Chomsky [1], the faculty of language is innate (this is known as the Standard Theory); other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child, and thus speak of Language Acquisition. It is also shown that in both cases a version of the principle of recursion is relevant for human language. The NVCD approach is defined, and the results obtained in the domain of language while using this approach are presented: young subjects using this approach acquire a richer language structure, or re-acquire such a structure in the case of cerebral lesions. Finally, it is shown that the exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored by using recursive structures with the young child. It could also be the case that NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with psychological observations made by adherents of the Language Acquisition approach, if it is confirmed by research combining the use of NVCDs, psychometric methods, and neural networks. This paper thus suggests that a research group oriented toward this problem should be organized.
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.
2010-05-01
Several recent contributions to the hydrologic literature have demonstrated an inability of standard model evaluation criteria to adequately distinguish between different parameter sets and competing model structures, particularly when dealing with highly complex environmental models and significant structural error. The widespread approach to model evaluation that summarizes the mismatch E_n = {e_k; k = 1, ..., n} = Y_n - Ỹ_n between the n model predictions Y_n and corresponding observations Ỹ_n in a single aggregated measure of the length of the residuals, F, not only introduces equifinality but also complicates parameter estimation. Here we introduce the Differential Evolution Particle Filter (DEPF) to better reconcile models with observations. Our method uses sequential likelihood updating to provide a recursive mapping of {e_1, ..., e_n} → F. As its main building block, DEPF uses the DREAM adaptive MCMC scheme presented in Vrugt et al. (2008, 2009). Two illustrative case studies using conceptual hydrologic modeling show that DEPF (1) requires far fewer particles than conventional sequential Monte Carlo approaches to work well in practice, (2) maintains adequate particle diversity during all stages of filter evolution, (3) provides important insights into the information content of discharge data and the non-stationarity of hydrologic model parameters, and (4) is embarrassingly parallel and therefore well suited to computationally demanding hydrologic models. Our DEPF code follows the formal Bayesian paradigm, yet readily accommodates informal likelihood functions or signature indices if those better represent the salient features of the data and simulation model.
Equating Parameter Estimates from the Generalized Graded Unfolding Model.
ERIC Educational Resources Information Center
Roberts, James S.
Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…
Attitudinal Data: Dimensionality and Start Values for Estimating Item Parameters.
ERIC Educational Resources Information Center
Nandakumar, Ratna; Hotchkiss, Larry; Roberts, James S.
The purpose of this study was to assess the dimensionality of attitudinal data arising from unfolding models for discrete data and to compute rough estimates of item and individual parameters for use as starting values in other estimation procedures. One- and two-dimensional simulated test data were analyzed in this study. Results of limited…
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
Parameter Estimates in Differential Equation Models for Chemical Kinetics
ERIC Educational Resources Information Center
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
Gianola, Daniel; Sorensen, Daniel
2004-01-01
Multivariate models are of great importance in theoretical and applied quantitative genetics. We extend quantitative genetic theory to accommodate situations in which there is linear feedback or recursiveness between the phenotypes involved in a multivariate system, assuming an infinitesimal, additive, model of inheritance. It is shown that structural parameters defining a simultaneous or recursive system have a bearing on the interpretation of quantitative genetic parameter estimates (e.g., heritability, offspring-parent regression, genetic correlation) when such features are ignored. Matrix representations are given for treating a plethora of feedback-recursive situations. The likelihood function is derived, assuming multivariate normality, and results from econometric theory for parameter identification are adapted to a quantitative genetic setting. A Bayesian treatment with a Markov chain Monte Carlo implementation is suggested for inference and developed. When the system is fully recursive, all conditional posterior distributions are in closed form, so Gibbs sampling is straightforward. If there is feedback, a Metropolis step may be embedded for sampling the structural parameters, since their conditional distributions are unknown. Extensions of the model to discrete random variables and to nonlinear relationships between phenotypes are discussed. PMID:15280252
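When all full conditional posteriors are available in closed form, as in the fully recursive case the authors describe, Gibbs sampling reduces to cycling through direct draws. A generic toy illustration on a standard bivariate normal (not the genetic model of the paper):

```python
import numpy as np

def gibbs_bvn(rho, n_iter=20000, burn=2000, seed=3):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Both full conditionals are closed-form Gaussians, so each step is a
    direct draw rather than a Metropolis proposal."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    s = np.sqrt(1.0 - rho**2)           # conditional standard deviation
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        x = rng.normal(rho * y, s)      # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, s)      # y | x ~ N(rho*x, 1 - rho^2)
        draws[i] = x, y
    return draws[burn:]                 # discard burn-in

samples = gibbs_bvn(0.6)
corr = np.corrcoef(samples.T)[0, 1]     # sample correlation, near rho
```

When a conditional distribution has no closed form, as with the structural parameters under feedback, a Metropolis step is embedded at that point of the cycle while the remaining closed-form draws stay unchanged, which is exactly the hybrid scheme the abstract suggests.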
Complexity analysis and parameter estimation of dynamic metabolic systems.
Tian, Li-Ping; Shi, Zhong-Ke; Wu, Fang-Xiang
2013-01-01
A metabolic system consists of a number of reactions transforming molecules of one kind into another to provide the energy that living cells need. Based on biochemical reaction principles, dynamic metabolic systems can be modeled by a group of coupled differential equations consisting of parameters, states (concentrations of the molecules involved), and reaction rates. Reaction rates are typically either polynomials or rational functions in the states and constant parameters. As a result, dynamic metabolic systems are a group of differential equations that are nonlinear and coupled in both parameters and states. Therefore, it is challenging to estimate parameters in complex dynamic metabolic systems. In this paper, we propose a method to analyze the complexity of dynamic metabolic systems for parameter estimation. As a result, the estimation of parameters in dynamic metabolic systems is reduced to the estimation of parameters in a group of decoupled rational functions plus polynomials (which we call improper rational functions) or in polynomials. Furthermore, by exploiting the special structure of improper rational functions, we develop an efficient algorithm to estimate their parameters. The proposed method is applied to the estimation of parameters in a dynamic metabolic system. The simulation results show the superior performance of the proposed method. PMID:24233242
Lewis, A.A.
1981-11-01
It is the purpose of the present study to indicate the means by which Kramer's results may be generalized to considerations of stronger computing devices than the finite state automata considered in Kramer's approach, and to domains of alternatives having the cardinality of the continuum. The means we employ in this approach make use of the theory of recursive functions in the context of Church's Thesis. The result, which we consider preliminary to a more general research program, shows that a choice function that is rational in the sense of Richter (not necessarily regular), when defined on a restricted family of subsets of a continuum of alternatives and recursively represented by a partial predicate on equivalence classes of approximations by rational numbers, is recursively unsolvable. By way of Church's Thesis, therefore, such a function cannot be realized by means of a very general class of effectively computable procedures. An additional consequence that can be derived from the recursive unsolvability of rational choice in this setting is the placement of a minimal bound on the amount of computational complexity entailed by effective realizations of rational choice.
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
ERIC Educational Resources Information Center
Banreti, Zoltan
2010-01-01
This study investigates how aphasic impairment impinges on syntactic and/or semantic recursivity of human language. A series of tests has been conducted with the participation of five Hungarian speaking aphasic subjects and 10 control subjects. Photographs representing simple situations were presented to subjects and questions were asked about…
ERIC Educational Resources Information Center
Kemp, Andy
2007-01-01
"Geomlab" is a functional programming language used to describe pictures that are made up of tiles. The beauty of "Geomlab" is that it introduces students to recursion, a very powerful mathematical concept, through a very simple and enticing graphical environment. Alongside the software is a series of eight worksheets which lead into producing…
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission of influenza can be represented mathematically as a system of nonlinear differential equations. In this model, transmission is governed by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the coefficient of correlation, and the numerical results show a positive correlation between the two.
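As a rough illustration of the regularized least-squares idea above, the sketch below fits the contact rate of a forward-Euler SIR model to synthetic incidence data. The ridge penalty, the grid search, and every numerical value (recovery rate, step size, noise level) are illustrative assumptions, not the paper's finite-element setup or CDC data.

```python
import numpy as np

def sir_euler(beta, gamma=0.2, s0=0.99, i0=0.01, dt=0.1, steps=300):
    """Forward-Euler integration of a normalized SIR model; returns I(t)."""
    s, i = s0, i0
    traj = []
    for _ in range(steps):
        s, i = s + dt * (-beta * s * i), i + dt * (beta * s * i - gamma * i)
        traj.append(i)
    return np.array(traj)

def estimate_beta(obs, lam=1e-3, grid=np.linspace(0.1, 1.0, 91)):
    """Regularized least squares: minimize ||I(beta) - obs||^2 + lam*beta^2."""
    costs = [np.sum((sir_euler(b) - obs) ** 2) + lam * b ** 2 for b in grid]
    return grid[int(np.argmin(costs))]

true_beta = 0.6
rng = np.random.default_rng(0)
data = sir_euler(true_beta) + rng.normal(0, 1e-3, 300)   # noisy "observations"
beta_hat = estimate_beta(data)
```

A finer grid or a gradient-based optimizer would replace the brute-force scan in practice; the regularization term mainly matters when the data constrain the parameter weakly.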
Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters
NASA Astrophysics Data System (ADS)
Shi, L.
2015-12-01
This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data, and groundwater level data. The study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that, for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters must be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a poor initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers but exert little or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level data are a valuable source of information for inferring soil hydraulic parameters; however, with the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data helps improve the parameter estimation.
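The joint state-parameter assimilation described above can be sketched with a toy augmented-state EnKF on a scalar linear model: each ensemble member carries both a state and a parameter, and observing the state alone updates the parameter through their sample cross-covariance. The forced dynamics, ensemble size, noise levels, and the hard clipping that keeps members stable are all hypothetical choices, not the unsaturated-flow setup of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
true_a = 0.9
x, obs = 1.0, []
for _ in range(60):
    x = true_a * x + 1.0 + rng.normal(0, 0.02)   # forced linear dynamics
    obs.append(x + rng.normal(0, 0.05))          # noisy point observations

n_ens = 200
# Augmented state: column 0 holds x, column 1 holds the unknown parameter a
ens = np.column_stack([rng.normal(1.0, 0.5, n_ens),
                       rng.normal(0.5, 0.2, n_ens)])
R = 0.05 ** 2
for y in obs:
    ens[:, 1] = np.clip(ens[:, 1], 0.0, 0.98)    # keep member dynamics stable
    ens[:, 0] = ens[:, 1] * ens[:, 0] + 1.0 + rng.normal(0, 0.02, n_ens)
    P = np.cov(ens.T)                            # 2x2 ensemble covariance
    K = P[:, 0] / (P[0, 0] + R)                  # Kalman gain for H = [1, 0]
    innov = y + rng.normal(0, 0.05, n_ens) - ens[:, 0]   # perturbed observations
    ens += np.outer(innov, K)                    # update state AND parameter
a_hat = ens[:, 1].mean()
```

The parameter is never observed directly; it is corrected only because ensemble members whose parameter is wrong also forecast the state wrongly.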
Recursive Implementations of the Consider Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Chris
2012-01-01
One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favored implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
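A minimal scalar version of the Schmidt-Kalman ("consider") measurement update may help fix ideas: the consider parameter c inflates the innovation covariance and shifts the cross-covariances, but its own estimate and variance are never corrected. The bias-style measurement model z = x + c + v below is an illustrative assumption, not the UDU formulation of the paper.

```python
# One consider-filter measurement update for a scalar state x with a
# scalar consider parameter c (a bias): z = x + c + v,  v ~ N(0, R).
def consider_update(xhat, Pxx, Pxc, Pcc, z, R):
    # Innovation covariance includes the consider parameter's uncertainty
    S = Pxx + 2.0 * Pxc + Pcc + R
    K = (Pxx + Pxc) / S           # gain applied to the estimated state only
    xhat = xhat + K * (z - xhat)  # c is *not* estimated; its mean stays 0
    Pxx = (1 - K) * Pxx - K * Pxc
    Pxc = (1 - K) * Pxc - K * Pcc
    return xhat, Pxx, Pxc, Pcc    # Pcc is deliberately never reduced

xhat, Pxx, Pxc, Pcc = consider_update(0.0, 1.0, 0.0, 0.25, z=0.8, R=0.1)
```

Compared with a standard Kalman update that would also shrink Pcc, the consider form accepts a larger state variance in exchange for not pretending to learn a parameter the filter cannot reliably estimate.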
How to fool cosmic microwave background parameter estimation
Kinney, William H.
2001-02-15
With the release of the data from the Boomerang and MAXIMA-1 balloon flights, estimates of cosmological parameters based on the cosmic microwave background (CMB) have reached unprecedented precision. In this paper I show that it is possible for these estimates to be substantially biased by features in the primordial density power spectrum. I construct primordial power spectra which mimic to within cosmic variance errors the effect of changing parameters such as the baryon density and neutrino mass, meaning that even an ideal measurement would be unable to resolve the degeneracy. Complementary measurements are necessary to resolve this ambiguity in parameter estimation efforts based on CMB temperature fluctuations alone.
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
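The simulate-then-estimate loop described above (model output plus Gaussian noise, then a least-squares batch fit) can be sketched generically. The two-parameter concentration model and all numbers below are hypothetical stand-ins for the paper's shear-diffusion transport model.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical two-parameter concentration model c(x) = a*exp(-x) + b*x
x = np.linspace(0.0, 3.0, 40)                 # sensor locations
a_true, b_true = 2.0, 0.5
c_true = a_true * np.exp(-x) + b_true * x
c_obs = c_true + rng.normal(0, 0.05, x.size)  # simulated remote-sensed data

# Batch least-squares processor: solve min ||A p - c_obs||^2
A = np.column_stack([np.exp(-x), x])          # model is linear in (a, b)
p_hat, *_ = np.linalg.lstsq(A, c_obs, rcond=None)
```

Repeating this fit while varying the sensor spacing, array size, or noise level is exactly how one maps estimation accuracy back to sensor requirements, as the abstract describes.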
Kalman filter data assimilation: Targeting observations and parameter estimation
Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
On a variational approach to some parameter estimation problems
NASA Technical Reports Server (NTRS)
Banks, H. T.
1985-01-01
Examples in which a variational setting can provide a convenient framework for convergence and stability arguments in parameter estimation problems are considered; these include 1-D seismic inversion, large flexible structures, bioturbation, and nonlinear population dispersal. One aspect of the problem considered is convergence and stability arguments, via a variational approach, for least squares formulations of parameter estimation problems for partial differential equations.
Simultaneous optimal experimental design for in vitro binding parameter estimation.
Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C
2013-10-01
Simultaneous optimization of in vitro ligand binding studies was performed using an optimal design software package that can incorporate multiple design variables through nonlinear mixed-effect models and provide a generally optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability, and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium, and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times, and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization, and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive, and self-learning features of EM, which are inspired by biological natural selection, mutation, and genetic inheritance. Its performance is demonstrated by numerical tests on the classic chaotic Lorenz equations (Lorenz 1963). The results indicate that the new method provides fast and effective parameter estimation irrespective of whether some or all parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Since noise is inevitable in observational data, the influence of observational noise on the performance of the presented method has also been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimates remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
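A minimal evolutionary-search sketch in the spirit of this abstract: candidate Lorenz parameters are mutated, and the fitter half (smallest trajectory misfit against the data) survives each generation. The Euler integrator, short horizon, population size, and mutation scale are illustrative assumptions, not the EM configuration of the paper.

```python
import numpy as np

def lorenz_x(sigma, rho=28.0, beta=8.0 / 3.0, dt=0.01, steps=100):
    """Euler-integrated Lorenz x-trajectory from a fixed initial condition."""
    x, y, z = 1.0, 1.0, 1.0
    out = []
    for _ in range(steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        out.append(x)
    return np.array(out)

def misfit(sigma, target):
    return np.sum((lorenz_x(sigma) - target) ** 2)

target = lorenz_x(10.0)              # "observations" generated with sigma = 10
rng = np.random.default_rng(3)
pop = rng.uniform(5.0, 15.0, 20)     # initial random population of sigma guesses

for _ in range(40):                  # mutate, then select the fitter half
    children = pop + rng.normal(0, 0.3, pop.size)
    both = np.concatenate([pop, children])
    pop = both[np.argsort([misfit(s, target) for s in both])[:20]]

sigma_hat = min(pop, key=lambda s: misfit(s, target))
```

A short fitting horizon is used deliberately: over long horizons chaotic divergence makes the misfit surface extremely rugged, which is part of why noise tolerance matters in this setting.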
Simultaneous estimation of parameters in the bivariate Emax model.
Magnusdottir, Bergrun T; Nyquist, Hans
2015-12-10
In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048
Generalized Limits for Single-Parameter Quantum Estimation
Boixo, Sergio; Flammia, Steven T.; Caves, Carlton M.; Geremia, JM
2007-03-02
We develop generalized bounds for quantum single-parameter estimation problems for which the coupling to the parameter is described by intrinsic multisystem interactions. For a Hamiltonian with k-system parameter-sensitive terms, the quantum limit scales as 1/N^k, where N is the number of systems. These quantum limits remain valid when the Hamiltonian is augmented by any parameter-independent interaction among the systems and when adaptive measurements via parameter-independent coupling to ancillas are allowed.
A comparison of approximate interval estimators for the Bernoulli parameter
NASA Technical Reports Server (NTRS)
Leemis, Lawrence; Trivedi, Kishor S.
1993-01-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
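The two approximations compared in the paper can be illustrated with simple closed-form intervals. The abstract does not give the exact interval constructions the authors used, so the normal-theory forms below (binomial-to-normal, and binomial-to-Poisson with a normal interval for the Poisson mean) are an assumption.

```python
import math

def bernoulli_cis(x, n, z=1.96):
    """Approximate 95% CIs for p from x successes in n trials."""
    p = x / n
    half_norm = z * math.sqrt(p * (1 - p) / n)   # binomial -> normal
    half_pois = z * math.sqrt(x) / n             # binomial -> Poisson(np)
    return ((p - half_norm, p + half_norm), (p - half_pois, p + half_pois))

(norm_lo, norm_hi), (pois_lo, pois_hi) = bernoulli_cis(x=8, n=100)
```

For small p the two intervals nearly coincide (since 1 - p is close to 1), which is the regime where the Poisson approximation is usually recommended; the Poisson-based interval is always slightly wider.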
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
A Fully Conditional Estimation Procedure for Rasch Model Parameters.
ERIC Educational Resources Information Center
Choppin, Bruce
A strategy for overcoming problems with the Rasch model's inability to handle missing data involves a pairwise algorithm which manipulates the data matrix to separate out the information needed for the estimation of item difficulty parameters in a test. The method of estimation compares two or three items at a time, separating out the ability…
Synchronization-based parameter estimation from time series
NASA Astrophysics Data System (ADS)
Parlitz, U.; Junge, L.; Kocarev, L.
1996-12-01
The parameters of a given (chaotic) dynamical model are estimated from scalar time series by adapting a computer model until it synchronizes with the given data. This parameter identification method is applied to numerically generated and experimental data from Chua's circuit.
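A discrete-map analogue of the synchronization idea: drive a model copy with the measured series and pick the parameter value at which the driven model's output agrees with the data, i.e. the synchronization error vanishes. The logistic map here is a hypothetical stand-in for the continuous-time Chua's circuit of the paper, and the grid scan stands in for the adaptive parameter update.

```python
import numpy as np

# Data from a logistic map x_{k+1} = a x_k (1 - x_k) with unknown a
a_true = 3.7
x = np.empty(200)
x[0] = 0.4
for k in range(199):
    x[k + 1] = a_true * x[k] * (1.0 - x[k])

# Drive a model copy with the measured series; adapt the parameter until
# the one-step synchronization error between model and data vanishes
grid = np.linspace(3.5, 3.9, 401)
errs = [np.sum((a * x[:-1] * (1 - x[:-1]) - x[1:]) ** 2) for a in grid]
a_hat = grid[int(np.argmin(errs))]
```

Because the model is driven by the data rather than run free, chaotic divergence never accumulates, which is what makes the error a smooth function of the parameter.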
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
Cosmological parameter estimation with free-form primordial power spectrum
NASA Astrophysics Data System (ADS)
Hazra, Dhiraj Kumar; Shafieloo, Arman; Souradeep, Tarun
2013-06-01
Constraints on the main cosmological parameters using cosmic microwave background (CMB) or large scale structure data are usually based on the power-law assumption of the primordial power spectrum (PPS). However, in the absence of a preferred model for the early Universe, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed power-law form of the PPS. In this paper, for the first time, we perform cosmological parameter estimation allowing a free form of the primordial spectrum. This is in fact the most general approach to estimating cosmological parameters without assuming any particular form for the primordial spectrum. We use a direct reconstruction of the PPS for any point in the cosmological parameter space using the recently modified Richardson-Lucy algorithm; however, other alternative reconstruction methods could be used for this purpose as well. We use WMAP 9-year data in our analysis, considering the CMB lensing effect, and we report, for the first time, that the flat spatial universe with no cosmological constant is ruled out at more than the 4σ confidence limit without assuming any particular form of the primordial spectrum. This is probably the most robust indication for dark energy using CMB data alone. Our results show that higher values of the baryonic and matter density and a lower value of the Hubble parameter (in comparison to the values estimated by assuming a power-law PPS) are preferred by the data. However, the cosmological parameters estimated by assuming a free form of the PPS overlap at the 1σ confidence level with the values estimated assuming the power-law form of the PPS.
Global parameter estimation methods for stochastic biochemical systems
2010-01-01
Background The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of the LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block that represents the estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Analyzing and constraining signaling networks: parameter estimation for the user.
Geier, Florian; Fengos, Georgios; Felizzi, Federico; Iber, Dagmar
2012-01-01
The behavior of most dynamical models not only depends on the wiring but also on the kind and strength of interactions which are reflected in the parameter values of the model. The predictive value of mathematical models therefore critically hinges on the quality of the parameter estimates. Constraining a dynamical model by an appropriate parameterization follows a 3-step process. In an initial step, it is important to evaluate the sensitivity of the parameters of the model with respect to the model output of interest. This analysis points at the identifiability of model parameters and can guide the design of experiments. In the second step, the actual fitting needs to be carried out. This step requires special care as, on the one hand, noisy as well as partial observations can corrupt the identification of system parameters. On the other hand, the solution of the dynamical system usually depends in a highly nonlinear fashion on its parameters and, as a consequence, parameter estimation procedures get easily trapped in local optima. Therefore any useful parameter estimation procedure has to be robust and efficient with respect to both challenges. In the final step, it is important to assess the validity of the optimized model. A number of reviews have been published on the subject. A good, nontechnical overview is provided by Jaqaman and Danuser (Nat Rev Mol Cell Biol 7(11):813-819, 2006) and a classical introduction, focusing on the algorithmic side, is given in Press (Numerical recipes: The art of scientific computing, Cambridge University Press, 3rd edn., 2007, Chapters 10 and 15). We will focus on the practical issues related to parameter estimation and use a model of the TGFβ-signaling pathway as an educative example. Corresponding parameter estimation software and models based on MATLAB code can be downloaded from the authors' web page ( http://www.bsse.ethz.ch/cobi ). PMID:23361979
Parameter estimation on gravitational waves from multiple coalescing binaries
Mandel, Ilya
2010-04-15
Future ground-based and space-borne interferometric gravitational-wave detectors may capture between tens and thousands of binary coalescence events per year. There is a significant and growing body of work on the estimation of astrophysically relevant parameters, such as masses and spins, from the gravitational-wave signature of a single event. This paper introduces a robust Bayesian framework for combining the parameter estimates for multiple events into a parameter distribution of the underlying event population. The framework can be readily deployed as a rapid post-processing tool.
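A toy version of combining per-event estimates into a population distribution: if each event yields a Gaussian parameter estimate and the population is itself Gaussian with known spread tau, the posterior for the population mean has a closed form. The numbers and the fixed-tau simplification are illustrative only, not the paper's full hierarchical Bayesian framework.

```python
import numpy as np

# Hypothetical per-event parameter estimates (posterior means and 1-sigma widths)
m = np.array([8.2, 10.1, 9.4, 11.0, 9.8])   # e.g. mass estimates per event
s = np.array([0.9, 1.1, 0.8, 1.3, 1.0])     # per-event measurement uncertainties

def population_mean_posterior(m, s, tau):
    """Gaussian population N(mu, tau^2): posterior for mu under a flat prior."""
    var = tau ** 2 + s ** 2          # each event scatters by tau and by noise
    w = 1.0 / var                    # inverse-variance weights
    mu_hat = np.sum(w * m) / np.sum(w)
    sigma_mu = np.sqrt(1.0 / np.sum(w))
    return mu_hat, sigma_mu

mu_hat, sigma_mu = population_mean_posterior(m, s, tau=0.5)
```

The key qualitative point survives the simplification: the population-level uncertainty shrinks below any single event's uncertainty as events accumulate.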
Projection filters for modal parameter estimate for flexible structures
NASA Technical Reports Server (NTRS)
Huang, Jen-Kuang; Chen, Chung-Wen
1987-01-01
Single-mode projection filters are developed for eigensystem parameter estimates from both analytical results and test data. Explicit formulations of these projection filters are derived using the pseudoinverse matrices of the controllability and observability matrices in general use. A global minimum optimization algorithm is developed to update the filter parameters using the interval analysis method. Modal parameters can be extracted and updated in the global sense within a specific region by passing the experimental data through the projection filters. To illustrate the method, a numerical example is shown using a one-dimensional global optimization algorithm to estimate modal frequencies and dampings.
Estimation of nonlinear pilot model parameters including time delay.
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.; Wells, W. R.
1972-01-01
Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
LIKELIHOOD OF THE POWER SPECTRUM IN COSMOLOGICAL PARAMETER ESTIMATION
Sun, Lei; Wang, Qiao; Zhan, Hu
2013-11-01
The likelihood function is a crucial element of parameter estimation. In analyses of galaxy overdensities and weak lensing shear, one often approximates the likelihood of the power spectrum with a Gaussian distribution. The posterior probability derived from such a likelihood deviates considerably from the exact posterior on the largest scales probed by any survey, where the central limit theorem does not apply. We show that various forms of Gaussian likelihoods can have a significant impact on the estimation of the primordial non-Gaussianity parameter f_NL from the galaxy angular power spectrum. The Gaussian plus log-normal likelihood, which has been applied successfully in analyses of the cosmic microwave background, outperforms the Gaussian likelihoods. Nevertheless, even if the exact likelihood of the power spectrum is used, the estimated parameters may be still biased. As such, the likelihoods and estimators need to be thoroughly examined for potential systematic errors.
Iterative methods for distributed parameter estimation in parabolic PDE
Vogel, C.R.; Wade, J.G.
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
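The windowed-estimation idea can be sketched in its simplest form: a sliding-window least-squares fit of a time-varying gain. The paper's windowed maximum likelihood estimation of full pilot models reduces to this only in the static-gain, Gaussian-noise special case, and all signals below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
t = np.arange(n)
gain = 1.0 + 0.5 * (t >= 200)          # operator gain steps up mid-run
u = rng.normal(0, 1, n)                # input signal (white, for identifiability)
y = gain * u + rng.normal(0, 0.1, n)   # response plus remnant-like noise

def windowed_gain(u, y, width=50):
    """Sliding-window least-squares estimate of a time-varying gain."""
    est = np.full(u.size, np.nan)
    for k in range(width, u.size):
        uu, yy = u[k - width:k], y[k - width:k]
        est[k] = np.dot(uu, yy) / np.dot(uu, uu)   # scalar LS solution
    return est

g = windowed_gain(u, y)
```

The window width sets the same trade-off the abstract identifies for maximum likelihood: long windows average out remnant but blur fast changes in control behavior, short windows track fast changes but are noisier.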
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Simultaneous parameter and state estimation of shear buildings
NASA Astrophysics Data System (ADS)
Concha, Antonio; Alvarez-Icaza, Luis; Garrido, Rubén
2016-03-01
This paper proposes an adaptive observer that simultaneously estimates the damping/mass and stiffness/mass ratios, and the state, of a seismically excited building. The adaptive observer uses only acceleration measurements of the ground and floors for both parameter and state estimation; it identifies all the parameter ratios, velocities and displacements of the structure if all the floors are instrumented; and it also estimates the state and the damping/mass and stiffness/mass ratios of a reduced model of the building if only some floors are equipped with accelerometers. This observer does not resort to any particular canonical form and employs the Least Squares (LS) algorithm and a Luenberger state estimator. The LS method is combined with a smooth parameter projection technique that provides only positive estimates, which are employed by the state estimator. Boundedness of the estimates produced by the LS algorithm does not depend on the boundedness of the state estimates. Moreover, the LS method uses a parametrization based on Linear Integral Filters that eliminates offsets in the acceleration measurements in finite time and attenuates high-frequency measurement noise. Experimental results obtained using a reduced-scale five-story structure confirm the effectiveness of the proposed adaptive observer.
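The combination of least squares with a positivity projection described above can be sketched as follows. This is an illustrative recursive least-squares update on a toy two-parameter regression, not the authors' observer; the regressors, true values, and the crude max-based projection are all invented for the example.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive least-squares update for y ~ phi @ theta,
    followed by a crude projection that keeps the estimates positive."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (1.0 + (phi.T @ P @ phi).item())   # gain vector
    theta = theta + K.ravel() * (y - (phi.T @ theta).item())
    P = P - K @ phi.T @ P                            # covariance update
    return np.maximum(theta, 1e-8), P                # positivity projection

# Toy identification of two positive ratios a, b from y = a*x1 + b*x2
# (loosely analogous to stiffness/mass and damping/mass ratios).
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] + 0.5 * phi[1]                  # true a = 2.0, b = 0.5
    theta, P = rls_step(theta, P, phi, y)
```

In this noiseless toy problem the projected estimates settle close to the true positive values; the projection only intervenes when an intermediate estimate dips below zero.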
Bayesian auxiliary particle filters for estimating neural tuning parameters.
Mountney, John; Sobel, Marc; Obeid, Iyad
2009-01-01
A common challenge in neural engineering is to track the dynamic parameters of neural tuning functions. This work introduces the application of Bayesian auxiliary particle filters for this purpose. Based on Monte-Carlo filtering, Bayesian auxiliary particle filters use adaptive methods to model the prior densities of the state parameters being tracked. The observations used are the neural firing times, modeled here as a Poisson process, and the biological driving signal. The Bayesian auxiliary particle filter was evaluated by simultaneously tracking the three parameters of a hippocampal place cell and compared to a stochastic state point process filter. It is shown that Bayesian auxiliary particle filters are substantially more accurate and robust than alternative methods of state parameter estimation. The effects of time-averaging on parameter estimation are also evaluated. PMID:19963911
Estimation of Accumulation Parameters for Urban Runoff Quality Modeling
NASA Astrophysics Data System (ADS)
Alley, William M.; Smith, Peter E.
1981-12-01
Many recently developed watershed models utilize accumulation and washoff equations to simulate the quality of runoff from urban impervious areas. These models often have been calibrated by trial and error and with little understanding of model sensitivity to the various parameters. Methodologies for estimating best-fit values of the washoff parameters commonly used in these models have been presented previously. In this paper, parameter identification techniques for estimating the accumulation parameters from measured runoff quality data are presented along with a sensitivity analysis of the parameters. Results from application of the techniques and the sensitivity analysis suggest a need for data quantifying the magnitude and identifying the shape of constituent accumulation curves. An exponential accumulation curve is shown to be more general than the linear accumulation curves used in most urban runoff quality models. When determining accumulation rates, attention needs to be given to the effects of residual amounts of constituents remaining after the previous period of storm runoff or street sweeping.
Single-tone parameter estimation from discrete-time observations
NASA Technical Reports Server (NTRS)
Rife, D. C.; Boorstyn, R. R.
1974-01-01
Estimation of the parameters of a single-frequency complex tone from a finite number of noisy discrete-time observations is discussed. The appropriate Cramer-Rao bounds and maximum-likelihood (ML) estimation algorithms are derived. Some properties of the ML estimators are proved. The relationship of ML estimation to the discrete Fourier transform is exploited to obtain practical algorithms. The threshold effect of one algorithm is analyzed and compared to simulation results. Other simulation results verify other aspects of the analysis.
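The DFT link mentioned in the abstract underlies a simple practical estimator: the frequency of the periodogram peak, computed with a zero-padded FFT, is a coarse approximation to the ML frequency estimate for a single complex tone. A minimal sketch with an invented sampling setup:

```python
import numpy as np

def tone_freq_estimate(x, fs, pad=8):
    """Coarse ML frequency estimate: peak of a zero-padded periodogram."""
    n = pad * len(x)
    k = np.argmax(np.abs(np.fft.fft(x, n=n)))        # periodogram peak bin
    f = k * fs / n
    return f if f <= fs / 2 else f - fs              # map to [-fs/2, fs/2)

fs, f0, N = 100.0, 12.3, 256                         # invented sampling setup
t = np.arange(N) / fs
x = np.exp(2j * np.pi * f0 * t)                      # noiseless complex tone
f_hat = tone_freq_estimate(x, fs)                    # within one padded bin of f0
```

In practice the coarse peak would be refined by a local search (e.g. interpolation or Newton steps on the periodogram), which is where the threshold effect discussed in the paper appears.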
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting of volatility for a sample of financial data from stock and foreign exchange markets. PMID:22680545
Assumptions of the primordial spectrum and cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Shafieloo, Arman; Souradeep, Tarun
2011-10-01
The observables of the perturbed universe, cosmic microwave background (CMB) anisotropy and large structures depend on a set of cosmological parameters, as well as the assumed nature of primordial perturbations. In particular, the shape of the primordial power spectrum (PPS) is, at best, a well-motivated assumption. It is known that the assumed functional form of the PPS in cosmological parameter estimation can affect the best-fit-parameters and their relative confidence limits. In this paper, we demonstrate that a specific assumed form actually drives the best-fit parameters into distinct basins of likelihood in the space of cosmological parameters where the likelihood resists improvement via modifications to the PPS. The regions where considerably better likelihoods are obtained allowing free-form PPS lie outside these basins. In the absence of a preferred model of inflation, this raises a concern that current cosmological parameter estimates are strongly prejudiced by the assumed form of PPS. Our results strongly motivate approaches toward simultaneous estimation of the cosmological parameters and the shape of the primordial spectrum from upcoming cosmological data. It is equally important for theorists to keep an open mind towards early universe scenarios that produce features in the PPS.
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
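For a static parameter vector, combining independent estimates with an information filter amounts to adding information (inverse-covariance) matrices and information vectors, then inverting. A toy sketch with synthetic two-parameter estimates, not the geomagnetic models themselves:

```python
import numpy as np

def fuse(estimates, covariances):
    """Combine independent Gaussian estimates by adding information."""
    I_tot = np.zeros_like(covariances[0])
    i_tot = np.zeros_like(estimates[0])
    for x, C in zip(estimates, covariances):
        I = np.linalg.inv(C)                 # information matrix
        I_tot += I
        i_tot += I @ x                       # information vector
    C_fused = np.linalg.inv(I_tot)
    return C_fused @ i_tot, C_fused

# Two synthetic 2-parameter field estimates with equal covariance;
# the fused estimate is their information-weighted mean.
x_fused, C_fused = fuse(
    [np.array([1.0, 2.0]), np.array([1.2, 1.8])],
    [0.5 * np.eye(2), 0.5 * np.eye(2)],
)
```

Working with information matrices avoids inverting a possibly singular covariance when a model constrains only some parameter directions, which is one reason the information form is preferred for combining partial models.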
Compaction parameter estimation using surface movement data in Southern Flevoland
NASA Astrophysics Data System (ADS)
Fokker, P. A.; Gunnink, J.; de Lange, G.; Leeuwenburgh, O.; van der Veer, E. F.
2015-11-01
The Southern part of the Flevopolder has shown considerable subsidence since its reclamation in 1967. We have set up an integrated method that uses subsidence data, water level data and forward models for compaction, oxidation and the resulting subsidence to estimate the driving parameters. Our procedure, an Ensemble Smoother with Multiple Data Assimilation, is very fast and gives insight into the variability of the estimated parameters and the correlations between them. We used two forward models: the Koppejan model and the Bjerrum model. At first inspection, the Bjerrum model seems to perform better than the Koppejan model. This must, however, be corroborated with more elaborate parameter estimation exercises in which, in particular, the water level development is taken into account.
Estimation of Dynamical Parameters in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark O.
2004-01-01
In this study a new technique is used to derive dynamical parameters out of atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi- channel (e.g. multi trace gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolution in time and space and different dimensions.
Least-squares estimation of batch culture kinetic parameters.
Ong, S L
1983-10-01
This article concerns the development of a simple and effective least-squares procedure for estimating the kinetic parameters in Monod expressions from batch culture data. The basic approach employed in this work was to translate the problem of parameter estimation into a mathematical model containing a single decision variable. The resulting model was then solved by an efficient one-dimensional search algorithm which can be adapted to any microcomputer or advanced programmable calculator. The procedure was tested on synthetic data (substrate concentrations) with different types and levels of error. The effect of endogenous respiration on the estimated values of the kinetic parameters was also assessed. From the results of these analyses the least-squares procedure developed was concluded to be very effective. PMID:18548565
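The reduction to a single decision variable can be illustrated on a simplified Monod-type rate law v = Vmax*S/(Ks + S): for any trial Ks the linear parameter Vmax has a closed-form least-squares solution, leaving a one-dimensional search over Ks alone. The sketch below uses golden-section search and synthetic noiseless data, assuming the residual is unimodal in Ks; it is not the article's exact batch-culture formulation.

```python
import numpy as np

def fit_monod(S, v, ks_hi=10.0, iters=80):
    """Fit v = Vmax*S/(Ks+S): golden-section search over Ks, with Vmax
    solved in closed form by linear least squares at each candidate Ks."""
    def sse(Ks):
        g = S / (Ks + S)
        Vmax = (g @ v) / (g @ g)             # closed-form LS for Vmax
        r = v - Vmax * g
        return r @ r, Vmax
    ratio = (5 ** 0.5 - 1) / 2               # golden-ratio conjugate
    a, b = 1e-6, ks_hi
    for _ in range(iters):
        c, d = b - ratio * (b - a), a + ratio * (b - a)
        if sse(c)[0] < sse(d)[0]:
            b = d
        else:
            a = c
    Ks = 0.5 * (a + b)
    return sse(Ks)[1], Ks

S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 8.0])  # synthetic substrate levels
v = 3.0 * S / (0.8 + S)                       # synthetic rates: Vmax=3, Ks=0.8
Vmax_hat, Ks_hat = fit_monod(S, v)
```

A one-dimensional bracketing search of this kind is exactly the sort of routine that fits on "any microcomputer or advanced programmable calculator", since it needs only repeated evaluation of a scalar cost.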
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated- annealing, Monte-Carlo, or random- walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
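A minimal conventional SA loop of the kind summarized above, with a random start, temperature-gated acceptance of worse configurations, and a search region that shrinks as the temperature cools, might look like this. The objective, schedule, and constants are illustrative; this is not the RBSA implementation.

```python
import math
import random

def simulated_annealing(f, lo, hi, steps=5000, seed=1):
    """Conventional SA: random start, temperature-gated acceptance,
    and a proposal radius that shrinks as the temperature cools."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)                  # random starting configuration
    best_x, best_f = x, f(x)
    for i in range(steps):
        T = (1.0 - i / steps) + 1e-9         # linear cooling schedule
        radius = (hi - lo) * T               # shrinking search region
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        df = f(cand) - f(x)
        # always accept improvements; accept worse moves with Boltzmann prob.
        if df < 0 or rng.random() < math.exp(-df / T):
            x = cand
            if f(x) < best_f:
                best_x, best_f = x, f(x)
    return best_x, best_f

# Toy continuous objective with minimum at x = 2.
xb, fb = simulated_annealing(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```

The recursive-branching variant described above parallelizes many such searches over subregions of the parameter space rather than running a single chain.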
Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
AMT-200S Motor Glider Parameter and Performance Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2011-01-01
Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
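The αβ step referred to above is commonly implemented as the amplitude-invariant Clarke transform, which maps a balanced three-phase set onto an orthogonal cosine/sine pair; a short sketch (signal values are synthetic):

```python
import numpy as np

def clarke(a, b, c):
    """Amplitude-invariant Clarke (alpha-beta) transform."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (1.0 / np.sqrt(3.0)) * (b - c)
    return alpha, beta

theta = np.linspace(0.0, 2.0 * np.pi, 100)
a = np.cos(theta)                            # balanced three-phase set
b = np.cos(theta - 2.0 * np.pi / 3.0)
c = np.cos(theta + 2.0 * np.pi / 3.0)
alpha, beta = clarke(a, b, c)                # orthogonal pair: cos, sin
```

Under unbalance the pair alpha + j*beta is no longer a single rotating phasor, which is what makes the subsequent NLS fit of frequency, phases, and voltages nontrivial.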
ERIC Educational Resources Information Center
Zickar, Michael J.; Ury, Karen L.
2002-01-01
Attempted to relate content features of personality items to item parameter estimates from the partial credit model of E. Muraki (1990) by administering the Adjective Checklist (L. Goldberg, 1992) to 329 undergraduates. As predicted, the discrimination parameter was related to the item subtlety ratings of personality items but the level of word…
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
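The regression step can be sketched as ordinary least squares of logKow on descriptor columns; the design matrix and coefficients below are synthetic stand-ins, not the LSER training data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
# Intercept plus three invented LSER-style descriptor columns.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
true_beta = np.array([0.2, 1.5, -0.8, 0.4])       # invented coefficients
y = X @ true_beta                                 # noiseless synthetic logKow
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
```

With noiseless synthetic data the fit recovers the coefficients exactly; on real descriptor data the residual standard deviation (0.49 log units in the paper) sets the accuracy of any predicted logKow.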
Inversion of canopy reflectance models for estimation of vegetation parameters
NASA Technical Reports Server (NTRS)
Goel, Narendra S.
1987-01-01
One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.
Maximum likelihood estimation for distributed parameter models of flexible spacecraft
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Williams, J. L.
1989-01-01
A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.
Chaos synchronization and parameter estimation from a scalar output signal.
Chen, Maoyin; Kurths, Jürgen
2007-08-01
We propose an observer-based approach for chaos synchronization and parameter estimation from a scalar output signal. To begin with, we use geometric control to transform the master system into a standard form with zero dynamics. Then we construct a slave system to synchronize with the master using a combination of sliding mode control and linear feedback control. Within a finite time, partial synchronization is realized, which further results in complete synchronization as time tends to infinity. Even if there exists model uncertainty in the slave system, we can also estimate the unknown model parameter by a simple adaptive rule. PMID:17930180
Estimation of the elastic Earth parameters from the SLR technique
NASA Astrophysics Data System (ADS)
Rutkowska, Milena
The global elastic parameters (Love and Shida numbers) associated with tide variations for the satellite and stations are estimated from Satellite Laser Ranging (SLR) data. The study is based on satellite observations taken by the global network of ground stations during the period from January 1, 2005 until January 1, 2007 for monthly orbital arcs of the Lageos 1 satellite. The observation equations contain unknowns for the orbital arcs, some constants, and the elastic Earth parameters which describe tide variations. The adjusted values are discussed and compared with geophysical estimates of Love numbers. All computations were performed employing the NASA software GEODYN II (Eddy et al. 1990).
Estimation of effective hydrogeological parameters in heterogeneous and anisotropic aquifers
NASA Astrophysics Data System (ADS)
Lin, Hsien-Tsung; Tan, Yih-Chi; Chen, Chu-Hui; Yu, Hwa-Lung; Wu, Shih-Ching; Ke, Kai-Yuan
2010-07-01
Obtaining reasonable hydrological input parameters is a key challenge in groundwater modeling. Analysis of temporal evolution during pump-induced drawdown is one common approach used to estimate the effective transmissivity and storage coefficients in a heterogeneous aquifer. In this study, we propose a Modified Tabu search Method (MTM), an improvement that combines the Tabu Search (TS) with the Adjoint State Method (ASM) developed by Tan et al. (2008). The latter is employed to estimate effective parameters for anisotropic, heterogeneous aquifers. MTM is validated by several numerical pumping tests. Comparisons are made to other well-known techniques, such as the type-curve method (TCM) and the straight-line method (SLM), to provide insight into the challenge of determining the most effective parameters for an anisotropic, heterogeneous aquifer. The results reveal that MTM can efficiently obtain the best representative and effective aquifer parameters in terms of the least mean square errors of the drawdown estimations. The use of MTM may involve fewer artificial errors than occur with TCM and SLM, and lead to better solutions. Therefore, effective transmissivity is more likely to be composed of the geometric mean of all transmissivities within the cone of depression based on a precise estimation by MTM. Further investigation into the applicability of MTM shows that a higher level of heterogeneity in an aquifer can induce uncertainty in the estimations, while changes in correlation length will affect the accuracy of MTM only once the degree of heterogeneity has also risen.
Adjustment of Sensor Locations During Thermal Property Parameter Estimation
NASA Technical Reports Server (NTRS)
Milos, Frank S.; Marschall, Jochen; Rasky, Daniel J. (Technical Monitor)
1996-01-01
The temperature-dependent thermal properties of a material may be evaluated from transient temperature histories using nonlinear parameter estimation techniques. The usual approach is to minimize the sum of the squared errors between measured and calculated temperatures at specific locations in the body. Temperature measurements are usually made with thermocouples, and it is customary to take thermocouple locations as known and fixed during parameter estimation computations. In fact, thermocouple locations are never known exactly. Location errors on the order of the thermocouple wire diameter are intrinsic to most common instrumentation procedures (e.g., inserting a thermocouple into a drilled hole), and additional errors can be expected for delicate materials, difficult installations, large thermocouple beads, etc. Thermocouple location errors are especially significant when estimating thermal properties of low-diffusivity materials, which can sustain large temperature gradients during testing. In the present work, a parameter estimation formulation is presented which allows for the direct inclusion of thermocouple positions into the primary parameter estimation procedure. It is straightforward to set bounds on thermocouple locations which exclude non-physical locations and are consistent with installation tolerances. Furthermore, bounds may be tightened to an extent consistent with any independent verification of thermocouple location, such as x-raying, and so the procedure is entirely consonant with experimental information. A mathematical outline of the procedure is given and its implementation is illustrated through numerical examples characteristic of light-weight, high-temperature ceramic insulation during transient heating. The efficacy of the procedure and the errors associated with it are discussed.
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Hansen, Clifford
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
Parameter variability estimation using stochastic response surface model updating
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2014-12-01
From a practical point of view, uncertainties existing in structural parameters and measurements must be handled in order to provide reliable structural condition evaluations. In this setting, deterministic model updating loses its practicality, and a stochastic updating procedure should be employed to seek the statistical properties of parameters and responses. This topic has not yet been well investigated on account of its greater complexity in theoretical configuration and the difficulty of solving the inverse problem once uncertainty analyses are involved. Accordingly, this paper develops a stochastic model updating method for parameter variability estimation. Uncertain parameters and responses are correlated through stochastic response surface models, which are actually explicit polynomial chaos expansions based on Hermite polynomials. Then, by establishing a stochastic inverse problem, parameter means and standard deviations are updated in a separate and successive way. For the purposes of problem simplification and optimization efficiency, in each updating iteration the stochastic response surface models are reconstructed to avoid the construction and analysis of sensitivity matrices. Meanwhile, in the interest of investigating the effects of parameter variability on responses, a parameter sensitivity analysis method has been developed based on the derivation of the polynomial chaos expansions. Lastly, the feasibility and reliability of the proposed methods have been validated using a numerical beam and then a set of nominally identical metal plates. After comparison with a perturbation method, it is found that the proposed method can estimate parameter variability with satisfactory accuracy and that the complexity of the inverse problem can be greatly reduced, resulting in cost-efficient optimization.
Parameter estimation for the Euler-Bernoulli-beam
NASA Technical Reports Server (NTRS)
Graif, E.; Kunisch, K.
1984-01-01
An approximation involving cubic spline functions for parameter estimation problems in the Euler-Bernoulli-beam equation (phrased as an optimization problem with respect to the parameters) is described and convergence is proved. The resulting algorithm was implemented and several of the test examples are documented. It is observed that the use of penalty terms in the cost functional can improve the rate of convergence.
Human ECG signal parameters estimation during controlled physical activity
NASA Astrophysics Data System (ADS)
Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz
2015-09-01
ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique that combines the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatially varying absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, and cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field from the measured ultrasound data via acoustic inverse problem approaches. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously, together with optical parameter models of tissue, to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described by a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (the power law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with the approach.
Recursive Objects--An Object Oriented Presentation of Recursion
ERIC Educational Resources Information Center
Sher, David B.
2004-01-01
Generally, when recursion is introduced to students the concept is illustrated with a toy (Towers of Hanoi) and some abstract mathematical functions (factorial, power, Fibonacci). These illustrate recursion in the same sense that counting to 10 can be used to illustrate a for loop. These are all good illustrations, but do not represent serious…
SCoPE: an efficient method of Cosmological Parameter Estimation
Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in
2014-07-01
The Markov chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching to let an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to compute and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess how well the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis on the one hand help us understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
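Delayed rejection, one of the ingredients SCoPE uses to raise the acceptance rate, can be sketched for a one-dimensional Gaussian target: when a bold first proposal is rejected, a more cautious second proposal is tried with a corrected acceptance probability that preserves the target distribution. The target, proposal scales, and chain length below are illustrative assumptions, not SCoPE's implementation.

```python
import numpy as np

def log_target(x):
    return -0.5 * x**2                                 # standard normal, up to a constant

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def dr_metropolis(n_steps, x0=0.0, s1=5.0, s2=1.0, seed=0):
    """Metropolis sampler with one stage of delayed rejection."""
    rng = np.random.default_rng(seed)
    x, chain, accepted = x0, [], 0
    for _ in range(n_steps):
        y1 = x + s1 * rng.normal()                     # bold first proposal
        a1 = min(1.0, np.exp(log_target(y1) - log_target(x)))
        if rng.random() < a1:
            x, accepted = y1, accepted + 1
        else:
            y2 = x + s2 * rng.normal()                 # cautious second proposal
            # Reverse-path first-stage acceptance, needed for detailed balance:
            a1_rev = min(1.0, np.exp(log_target(y1) - log_target(y2)))
            num = np.exp(log_target(y2)) * norm_pdf(y1, y2, s1) * (1 - a1_rev)
            den = np.exp(log_target(x)) * norm_pdf(y1, x, s1) * (1 - a1)
            if den > 0 and rng.random() < min(1.0, num / den):
                x, accepted = y2, accepted + 1
        chain.append(x)
    return np.array(chain), accepted / n_steps

chain, acc = dr_metropolis(20000)
print(round(chain.mean(), 2), round(chain.std(), 2), round(acc, 2))
```

The second stage rescues many moves a plain Metropolis sampler would reject, which is how delayed rejection raises the per-step acceptance probability without shrinking the first-stage proposal.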
Simunek, J.; Nimmo, J.R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for estimating soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of the transient experiments. The inverse method was then evaluated by comparing the estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field. Copyright 2005 by the American Geophysical Union.
Mean-Field Analysis of Recursive Entropic Segmentation of Biological Sequences
NASA Astrophysics Data System (ADS)
Cheong, Siew-Ann; Stodghill, Paul; Schneider, David; Myers, Christopher
2007-03-01
Horizontal gene transfer in bacteria results in genomic sequences which are mosaic in nature. An important first step in the analysis of a bacterial genome would thus be to model the statistically nonstationary nucleotide or protein sequence with a collection of P stationary Markov chains, and partition the sequence of length N into M statistically stationary segments/domains. This can be done for Markov chains of order K = 0 using a recursive segmentation scheme based on the Jensen-Shannon divergence, where the unknown parameters P and M are estimated from a hypothesis testing/model selection process. In this talk, we describe how the Jensen-Shannon divergence can be generalized to Markov chains of order K > 0, as well as an algorithm optimizing the positions of a fixed number of domain walls. We then describe a mean field analysis of the generalized recursive Jensen-Shannon segmentation scheme, and show how most domain walls appear as local maxima in the divergence spectrum of the sequence, before highlighting the main problem associated with the recursive segmentation scheme, i.e. the strengths of the domain walls selected recursively do not decrease monotonically. This problem is especially severe in repetitive sequences, whose statistical signatures we will also discuss.
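A minimal sketch of the order K = 0 recursive Jensen-Shannon segmentation on a binary sequence follows. A fixed divergence threshold stands in for the paper's hypothesis-testing/model-selection step, and the synthetic two-domain sequence is an assumption for illustration.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def js_spectrum(seq, alphabet=2):
    """Jensen-Shannon divergence between left/right halves at every cut point."""
    n = len(seq)
    left = np.eye(alphabet)[seq].cumsum(axis=0)        # symbol counts in seq[:i]
    total = left[-1]
    div = np.zeros(n)
    for i in range(1, n - 1):
        l, r = left[i - 1], total - left[i - 1]
        div[i] = entropy(total) - (i / n) * entropy(l) - ((n - i) / n) * entropy(r)
    return div

def segment(seq, lo, hi, threshold, walls):
    """Recursively place domain walls at divergence maxima above threshold."""
    div = js_spectrum(seq[lo:hi])
    if len(div) == 0 or div.max() < threshold:
        return
    cut = lo + int(div.argmax())
    walls.append(cut)
    segment(seq, lo, cut, threshold, walls)
    segment(seq, cut, hi, threshold, walls)

rng = np.random.default_rng(1)
seq = np.concatenate([rng.binomial(1, 0.1, 400),       # two stationary domains
                      rng.binomial(1, 0.9, 400)])
walls = []
segment(seq, 0, len(seq), threshold=0.1, walls=walls)
print(sorted(walls))
```

The true domain wall at position 400 appears as the global maximum of the divergence spectrum; the non-monotone wall strengths that the abstract highlights arise because each recursive call recomputes the spectrum only within its own subsegment.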
A parameter estimation framework for patient-specific hemodynamic computations
NASA Astrophysics Data System (ADS)
Itu, Lucian; Sharma, Puneet; Passerini, Tiziano; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin
2015-01-01
We propose a fully automated parameter estimation framework for performing patient-specific hemodynamic computations in arterial models. To determine the personalized values of the Windkessel models, which are used as part of the geometrical multiscale circulation model, a parameter estimation problem is formulated. Clinical measurements of pressure and/or flow rate are imposed as constraints to formulate a nonlinear system of equations, whose fixed-point solution is sought. A key feature of the proposed method is a warm start to the optimization procedure, providing a better initial solution for the nonlinear system of equations and thus reducing the number of iterations needed to calibrate the geometrical multiscale models. To achieve these goals, the initial solution, computed with a lumped parameter model, is adapted before solving the parameter estimation problem for the geometrical multiscale circulation model: the resistance and the compliance of the circulation model are estimated and compensated. The proposed framework is evaluated on a patient-specific aortic model, a full-body arterial model, and multiple idealized anatomical models representing different arterial segments. For each case it leads to the best performance in terms of the number of iterations required for the computational model to be in close agreement with the clinical measurements.
Parameter Estimates in Differential Equation Models for Population Growth
ERIC Educational Resources Information Center
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
Online vegetation parameter estimation using passive microwave remote sensing observations
Technology Transfer Automated Retrieval System (TEKTRAN)
In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...
Loss of Information in Estimating Item Parameters in Incomplete Designs
ERIC Educational Resources Information Center
Eggen, Theo J. H. M.; Verelst, Norman D.
2006-01-01
In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…
Estimability of Parameters in the Generalized Graded Unfolding Model.
ERIC Educational Resources Information Center
Roberts, James S.; Donoghue, John R.; Laughlin, James E.
The generalized graded unfolding model (GGUM) (J. Roberts, J. Donoghue, and J. Laughlin, 1998) is an item response theory model designed to analyze binary or graded responses that are based on a proximity relation. The purpose of this study was to assess conditions under which item parameter estimation accuracy increases or decreases, with special…
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
NASA Astrophysics Data System (ADS)
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used to filter the noisy measurement waveforms. Calculation results are given for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Matched filtering and parameter estimation of ringdown waveforms
Berti, Emanuele; Cardoso, Jaime; Cardoso, Vitor; Cavaglia, Marco
2007-11-15
Using recent results from numerical relativity simulations of nonspinning binary black hole mergers, we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to ~10^3 solar masses out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (>10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than ~10^6 templates would be needed for a single-stage multimode search. Therefore, we recommend a "two-stage" search to save on computational costs: single-mode templates can be used for detection, but multimode templates or Prony methods should be used to estimate parameters once a detection has been made. We update estimates of the critical signal-to-noise ratio required to test the hypothesis that two or more modes are present in the signal and to resolve their frequencies, showing that second-generation Earth-based detectors and LISA have the potential to perform no-hair tests.
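Single-mode matched filtering of the kind used in the first detection stage can be sketched as follows. The damped-sinusoid template, sampling rate, injected amplitude, and white (rather than colored detector) noise are illustrative assumptions.

```python
import numpy as np

fs, T = 4096.0, 1.0                         # sampling rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)

def ringdown(f, tau):
    """Single-mode ringdown template: exponentially damped sinusoid."""
    return np.exp(-t / tau) * np.sin(2 * np.pi * f * t)

rng = np.random.default_rng(2)
f_true, tau_true, sigma = 250.0, 0.05, 1.0
data = 10.0 * ringdown(f_true, tau_true) + sigma * rng.normal(size=t.size)

# Matched-filter signal-to-noise ratio over a bank of frequency templates
# (white noise, so the inner product reduces to a plain dot product):
freqs = np.arange(100.0, 400.0, 1.0)
snrs = np.array([np.dot(data, ringdown(f, tau_true))
                 / (sigma * np.sqrt(np.dot(ringdown(f, tau_true),
                                           ringdown(f, tau_true))))
                 for f in freqs])
f_best = freqs[snrs.argmax()]
print(f_best, round(snrs.max(), 1))
```

Here only the frequency is searched; a realistic bank would also cover damping time and mass/spin, which is why the abstract's multimode template count grows so quickly.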
Inverse estimation of parameters for an estuarine eutrophication model
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilating concentration data for these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. Numerical experiments with short-period model simulations using different hypothetical data sets and long-period model simulations using limited hypothetical data sets demonstrated that the inverse model can satisfactorily estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing important questions such as the uniqueness of the parameter estimation and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. The two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
A parameter identifiability and estimation study in Yesilirmak River.
Berber, R; Yuceer, M; Karadurmus, E
2009-01-01
Water quality models have a relatively large number of parameters, which need to be estimated against observed data, a non-trivial task associated with substantial difficulties. This work involves a systematic model calibration and validation study for river water quality. The model considered was composed of dynamic mass balances for eleven pollution constituents, stemming from the QUAL2E water quality model, treating a river segment as a series of continuous stirred-tank reactors (CSTRs). Parameter identifiability was analyzed from the perspective of a sensitivity measure and a collinearity index, which indicated that 8 parameters would fall within the identifiability range. The model parameters were then estimated by an integration-based optimization algorithm coupled with sequential quadratic programming. Dynamic field data consisting of major pollutant concentrations were collected from sampling stations along the Yesilirmak River around the city of Amasya in Turkey and compared with model predictions. The calibrated model responses were in good agreement with the observed river water quality data, indicating that the suggested procedure provides an effective means for reliable estimation of model parameters and dynamic simulation of river streams. PMID:19214006
Estimation of rice biophysical parameters using multitemporal RADARSAT-2 images
NASA Astrophysics Data System (ADS)
Li, S.; Ni, P.; Cui, G.; He, P.; Liu, H.; Li, L.; Liang, Z.
2016-04-01
Compared with optical sensors, synthetic aperture radar (SAR) has the capability of acquiring images in all-weather conditions. SAR images are therefore well suited to rice growth regions, which are characterized by frequent cloud cover and rain. The objective of this paper was to evaluate the feasibility of estimating rice biophysical parameters using multitemporal RADARSAT-2 images and to develop the estimation models. Three RADARSAT-2 images were acquired during the critical rice growth stages in 2014 near Meishan, Sichuan province, Southwest China. Leaf area index (LAI), the fraction of photosynthetically active radiation (FPAR), height, biomass, and canopy water content (WC) were observed at 30 experimental plots over 5 periods. The relationships between RADARSAT-2 backscattering coefficients (σ0) or their ratios and the rice biophysical parameters were analysed. These biophysical parameters were significantly and consistently correlated with the ratio of the VV and VH backscattering coefficients (σ0VV/σ0VH) throughout all growth stages. Regression models were developed between the biophysical parameters and σ0VV/σ0VH. The results suggest that RADARSAT-2 data have great potential for estimating rice biophysical parameters and for timely monitoring of rice growth.
Parameter estimation of an air-bearing suspended test table
NASA Astrophysics Data System (ADS)
Fu, Zhenxian; Lin, Yurong; Liu, Yang; Chen, Xinglin; Chen, Fang
2015-02-01
A parameter estimation approach is proposed for determining the parameters of a 3-axis air-bearing suspended test table. The table is to provide a balanced and frictionless environment for spacecraft ground tests. To balance the suspension, the mechanical parameters of the table, including its angular inertias and the deviation of its centroid from its rotating center, have to be determined first. Sliding masses on the table can then be adjusted by stepper motors to relocate the centroid of the table to its rotating center. Using the angular momentum theorem and the Coriolis theorem, dynamic equations are derived describing the rotation of the table under the influence of the gravity imbalance torque and the actuating torques. To generate the actuating torques, the use of momentum wheels is proposed. Their virtue is that no active control of the momentum wheels is required; they merely have to spin at constant rates, thus avoiding the singularity problem and the difficulty of precisely adjusting the output torques, issues associated with control moment gyros. The gyroscopic torques generated by the momentum wheels, as they are forced by the table to precess, are sufficient to actuate the table for parameter estimation. Least-squares estimation is then employed to calculate the desired parameters. The effectiveness of the method is validated by simulation.
Estimation of uncertain material parameters using modal test data
Veers, P.S.; Laird, D.L.; Carne, T.G.; Sagartz, M.J.
1997-11-01
Analytical models of wind turbine blades have many uncertainties, particularly with composite construction, where material properties and cross-sectional dimensions may not be known or precisely controllable. In this paper the authors demonstrate how modal testing can be used to estimate important material parameters and to update and improve a finite-element (FE) model of a prototype wind turbine blade. A prototype blade is used here as an example to demonstrate how model parameters can be identified. The starting point is an FE model of the blade, using best estimates for the material constants. Frequencies of the lowest fourteen modes are used as the basis for comparisons between model predictions and test data. Natural frequencies and mode shapes calculated with the FE model are used in an optimal test design code to select instrumentation (accelerometer) and excitation locations that capture all the desired mode shapes. The FE model is also used to calculate the sensitivities of the modal frequencies to each of the uncertain material parameters. These parameters are then estimated, or updated, using a weighted least-squares technique to minimize the difference between test frequencies and predicted results. Updated material properties are determined for the axial, transverse, and shear moduli in two separate regions of the blade cross section: the central box, and the leading and trailing panels. Static FE analyses are then conducted with the updated material parameters to determine changes in effective beam stiffness and buckling loads.
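The weighted least-squares update of uncertain parameters from measured modal frequencies can be sketched with a hypothetical toy model standing in for the FE model. The frequency values, sensitivity matrix, weighting, and two-parameter setup are assumptions for illustration.

```python
import numpy as np

# Hypothetical toy "FE model": four modal frequencies as a function of two
# material parameters theta (scale factors on two moduli; nominal value 1).
def modal_freqs(theta):
    base = np.array([10.0, 25.0, 40.0, 63.0])          # Hz, nominal modes
    S = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8], [0.7, 0.3]])
    return base * (1 + S @ (theta - 1.0))

theta_true = np.array([1.15, 0.90])
f_test = modal_freqs(theta_true)                       # "measured" test frequencies

theta = np.array([1.0, 1.0])                           # best initial estimates
W = np.diag(1.0 / f_test**2)                           # weight by relative error
for _ in range(5):                                     # Gauss-Newton iterations
    eps = 1e-6                                         # finite-difference sensitivities
    J = np.column_stack([(modal_freqs(theta + eps * e) - modal_freqs(theta)) / eps
                         for e in np.eye(2)])
    r = f_test - modal_freqs(theta)
    theta = theta + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)  # weighted LSQ step
print(np.round(theta, 3))
```

In the paper's setting the sensitivities come from the FE model rather than finite differences, and the parameters are the axial, transverse, and shear moduli in two regions of the cross section.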
[Atmospheric parameter estimation for LAMOST/GUOSHOUJING spectra].
Lu, Yu; Li, Xiang-Ru; Yang, Tan
2014-11-01
Estimating atmospheric parameters from observed stellar spectra is a key task in exploring the nature of stars and the universe. With the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST), which began its formal sky survey in September 2012, we are obtaining a mass of stellar spectra at an unprecedented speed. This has brought both a new opportunity and a challenge for the research of galaxies. Due to the complexity of the observing system, the noise in the spectra is relatively large. At the same time, the preprocessing procedures, such as the wavelength calibration and the flux calibration, are also not ideal, so there is a slight distortion of the spectrum. Together these make it difficult to estimate the atmospheric parameters of the measured stellar spectra, and doing so for the massive stellar spectra of LAMOST is one of the important issues. The key of this study is how to suppress noise and improve the accuracy and robustness of atmospheric parameter estimation for measured stellar spectra. We propose a regression model, SVM(lasso), for estimating the atmospheric parameters of LAMOST stellar spectra. The basic idea of the model is as follows. First, we use the Haar wavelet to filter the spectrum, suppressing the adverse effects of spectral noise while retaining the most discriminative information of the spectrum. Second, we use the lasso algorithm for feature selection to extract the features strongly correlated with the atmospheric parameters. Finally, the features are input to a support vector regression model to estimate the parameters. Because the model has better tolerance to slight distortion and noise in the spectrum, the accuracy of the measurement is improved. To evaluate the feasibility of the above scheme, we conduct extensive experiments on 33,963 pilot survey spectra from LAMOST. The accuracy of three atmospheric parameters is log Teff: 0.0068 dex, log g: 0.1551 dex
ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS
NASA Technical Reports Server (NTRS)
Putney, B.
1994-01-01
The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
Estimation of the sea surface's two-scale backscatter parameters
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1978-01-01
The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Kármán transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model of turbulence.
Modal parameters estimation using ant colony optimisation algorithm
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2016-08-01
The paper puts forward a new method for estimating the modal parameters of dynamical systems. The parameter estimation problem is reduced to an optimisation problem, which is solved using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints on the solution space with modified ant colony system algorithms produced excellent results both for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, obtained with the proposed method and with the PolyMAX algorithm.
Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.
Boonvisut, Pasu; Cavuşoğlu, M Cenk
2013-10-01
Robotic motion planning algorithms used for task automation in robotic surgical systems rely on the availability of accurate models of the target soft tissue's deformation. Relying on generic tissue parameters in constructing tissue deformation models is problematic because biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., a uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating the mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of an experimental evaluation of the method are also presented. PMID:24031160
Ocean optics estimation for absorption, backscattering, and phase function parameters.
Hakim, Ammar H; McCormick, Norman J
2003-02-20
We propose and test an inverse ocean optics procedure with numerically simulated data for the determination of inherent optical properties using in-water radiance measurements. If data are available at only one depth within a deep homogeneous water layer, then the single-scattering albedo and the single parameter that characterizes the Henyey-Greenstein phase function can be estimated. If data are available at two depths, then these two parameters can be determined along with the optical thickness so that the absorption and scattering coefficients, and also the backscattering coefficient, can be estimated. With a knowledge of these parameters, the albedo and Lambertian fraction of reflected radiance of the bottom can be determined if measurements are made close to the bottom. A simplified method for determining the optical properties of the water also is developed for only three irradiance-type measurements if the radiance is approximately in the asymptotic regime. PMID:12617207
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noise for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data, owing to a special property of Shinbrot-type modulating functions. The approach is applied to perturbation equation modeling of the longitudinal and lateral dynamics of a high-performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well-established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
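The boundary-condition independence provided by Shinbrot-type modulating functions can be sketched for a first-order single-input model: multiplying the ODE by a function that vanishes at both ends of the record and integrating by parts eliminates the unknown initial condition and avoids differentiating the data. The model y' = -a*y + b*u, the input signal, and the choice phi_k(t) = 1 - cos(2*pi*k*t/T) of Fourier modulating functions are illustrative assumptions.

```python
import numpy as np

a_true, b_true, T, dt = 2.0, 3.0, 10.0, 1e-3
t = np.arange(0.0, T, dt)
u = np.sin(1.3 * t) + 0.5 * np.cos(0.4 * t)            # known input signal

y = np.zeros_like(t)                                   # simulate y' = -a*y + b*u
for i in range(1, t.size):
    y[i] = y[i - 1] + dt * (-a_true * y[i - 1] + b_true * u[i - 1])

rows, rhs = [], []
for k in range(1, 6):                                  # five modulating functions
    phi = 1 - np.cos(2 * np.pi * k * t / T)            # phi(0) = phi(T) = 0
    dphi = (2 * np.pi * k / T) * np.sin(2 * np.pi * k * t / T)
    # Integration by parts: integral(phi * y') = -integral(phi' * y), so
    # -integral(phi' * y) = -a * integral(phi * y) + b * integral(phi * u).
    rows.append([-(phi * y).sum() * dt, (phi * u).sum() * dt])
    rhs.append(-(dphi * y).sum() * dt)
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.round(theta, 3))                              # estimates of (a, b)
```

No derivative of y is ever taken and the initial condition never enters the equations, which is the property the abstract credits for the algorithm's immunity to unknown initial or boundary conditions.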
Estimating Arrhenius parameters using temperature programmed molecular dynamics
NASA Astrophysics Data System (ADS)
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-01
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
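The estimation chain described above, maximum-likelihood rates from waiting times at each sampled temperature followed by an Arrhenius fit, can be sketched as follows. The prefactor, barrier, and temperatures are invented, and the waiting times are drawn from an ideal exponential distribution rather than molecular dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 8.617e-5                        # Boltzmann constant, eV/K
A_true, Ea_true = 1e12, 0.5          # hypothetical prefactor (1/s) and barrier (eV)

temps = np.array([800.0, 900.0, 1000.0, 1100.0])
log_k_hat = []
for T in temps:
    k = A_true * np.exp(-Ea_true / (kB * T))
    w = rng.exponential(1.0 / k, size=800)       # 800 waiting times per temperature
    log_k_hat.append(np.log(len(w) / w.sum()))   # MLE of an exponential rate: n / sum(w)

# Arrhenius law ln k = ln A - Ea/(kB*T) is linear in 1/T
slope, intercept = np.polyfit(1.0 / temps, log_k_hat, 1)
Ea_hat, A_hat = -slope * kB, np.exp(intercept)
print(Ea_hat, A_hat)
```

Comparing the per-temperature rates against the fitted line is the check for Arrhenius behaviour mentioned in the abstract.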
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription fitted to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. PMID:23726942
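A minimal sketch of the global half of this comparison: a stripped-down (mu, lambda) evolution strategy, with no covariance adaptation and only a fixed step-size decay, so a distant relative of CMA-ES at best, minimizing a toy model-misfit surface with invented values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model-misfit surface: squared distance from a "true" parameter vector
theta_star = np.array([1.0, -2.0])
def misfit(p):
    return float(np.sum((p - theta_star) ** 2))

# Minimal (mu, lambda) evolution strategy: sample a population around the mean,
# recombine the best mu samples, shrink the step size
def es_minimize(f, x0, sigma=2.0, mu=10, lam=40, iters=200):
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pop = mean + sigma * rng.standard_normal((lam, mean.size))
        order = np.argsort([f(p) for p in pop])
        mean = pop[order[:mu]].mean(axis=0)      # recombination of the mu best
        sigma *= 0.98                            # crude step-size schedule
    return mean

p_hat = es_minimize(misfit, [8.0, 8.0])
print(p_hat)
```

CMA-ES replaces the fixed isotropic sampling here with an adapted full covariance matrix and a principled step-size rule, which is what makes it effective on rugged, correlated parameter spaces.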
Recursive Feature Extraction in Graphs
Energy Science and Technology Software Center (ESTSC)
2014-08-14
ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
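A toy version of the recursive idea (not the actual ReFeX feature set, pruning, or csv interface): start from a local feature such as degree, then repeatedly append neighbor aggregates of the existing features:

```python
# Tiny undirected graph as an adjacency dict (illustrative data)
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

# Base feature: node degree
features = {v: [float(len(nbrs))] for v, nbrs in graph.items()}

for _ in range(2):  # two recursive rounds
    width = len(next(iter(features.values())))
    new = {}
    for v, nbrs in graph.items():
        agg = []
        for i in range(width):
            vals = [features[u][i] for u in nbrs]
            # append mean and sum of each existing feature over the neighbors
            agg += [sum(vals) / len(vals), float(sum(vals))]
        new[v] = features[v] + agg
    features = new

print(features[3])  # degree of node 3, plus two rounds of neighbor summaries
```

Each round triples the feature width here (1 -> 3 -> 9); the real tool prunes correlated features to keep this growth in check.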
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating the other unknown parameters appearing in their regression equations as if they were known perfectly, with those values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to the identification of mass and thruster properties for a thruster-controlled spacecraft.
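The building block the patent runs in several concurrent copies is the standard recursive least squares update. A single-group sketch with an invented two-parameter regression:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([3.0, -1.0])   # hypothetical parameters to identify

# Standard recursive least squares: each new (phi, y) pair refines the estimate
theta = np.zeros(2)
P = np.eye(2) * 1e3                      # large initial covariance (vague prior)
for _ in range(500):
    phi = rng.standard_normal(2)         # regressor vector at this sample
    y = phi @ theta_true + 0.1 * rng.standard_normal()   # noisy measurement
    K = P @ phi / (1.0 + phi @ P @ phi)  # gain
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
print(theta)
```

In the patented scheme, several such estimators run in parallel, each holding the other groups' parameters fixed at their latest cross-supplied estimates, so the overall nonlinear problem is handled with only linear updates.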
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
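The simplest moment-matching flavor of this idea, far short of the paper's GMM with optimal weighting and cascade-specific moment conditions, has a closed form for i.i.d. log-normal data:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_true, sigma_true = 0.0, 0.5                 # illustrative log-normal parameters
x = rng.lognormal(mu_true, sigma_true, size=200_000)

# Moment conditions for a log-normal: E[X] = exp(mu + sigma^2/2),
# E[X^2] = exp(2*mu + 2*sigma^2). Matching sample moments (method of moments,
# i.e., GMM with exactly identifying conditions) inverts in closed form.
m1, m2 = x.mean(), (x**2).mean()
sigma2_hat = np.log(m2 / m1**2)
mu_hat = np.log(m1) - 0.5 * sigma2_hat
print(mu_hat, np.sqrt(sigma2_hat))
```

The cascade setting adds dependence across scales and uncertainty in the number of cascade steps, which is what pushes the paper toward overidentified moment sets and a proper GMM weighting matrix.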
English, Anthony E; Moy, Alan B; Kruse, Kara L; Ward, Richard C; Kirkpatrick, Stacy S; Goldman, Mitchell H
2009-04-01
A novel transcellular micro-impedance biosensor, referred to as the electric cell-substrate impedance sensor or ECIS, has become increasingly applied to the study and quantification of endothelial cell physiology. In principle, frequency-dependent impedance measurements obtained from this sensor can be used to estimate the cell-cell and cell-matrix impedance components of endothelial cell barrier function based on simple geometric models. Few studies, however, have examined the numerical optimization of these barrier function parameters and established their error bounds. This study, therefore, illustrates the implementation of a multi-response Levenberg-Marquardt algorithm that includes instrumental noise estimates and applies it to frequency-dependent porcine pulmonary artery endothelial cell impedance measurements. The stability of cell-cell, cell-matrix and membrane impedance parameter estimates based on this approach is carefully examined, and several forms of parameter instability and refinement illustrated. Including frequency-dependent noise variance estimates in the numerical optimization reduced the parameter value dependence on the frequency range of measured impedances. The increased stability provided by a multi-response non-linear fit over one-dimensional algorithms indicated that both real and imaginary data should be used in the parameter optimization. Error estimates based on single fits and Monte Carlo simulations showed that the model barrier parameters were often highly correlated with each other. Independently resolving the different parameters can, therefore, present a challenge to the experimentalist and demand the use of non-linear multivariate statistical methods when comparing different sets of parameters.
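A multi-response fit of this kind can be sketched with SciPy by stacking noise-weighted real and imaginary residuals. The series R-C impedance model, noise level, and all values below are assumptions for illustration, not the ECIS cell-substrate model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
f = np.logspace(1, 5, 40)                  # measurement frequencies, Hz
R_true, C_true = 500.0, 2e-8               # hypothetical series R-C element

def z_model(p, f):
    R, logC = p                            # fit log10(C) for better conditioning
    return R + 1.0 / (1j * 2 * np.pi * f * 10**logC)

z = z_model([R_true, np.log10(C_true)], f)
sigma = 20.0                               # assumed instrument noise level, ohms
z_noisy = z + sigma * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))

# Multi-response residual: stack noise-weighted real and imaginary parts so
# both contribute to the nonlinear least-squares objective
def residuals(p):
    r = (z_model(p, f) - z_noisy) / sigma
    return np.concatenate([r.real, r.imag])

fit = least_squares(residuals, x0=[100.0, -9.0])
R_hat, C_hat = fit.x[0], 10**fit.x[1]
print(R_hat, C_hat)
```

Dividing by the (here frequency-independent) noise level is the same weighting idea as in the paper; a real application would supply a per-frequency noise variance estimate.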
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786
Estimation of Cometary Rotation Parameters Based on Camera Images
NASA Technical Reports Server (NTRS)
Spindler, Karlheinz
2007-01-01
The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler) yielding information on the heliocentric spacecraft state and images taken by an on-board camera yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.
[Automatic Measurement of the Stellar Atmospheric Parameters Based Mass Estimation].
Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng
2015-11-01
We have collected massive amounts of stellar spectral data in recent years, which makes the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g, and metallicity [Fe/H]) an important issue. Studying the automatic measurement of these three parameters has significance for scientific problems such as the evolution of the universe. However, research on this problem is not yet widespread, and some current methods cannot estimate the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method for predicting stellar atmospheric parameters based on mass estimation is presented, which can predict the stellar effective temperature Teff, surface gravity log g, and metallicity [Fe/H]. The method requires little computation and trains quickly. Its main idea is to first build a set of mass distributions, then map the original spectral data into the mass space, and finally predict the stellar parameters with support vector regression (SVR) in the mass space. We chose stellar spectral data from SDSS-DR8 for training and testing, compared the predicted results with those of the SSPP, and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively. PMID:26978937
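As a loose analogue only (the paper's mass-distribution construction is not reproduced here, and kernel ridge regression stands in for SVR), one can map synthetic one-parameter "spectra" into a low-dimensional embedding and regress the parameter there:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "spectra": flux in 50 bands shaped by one parameter t in [0, 1]
# (a crude stand-in for, e.g., a normalized effective temperature)
n, d = 200, 50
t_par = rng.uniform(0.0, 1.0, n)
grid = np.linspace(0.0, 1.0, d)
X = np.exp(-grid[None, :] / (0.2 + 0.8 * t_par[:, None]))
X += 0.01 * rng.standard_normal(X.shape)

# Embedding stand-in for the "mass space": distances to 5 reference spectra
refs = X[:5]
Phi = np.linalg.norm(X[:, None, :] - refs[None, :, :], axis=2)

# Kernel ridge regression in the embedded space (standing in for SVR)
def rbf(A, B, h=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h))

K = rbf(Phi, Phi)
alpha = np.linalg.solve(K + 1e-4 * np.eye(n), t_par)
t_fit = K @ alpha
print(np.abs(t_fit - t_par).mean())
```

This only shows in-sample fit; a real pipeline would validate on held-out spectra, as the paper does against the SSPP.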
NASA Astrophysics Data System (ADS)
Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy
2016-04-01
Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration or spatially constant parameters resulting from the adoption of default or a priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data and then scale to the model spatial resolution with several scaling functions, e.g., arithmetic mean, harmonic mean, and geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then
Hopfield neural networks for on-line parameter estimation.
Alonso, Hugo; Mendonça, Teresa; Rocha, Paula
2009-05-01
This paper addresses the problem of using Hopfield Neural Networks (HNNs) for on-line parameter estimation. As presented here, a HNN is a nonautonomous nonlinear dynamical system able to produce a time-evolving estimate of the actual parameterization. The stability analysis of the HNN is carried out under more general assumptions than those previously considered in the literature, yielding a weaker sufficient condition under which the estimation error asymptotically converges to zero. Furthermore, a robustness analysis is made, showing that, under the presence of perturbations, the estimation error converges to a bounded neighbourhood of zero, whose size decreases with the size of the perturbations. The results obtained are illustrated by means of two case studies, where the HNN is compared with two other methods. PMID:19386467
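The core idea, a dynamical system whose state evolves toward the parameter values that explain the measured output, can be sketched as a gradient-flow estimator integrated by forward Euler. The regressor signals, gain, and parameter values below are invented for illustration:

```python
import numpy as np

theta_true = np.array([1.5, -0.5])   # hypothetical parameters to estimate

# Continuous-time estimator dtheta/dt = -eta * w(t) * (w(t)'theta - y(t)),
# a gradient flow on the instantaneous squared output error (a Hopfield-like
# network whose state is the parameter estimate), integrated by forward Euler.
theta = np.zeros(2)
dt, eta = 1e-3, 5.0
for k in range(20_000):                          # 20 seconds of "data"
    t = k * dt
    w = np.array([np.sin(t), np.cos(0.7 * t)])   # persistently exciting regressor
    y = w @ theta_true                           # measured output at time t
    theta = theta - dt * eta * w * (w @ theta - y)
print(theta)
```

Persistent excitation of the regressor is what makes the estimation error converge to zero here; with perturbations added to y, the error would instead settle into a bounded neighbourhood, as the paper's robustness analysis describes.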
Recursive graphs with small-world scale-free properties
NASA Astrophysics Data System (ADS)
Comellas, Francesc; Fertin, Guillaume; Raspaud, André
2004-03-01
We discuss a category of graphs, recursive clique trees, which have small-world and scale-free properties and allow a fine tuning of the clustering and the power-law exponent of their discrete degree distribution. We determine relevant characteristics of those graphs: the diameter, degree distribution, and clustering parameter. The graphs also have an interesting recursive property, and generalize recent constructions with fixed degree distributions.
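A closely related recursive construction, the pseudofractal scale-free graph (the edge-based special case rather than the general clique trees of the paper), is easy to generate and count:

```python
from collections import Counter

# Recursive construction: at every step, each existing edge spawns a new node
# joined to both of its endpoints.
edges = [(0, 1), (1, 2), (0, 2)]       # start from a triangle
n = 3
for _ in range(3):                      # three recursive steps
    new_edges = []
    for u, v in edges:
        new_edges += [(u, n), (v, n)]
        n += 1
    edges += new_edges

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
print(n, len(edges), max(deg.values()))  # 42 nodes, 81 edges, hub degree 16
```

Edge count triples each step (3 -> 9 -> 27 -> 81) while hub degrees double, which is the mechanism behind the discrete power-law degree distribution in this family.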
Seafloor elastic parameters estimation based on AVO inversion
NASA Astrophysics Data System (ADS)
Liu, Yangting; Liu, Xuewei
2015-12-01
Seafloor elastic parameters play an important role in many fields as diverse as marine construction, seabed resources exploration and seafloor acoustics. In order to estimate seafloor elastic parameters, we perform AVO inversion with seafloor reflected seismic data. As a particular reflection interface, the seafloor reflector does not support S-waves and the elastic parameters change dramatically across it. Conventional approximations to the Zoeppritz equations are not applicable for the seafloor situation. In this paper, we perform AVO inversion with the exact Zoeppritz equations through an unconstrained optimization method. Our synthetic study proves that the inversion method does not show strong dependence on the initial model for both unconsolidated and semi-consolidated seabed situations. The inversion uncertainty of the elastic parameters increases with the noise level, and decreases with the incidence angle range. Finally, we perform inversion of data from the South China Sea, and obtain satisfactory results, which are in good agreement with previous research.
Bayesian hemodynamic parameter estimation by bolus tracking perfusion weighted imaging.
Boutelier, Timothé; Kudo, Koshuke; Pautot, Fabrice; Sasaki, Makoto
2012-07-01
A delay-insensitive probabilistic method for estimating hemodynamic parameters, delays, theoretical residue functions, and concentration time curves by computed tomography (CT) and magnetic resonance (MR) perfusion weighted imaging is presented. Only a mild stationarity hypothesis is made beyond the standard perfusion model. New microvascular parameters with simple hemodynamic interpretation are naturally introduced. Simulations on standard digital phantoms show that the method outperforms the oscillating singular value decomposition (oSVD) method in terms of goodness-of-fit, linearity, statistical and systematic errors on all parameters, especially at low signal-to-noise ratios (SNRs). Delay is always estimated sharply with user-supplied resolution and is purely arterial, by contrast to oSVD time-to-maximum TMAX that is very noisy and biased by mean transit time (MTT), blood volume, and SNR. Residue functions and signals estimates do not suffer overfitting anymore. One CT acute stroke case confirms simulation results and highlights the ability of the method to reliably estimate MTT when SNR is low. Delays look promising for delineating the arterial occlusion territory and collateral circulation. PMID:22410325
Anisotropic parameter estimation using velocity variation with offset analysis
Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A.
2013-09-09
Seismic anisotropy is defined as velocity dependence upon angle or offset. Knowledge about the effect of anisotropy on seismic data is important in amplitude analysis, the stacking process and time-to-depth conversion. Due to this anisotropic effect, a reflector cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters ε and δ using two proposed methods. The first method, called velocity variation with offset (VVO), is based on a simplification of the weak-anisotropy equation. In the VVO method, velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach in which vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods, while the inversion-based method gives a better estimate of ε. This study shows that estimation of anisotropic parameters relies on the accuracy of normal moveout velocity, residual moveout and offset-to-angle transformation.
Informed spectral analysis: audio signal parameter estimation using side information
NASA Astrophysics Data System (ADS)
Fourer, Dominique; Marchand, Sylvain
2013-12-01
Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J., Jr.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
NASA Astrophysics Data System (ADS)
Ollongren, Alexander
2011-02-01
In a sequence of papers on the topic of message construction for interstellar communication by means of a cosmic language, the present author has discussed various significant requirements such a lingua should satisfy. The author's Lingua Cosmica is a (meta) system for annotating contents of possibly large-scale messages for ETI. LINCOS, based on formal constructive logic, was primarily designed for dealing with logic contents of messages but is also applicable for denoting structural properties of more general abstractions embedded in such messages. The present paper explains ways and means for achieving this for a special case: recursive entities. As usual two stages are involved: first the domain of discourse is enriched with suitable representations of the entities concerned, after which properties over them can be dealt with within the system itself. As a representative example the case of Russian dolls (matryoshkas) is discussed in some detail and relations with linguistic structures in natural languages are briefly explored.
Estimation of atmospheric parameters from time-lapse imagery
NASA Astrophysics Data System (ADS)
McCrae, Jack E.; Basu, Santasri; Fiorino, Steven T.
2016-05-01
A time-lapse imaging experiment was conducted to estimate various atmospheric parameters for the imaging path. Atmospheric turbulence caused frame-to-frame shifts of the entire image as well as parts of the image. The statistics of these shifts encode information about the turbulence strength (as characterized by Cn2, the refractive index structure function constant) along the optical path. The shift variance observed is simply proportional to the variance of the tilt of the optical field averaged over the area being tracked. By presuming this turbulence follows the Kolmogorov spectrum, weighting functions can be derived which relate the turbulence strength along the path to the shifts measured. These weighting functions peak at the camera and fall to zero at the object. The larger the area observed, the more quickly the weighting function decays. One parameter we would like to estimate is r0 (the Fried parameter, or atmospheric coherence diameter). The weighting functions derived for pixel-sized or larger parts of the image all fall faster than the weighting function appropriate for estimating the spherical wave r0. If we presume Cn2 is constant along the path, then an estimate for r0 can be obtained for each area tracked, but since the weighting function for r0 differs substantially from that for every realizable tracked area, it can be expected that this approach would yield a poor estimator. Instead, the weighting functions for a number of different patch sizes can be combined through the Moore-Penrose pseudo-inverse to create a new weighting function which yields the least-squares optimal linear combination of measurements for estimation of r0. This approach is carried out, and it is observed that this approach is somewhat noisy because the pseudo-inverse assigns weights much greater than one to many of the observations.
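The pseudo-inverse combination step can be sketched in a few lines. The weighting-function shapes, patch sizes, target weighting, and Cn2 profile below are all made up for illustration; only the least-squares combination mechanics match the approach described:

```python
import numpy as np

# Path discretized into 50 segments; each tracked patch size has a weighting
# function over the path, peaking at the camera (z = 0) and decaying toward
# the object (z = 1). The exponential shapes here are illustrative only.
z = np.linspace(0.0, 1.0, 50)
patch = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # relative patch sizes
W = np.exp(-np.outer(patch, z) * 3.0)          # larger patches decay faster

cn2 = 1e-14 * (1.0 + 0.5 * np.sin(4.0 * z))    # assumed turbulence profile
var = W @ cn2                                  # simulated shift variances

# Target weighting (stand-in for the spherical-wave r0 weighting):
target = (1.0 - z) ** (5.0 / 3.0)

# Moore-Penrose pseudo-inverse gives the least-squares combination of the
# measured variances whose effective weighting best matches the target.
coeff = np.linalg.pinv(W.T) @ target           # solve W.T @ coeff ~= target
est = coeff @ var                              # estimate of sum(target * cn2)
ref = target @ cn2
print(est / ref)
```

The noise amplification noted in the abstract shows up here as large entries in coeff: the combination that matches the target weighting can weight individual measurements heavily.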
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
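A minimal version of the parameter-identification loop described above, fitting a mechanical-analog response model to a "measured" signal with a nonlinear least-squares optimizer, can be shown for a 1-DOF damped oscillator; the frequency, damping, noise level, and initial guess are all invented:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical slosh analog: lightly damped 1-DOF oscillator whose natural
# frequency and damping ratio are identified from its free response.
wn_true, zeta_true = 4.0, 0.05
t = np.linspace(0.0, 5.0, 400)

def response(p, t):
    wn, zeta = p
    wd = wn * np.sqrt(1.0 - zeta**2)           # damped natural frequency
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

rng = np.random.default_rng(6)
measured = response([wn_true, zeta_true], t) + 0.01 * rng.standard_normal(t.size)

# Ballpark initial guess (in practice, e.g., from an FFT peak of the record)
fit = least_squares(lambda p: response(p, t) - measured,
                    x0=[4.3, 0.1],
                    bounds=([0.1, 0.0], [20.0, 0.99]))
print(fit.x)
```

A reasonable initial frequency guess matters: if it is off by more than roughly half a cycle over the record length, the oscillatory residual surface can trap the optimizer in a local minimum, which is one reason automated parameter identification of slosh models needs care.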
Estimation of economic parameters of U.S. hydropower resources
Hall, Douglas G.; Hunt, Richard T.; Reeves, Kelly S.; Carroll, Greg R.
2003-06-01
Tools for estimating the cost of developing and operating and maintaining hydropower resources in the form of regression curves were developed based on historical plant data. Development costs that were addressed included: licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”
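The cost-versus-capacity regression curves described above are not reproduced in the abstract; as an illustrative sketch only, a power-law relation cost = a * capacity^b can be fitted by ordinary least squares in log-log space (synthetic data, hypothetical coefficients, not the INL tool's actual curves):

```python
import math

def fit_power_law(capacities_mw, costs_usd):
    """Fit cost = a * capacity**b by ordinary least squares in log-log space."""
    xs = [math.log(c) for c in capacities_mw]
    ys = [math.log(c) for c in costs_usd]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data following cost = 2000 * P^0.8 exactly (illustrative numbers)
caps = [1.0, 5.0, 10.0, 50.0, 100.0]
costs = [2000.0 * p ** 0.8 for p in caps]
a, b = fit_power_law(caps, costs)
```

With noise-free power-law data the log-log regression recovers the coefficients exactly, up to floating-point error.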
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be treated as preliminary inputs to various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
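The Kåsa-style circle fit mentioned in this abstract reduces to a linear least-squares problem and makes a compact worked example. The sketch below operates on synthetic points rather than the paper's image data, and solves the algebraic model x² + y² = 2ax + 2by + c for center (a, b) and radius √(c + a² + b²):

```python
import math

def kasa_fit(points):
    """Linear least-squares circle fit (Kasa/Delogne-Kasa style).
    Model: x^2 + y^2 = 2*a*x + 2*b*y + c, center (a, b), r = sqrt(c + a^2 + b^2)."""
    # Accumulate the 3x3 normal equations M t = v for t = (a, b, c)
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * rhs
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            v[r] -= f * v[col]
    t = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        t[r] = (v[r] - sum(M[r][j] * t[j] for j in range(r + 1, 3))) / M[r][r]
    a, b, c = t
    return a, b, math.sqrt(c + a * a + b * b)

# Noise-free points on a circle of center (1, 2) and radius 3
pts = [(1 + 3 * math.cos(th), 2 + 3 * math.sin(th))
       for th in (0.1, 0.9, 2.0, 3.1, 4.2, 5.5)]
cx, cy, r = kasa_fit(pts)
```

On exact circumferential points the linear fit recovers the center and radius exactly; on noisy data it serves as the preliminary estimate that iterative MLE refinement would then improve.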
Beef quality parameters estimation using ultrasound and color images
2015-01-01
Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using a commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes. PMID:25040235
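The IUWLS formulation itself is not given in the abstract; the core idea of down-weighting observations affected by uncertain source/sink inputs can be sketched as generalized least squares with per-observation variances. Everything below (the two-parameter linear model, the variance values) is an illustrative assumption, not the authors' groundwater model:

```python
def weighted_least_squares(X, y, variances):
    """Generalized least squares for y = X @ beta with independent errors:
    beta = (X^T W X)^-1 X^T W y, where W = diag(1/variance).
    Two-parameter model, so the 2x2 normal equations are solved directly."""
    s11 = s12 = s22 = t1 = t2 = 0.0
    for (x1, x2), yi, var in zip(X, y, variances):
        w = 1.0 / var          # larger input uncertainty -> smaller weight
        s11 += w * x1 * x1
        s12 += w * x1 * x2
        s22 += w * x2 * x2
        t1 += w * x1 * yi
        t2 += w * x2 * yi
    det = s11 * s22 - s12 * s12
    b1 = (s22 * t1 - s12 * t2) / det
    b2 = (s11 * t2 - s12 * t1) / det
    return b1, b2

# Noise-free data, y = 3*x1 - 2*x2; the heteroscedastic variances stand in
# for (hypothetical) per-observation pumping-data uncertainty.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 3.0)]
y = [3.0 * a - 2.0 * b for a, b in X]
var = [0.5, 1.0, 2.0, 0.5, 4.0]
b1, b2 = weighted_least_squares(X, y, var)
```

With noise-free data any positive weighting recovers the true coefficients; with noisy data the weights shift influence toward the better-constrained observations, which is what reduces the bias the study describes.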
Iterative procedure for camera parameters estimation using extrinsic matrix decomposition
NASA Astrophysics Data System (ADS)
Goshin, Yegor V.; Fursov, Vladimir A.
2016-03-01
This paper addresses the problem of 3D scene reconstruction in cases when the extrinsic parameters (rotation and translation) of the camera are unknown. This problem is both important and urgent because the accuracy of the camera parameters significantly influences the resulting 3D model. A common approach is to determine the fundamental matrix from corresponding points on two views of a scene and then to use singular value decomposition for camera projection matrix estimation. However, this common approach is very sensitive to fundamental matrix errors. In this paper we propose a novel approach in which camera parameters are determined directly from the equations of the projective transformation by using corresponding points on the views. The proposed decomposition allows us to use an iterative procedure for determining the parameters of the camera. This procedure is implemented in two steps: the translation determination and the rotation determination. The experimental results of the camera parameters estimation and 3D scene reconstruction demonstrate the reliability of the proposed approach.
Estimation of the parameters of ETAS models by Simulated Annealing
Lombardi, Anna Maria
2015-01-01
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036
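The ETAS likelihood is too involved to reproduce here, but the Simulated Annealing machinery the paper relies on can be shown on a simple stand-in: minimizing the negative log-likelihood of an exponential inter-event-time model with one unknown rate. The data values and objective are illustrative assumptions, not the paper's catalogs:

```python
import math
import random

def simulated_annealing(objective, x0, n_iter=20000, t0=1.0, t_min=1e-4, step=0.5):
    """Minimize `objective` over one scalar with a geometric cooling schedule."""
    random.seed(42)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    alpha = (t_min / t0) ** (1.0 / n_iter)       # geometric cooling factor
    t = t0
    for _ in range(n_iter):
        cand = x + random.uniform(-step, step)
        fc = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best

# Stand-in objective: negative log-likelihood of exponential inter-event
# times with unknown rate theta (illustrative data, MLE = n / sum(data)).
data = [0.31, 0.55, 0.12, 0.77, 0.40, 0.25, 0.63, 0.18, 0.90, 0.44]
def nll(theta):
    if theta <= 0:
        return float("inf")
    return -sum(math.log(theta) - theta * t for t in data)

theta_hat = simulated_annealing(nll, x0=1.0)
```

For this smooth one-dimensional objective the annealer converges to the analytic MLE, n / Σtᵢ ≈ 2.198; the paper's point is that the same global-search machinery remains usable when the ETAS likelihood surface is correlated and multimodal.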
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
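The report's output-error algorithm is not reproduced in the abstract; as a generic illustration of recursive real-time parameter estimation of the kind described, here is a minimal recursive-least-squares sketch for a two-parameter linear model (synthetic noise-free data, hypothetical coefficients):

```python
import math

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for y ~ phi . theta (2 parameters)."""
    # Gain: k = P phi / (lam + phi^T P phi)
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]
    err = y - (phi[0] * theta[0] + phi[1] * theta[1])
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    # Covariance update: P = (P - k phi^T P) / lam  (P stays symmetric)
    P = [[(P[0][0] - k[0] * Pphi[0]) / lam, (P[0][1] - k[0] * Pphi[1]) / lam],
         [(P[1][0] - k[1] * Pphi[0]) / lam, (P[1][1] - k[1] * Pphi[1]) / lam]]
    return theta, P

# Noise-free system y = 1.5*u1 - 0.7*u2; estimates converge sample by sample.
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]
for k in range(100):
    phi = (math.sin(0.3 * k), math.cos(0.7 * k))   # persistently exciting inputs
    y = 1.5 * phi[0] - 0.7 * phi[1]
    theta, P = rls_update(theta, P, phi, y)
```

Each update costs a handful of multiplications, which is the property that makes recursive formulations attractive for the every-few-seconds update rates the report achieves.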
Estimation of Geodetic and Geodynamical Parameters with VieVS
NASA Technical Reports Server (NTRS)
Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, Tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald
2010-01-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the connection of the single sessions is done by stacking at the normal equation level. We can determine time independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency dependent values in the diurnal band together with the resonance frequency of Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
A Bayesian approach to parameter estimation in HIV dynamical models.
Putter, H; Heisterkamp, S H; Lange, J M A; de Wolf, F
2002-08-15
In the context of a mathematical model describing HIV infection, we discuss a Bayesian modelling approach to a non-linear random effects estimation problem. The model and the data exhibit a number of features that make the use of an ordinary non-linear mixed effects model intractable: (i) the data are from two compartments fitted simultaneously against the implicit numerical solution of a system of ordinary differential equations; (ii) data from one compartment are subject to censoring; (iii) random effects for one variable are assumed to be from a beta distribution. We show how the Bayesian framework can be exploited by incorporating prior knowledge on some of the parameters, and by combining the posterior distributions of the parameters to obtain estimates of quantities of interest that follow from the postulated model. PMID:12210633
On Using Exponential Parameter Estimators with an Adaptive Controller
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.
Identification of vehicle parameters and estimation of vertical forces
NASA Astrophysics Data System (ADS)
Imine, H.; Fridman, L.; Madani, T.
2015-12-01
The aim of the present work is to estimate the vertical forces and to identify the unknown dynamic parameters of a vehicle using the sliding mode observers approach. The estimation of vertical forces needs a good knowledge of dynamic parameters such as damping coefficient, spring stiffness and unsprung masses, etc. In this paper, suspension stiffness and unsprung masses have been identified by the Least Square Method. Real-time tests have been carried out on an instrumented static vehicle, excited vertically by hydraulic jacks. The vehicle is equipped with different sensors in order to measure its dynamics. The measurements coming from these sensors have been considered as unknown inputs of the system. However, only the roll angle and the suspension deflection measurements have been used in order to perform the observer. Experimental results are presented and discussed to show the quality of the proposed approach.
CosmoSIS: A System for MC Parameter Estimation
Zuntz, Joe; Paterno, Marc; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James
2015-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center of gravity, atmospheric winds, etc. Propulsion parameter state elements have been included not as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
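The state-augmentation idea behind this report (appending unknown parameters to the state vector and running an extended Kalman filter over the joint system) can be shown on a deliberately tiny example. The scalar system, noise levels, and input below are illustrative assumptions, not the Shuttle formulation:

```python
import math

def mat2_mul(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def ekf_step(z, P, u, y, q_a=1e-8, r=1e-6):
    """Measurement update with y = x + noise, then time update for the
    augmented system x[k+1] = a*x[k] + u[k], a[k+1] = a[k]."""
    # --- update (H = [1, 0]) ---
    s = P[0][0] + r
    K = [P[0][0] / s, P[1][0] / s]
    innov = y - z[0]
    z = [z[0] + K[0] * innov, z[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    # --- predict (Jacobian F of the bilinear dynamics) ---
    x, a = z
    zp = [a * x + u, a]
    F = [[a, x], [0.0, 1.0]]
    Ft = [[a, 0.0], [x, 1.0]]
    Pp = mat2_mul(mat2_mul(F, P), Ft)
    Pp[1][1] += q_a          # slow random walk keeps the parameter adaptable
    return zp, Pp

# Truth: x[k+1] = 0.9*x[k] + u[k], measured without noise.
a_true, x = 0.9, 1.0
z, P = [1.0, 0.5], [[1.0, 0.0], [0.0, 1.0]]   # initial parameter guess a = 0.5
for k in range(300):
    u = math.sin(0.2 * k)                     # persistently exciting input
    z, P = ekf_step(z, P, u, y=x)
    x = a_true * x + u
a_hat = z[1]
```

The Jacobian F is where the linearization the abstract mentions enters: the joint dynamics are nonlinear in (x, a) even though each is linear when the other is held fixed.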
CTER-rapid estimation of CTF parameters with error assessment.
Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T
2014-05-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03Å without, and 3.85Å with, inclusion of astigmatism parameters. PMID:24562077
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.
On optimal detection and estimation of the FCN parameters
NASA Astrophysics Data System (ADS)
Yatskiv, Y.
2009-09-01
A statistical approach for the detection and estimation of parameters of short-term quasi-periodic processes was used in order to investigate the Free Core Nutation (FCN) signal in the Celestial Pole Offset (CPO). The results show that this signal is very unstable and that it disappeared in the year 2000. The amplitude of the oscillation with a period of about 435 days is larger for dX than for dY.
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1) considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit but simpler and has smaller bias than the "multiple shooting" previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require the a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade off between the need of using a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series is proved so far theoretically (not only numerically). PMID:15089376
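The paper's idea of treating the initial value x(1) as an additional unknown alongside the structural parameter can be illustrated with a brute-force joint grid search on the logistic map (noise-free synthetic data here; the paper's segmentation-fitting ML handles the noisy case):

```python
def logistic_series(r, x1, n):
    """Iterate the logistic map x[k+1] = r * x[k] * (1 - x[k])."""
    xs = [x1]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def fit_logistic(observed, r_grid, x1_grid):
    """Joint grid search over the structural parameter r and the initial
    value x1 (treated as an extra unknown), minimizing squared error."""
    best, best_sse = None, float("inf")
    for r in r_grid:
        for x1 in x1_grid:
            pred = logistic_series(r, x1, len(observed))
            sse = sum((p - o) ** 2 for p, o in zip(pred, observed))
            if sse < best_sse:
                best, best_sse = (r, x1), sse
    return best

# Noise-free observations from r = 3.7, x1 = 0.3
obs = logistic_series(3.7, 0.3, 12)
r_grid = [round(3.5 + 0.01 * i, 2) for i in range(41)]    # 3.50 .. 3.90
x1_grid = [round(0.1 + 0.01 * i, 2) for i in range(41)]   # 0.10 .. 0.50
r_hat, x1_hat = fit_logistic(obs, r_grid, x1_grid)
```

Because chaotic trajectories diverge exponentially from nearby initial conditions, the error surface is extremely spiky in (r, x1); that sensitivity is exactly the trade-off with sample size N that the paper analyzes, and why its piece-wise ML works on short segments rather than one long trajectory.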
Estimation of stellar atmospheric parameters from SDSS/SEGUE spectra
NASA Astrophysics Data System (ADS)
Re Fiorentin, P.; Bailer-Jones, C. A. L.; Lee, Y. S.; Beers, T. C.; Sivarani, T.; Wilhelm, R.; Allende Prieto, C.; Norris, J. E.
2007-06-01
We present techniques for the estimation of stellar atmospheric parameters (T_eff, log~g, [Fe/H]) for stars from the SDSS/SEGUE survey. The atmospheric parameters are derived from the observed medium-resolution (R = 2000) stellar spectra using non-linear regression models trained either on (1) pre-classified observed data or (2) synthetic stellar spectra. In the first case we use our models to automate and generalize parametrization produced by a preliminary version of the SDSS/SEGUE Spectroscopic Parameter Pipeline (SSPP). In the second case we directly model the mapping between synthetic spectra (derived from Kurucz model atmospheres) and the atmospheric parameters, independently of any intermediate estimates. After training, we apply our models to various samples of SDSS spectra to derive atmospheric parameters, and compare our results with those obtained previously by the SSPP for the same samples. We obtain consistency between the two approaches, with RMS deviations on the order of 150 K in T_eff, 0.35 dex in log~g, and 0.22 dex in [Fe/H]. The models are applied to pre-processed spectra, either via Principal Component Analysis (PCA) or a Wavelength Range Selection (WRS) method, which employs a subset of the full 3850-9000Å spectral range. This is both for computational reasons (robustness and speed), and because it delivers higher accuracy (better generalization of what the models have learned). Broadly speaking, the PCA is demonstrated to deliver more accurate atmospheric parameters when the training data are the actual SDSS spectra with previously estimated parameters, whereas WRS appears superior for the estimation of log~g via synthetic templates, especially for lower signal-to-noise spectra. From a subsample of some 19 000 stars with previous determinations of the atmospheric parameters, the accuracies of our predictions (mean absolute errors) for each parameter are T_eff to 170/170 K, log~g to 0.36/0.45 dex, and [Fe/H] to 0.19/0.26 dex, for methods (1
Estimating hydraulic parameters when poroelastic effects are significant.
Berg, Steven J; Hsieh, Paul A; Illman, Walter A
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. PMID:21204832
High speed parameter estimation for a homogenized energy model
NASA Astrophysics Data System (ADS)
Ernstberger, Jon M.
Industrial, commercial, military, biomedical, and civilian uses of smart materials are increasingly investigated for high performance applications. These compounds couple applied field or thermal energy to mechanical forces that are generated within the material. The devices utilizing these compounds are often much smaller than their traditional counterparts and provide greater design capabilities and energy efficiency. The relations that couple field and mechanical energies are often hysteretic and nonlinear. To accurately control devices employing these compounds, models must quantify these effects. Further, since these compounds exhibit environment-dependent behavior, the models must be robust for accurate actuator quantification. In this dissertation, we investigate the construction of models that characterize these internal mechanisms and that manifest themselves in material deformation in a hysteretic fashion. Results of previously-presented model formulations are given. New techniques for generating model components are presented which reduce the computational load for parameter estimations. The use of various deterministic and stochastic search algorithms for parameter estimation are discussed with strengths and weaknesses of each examined. New end-user graphical tools for properly initiating the parameter estimation are also presented. Finally, results from model fits to data from ferroelectric---e.g., Lead Zirconate Titanate (PZT)---and ferromagnetic---e.g., Terfenol-D---materials are presented.
Estimating stellar atmospheric parameters based on Lasso features
NASA Astrophysics Data System (ADS)
Liu, Chuan-Xing; Zhang, Pei-Ai; Lu, Yu
2014-04-01
With the rapid development of large scale sky surveys like the Sloan Digital Sky Survey (SDSS), GAIA and LAMOST (Guoshoujing telescope), stellar spectra can be obtained on an ever-increasing scale. Therefore, it is necessary to estimate stellar atmospheric parameters such as Teff, log g and [Fe/H] automatically to achieve the scientific goals and make full use of the potential value of these observations. Feature selection plays a key role in the automatic measurement of atmospheric parameters. We propose to use the least absolute shrinkage selection operator (Lasso) algorithm to select features from stellar spectra. Feature selection can reduce redundancy in spectra, alleviate the influence of noise, improve calculation speed and enhance the robustness of the estimation system. Based on the extracted features, stellar atmospheric parameters are estimated by the support vector regression model. Three typical schemes are evaluated on spectral data from both the ELODIE library and SDSS. Experimental results show the potential of the proposed approach. In addition, results show that our method is stable when applied to different spectra.
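The Lasso's ability to zero out uninformative features, which is what makes it useful for selecting spectral pixels, comes from the soft-thresholding step of coordinate descent. A minimal sketch on a two-feature toy problem (illustrative data, not spectra; the second feature is orthogonal to the response and gets driven exactly to zero):

```python
def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Lasso via cyclic coordinate descent with soft-thresholding:
    minimize 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual (excluding j)
            rho = sum(X[i][j] * (y[i]
                                 - sum(X[i][k] * w[k] for k in range(p))
                                 + X[i][j] * w[j])
                      for i in range(n))
            # Soft-threshold: small correlations collapse to an exact zero
            if rho > lam:
                w[j] = (rho - lam) / col_sq[j]
            elif rho < -lam:
                w[j] = (rho + lam) / col_sq[j]
            else:
                w[j] = 0.0
    return w

# Feature 0 drives y; feature 1 is orthogonal to it and to y.
X = [[1.0, 1.0], [2.0, -1.0], [3.0, -1.0], [4.0, 1.0]]
y = [2.0 * row[0] for row in X]
w = lasso_coordinate_descent(X, y, lam=3.0)
```

The exact zeros (rather than merely small weights) are what let the method discard redundant spectral pixels before the downstream support vector regression.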
Hydraulic parameters estimation from well logging resistivity and geoelectrical measurements
NASA Astrophysics Data System (ADS)
Perdomo, S.; Ainchil, J. E.; Kruse, E.
2014-06-01
In this paper, a methodology is suggested for deriving hydraulic parameters, such as hydraulic conductivity or transmissivity, by combining classical hydrogeological data with geophysical measurements. Values of transmissivity and conductivity estimated with this approach can reduce uncertainties in numerical model calibration and improve data coverage, reducing the time and cost of a hydrogeological investigation at a regional scale. The conventional estimation of hydrogeological parameters needs to be done by analyzing well data or laboratory measurements. Furthermore, a regional survey requires many wells to be considered, and the location of each one plays an important role in the interpretation stage. For this reason, the use of geoelectrical methods arises as an effective complementary technique, especially in developing countries where it is necessary to optimize resources. By combining hydraulic parameters from pumping tests with electrical resistivity from well logging profiles, it was possible to adjust three empirical laws in a semi-confined alluvial aquifer in the northeast of the province of Buenos Aires (Argentina). These relations were also tested for use with surficial geoelectrical data. The hydraulic conductivity and transmissivity estimated in porous material (20 m/day; 457 m2/day) were in line with expected values for the region and are very consistent with previous results from other authors (25 m/day and 500 m2/day). The methodology described could be used with similar data sets and applied to other areas with similar hydrogeological conditions.
Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2007-12-01
This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in this study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
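The BMA combination rule the abstract relies on is compact enough to show numerically. Below is a minimal sketch with invented numbers (not the paper's aquifer data): BIC values are converted to posterior model weights via exp(-BIC/2), and the BMA variance decomposes into a within-model and a between-model term, which is why averaging over more methods can only inflate the reported uncertainty.

```python
import numpy as np

# Hypothetical per-model predictions of log-conductivity at one location,
# their within-model variances, and calibration BIC values.
means = np.array([2.1, 2.4, 1.9])
variances = np.array([0.30, 0.25, 0.40])
bic = np.array([104.2, 103.1, 107.8])

# Posterior model probabilities proportional to exp(-BIC/2).
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

bma_mean = w @ means
within = w @ variances                      # average variance within each method
between = w @ (means - bma_mean) ** 2       # spread between the methods' predictions
bma_var = within + between                  # total BMA variance
print(bma_mean, bma_var)
```

A single-method analysis reports only its own `variances[k]`; the `between` term is exactly the contribution that is missed, matching the abstract's claim that a single parameterization always underestimates uncertainty.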
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
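The core of a Bayesian adaptive procedure like qAF is choosing each trial's stimulus to be maximally informative about the model parameters. The sketch below illustrates that idea on a deliberately simplified one-parameter toy (estimating a single detection threshold with a fixed-slope logistic psychometric function, not the five-parameter filter model): each trial picks the stimulus level minimizing the expected posterior entropy, then updates the posterior with the simulated listener's response.

```python
import numpy as np

rng = np.random.default_rng(6)
thresholds = np.linspace(-10.0, 10.0, 81)   # candidate threshold values (dB)
posterior = np.ones_like(thresholds) / thresholds.size
levels = np.linspace(-12.0, 12.0, 49)       # available stimulus levels
true_threshold = 3.0                        # hidden truth for the simulated listener

def p_correct(level, th):
    return 1.0 / (1.0 + np.exp(-(level - th)))   # logistic psychometric function

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for _ in range(80):
    # expected posterior entropy for every candidate stimulus level
    p_resp = p_correct(levels[:, None], thresholds[None, :])    # (levels, thresholds)
    p_yes = p_resp @ posterior
    post_yes = p_resp * posterior
    post_yes /= post_yes.sum(axis=1, keepdims=True)
    post_no = (1.0 - p_resp) * posterior
    post_no /= post_no.sum(axis=1, keepdims=True)
    exp_H = p_yes * np.apply_along_axis(entropy, 1, post_yes) \
        + (1.0 - p_yes) * np.apply_along_axis(entropy, 1, post_no)
    level = levels[int(np.argmin(exp_H))]                       # most informative trial
    heard = rng.uniform() < p_correct(level, true_threshold)    # simulated response
    like = p_correct(level, thresholds) if heard else 1.0 - p_correct(level, thresholds)
    posterior *= like
    posterior /= posterior.sum()

estimate = thresholds @ posterior
print(round(float(estimate), 2))
```

The qAF procedure does the same thing over a five-dimensional filter-shape parameter space, manipulating masker level and notch position instead of a single stimulus level.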
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest. PMID:25784880
Spacecraft design impacts on the post-Newtonian parameter estimation
NASA Astrophysics Data System (ADS)
Schuster, Anja Katharina; et al.
2015-08-01
The ESA mission BepiColombo, reaching out to explore the elusive planet Mercury, features unprecedented tracking techniques. The highly precise orbit determination around Mercury is a compelling opportunity for a modern test of General Relativity (GR). Using the software tool GRETCHEN, which incorporates the Square Root Information Filter (SRIF), MPO's orbit is simulated and the post-Newtonian parameters (PNP) are estimated. In this work, the influence of a specific constraint of the Mercury Orbiter Radio science Experiment (MORE) on the achievable accuracy of the PNP estimates is investigated. The power system design of the spacecraft requires that within ±35° of perihelion the Ka-band transponder be switched off, so that radiometric data are gathered only via X band. This analysis shows the impact of this constraint on the achievable accuracy of the PNP estimates. On a bigger scale, a violation of GR at a detectable level would inevitably invalidate the theory.
Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation
NASA Astrophysics Data System (ADS)
Fowler, J. W.
2014-08-01
Straightforward methods for adapting the familiar χ² statistic to histograms of discrete events and other Poisson-distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn Kα fluorescence spectrum, a poor choice of χ² can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for χ² minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
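The bias the abstract warns about can be seen in the simplest possible case: fitting a constant rate μ to Poisson counts. Weighting the residuals by the observed counts (a common naive χ² choice) makes the minimizer the harmonic mean of the counts, which sits systematically below the true rate; the Poisson maximum-likelihood estimate is just the sample mean. This is a generic illustration of the effect, not the paper's microcalorimeter analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=5.0, size=10000)

# Poisson maximum-likelihood estimate of a constant rate: the sample mean.
mu_ml = counts.mean()

# "Naive chi-square" with per-bin variance = observed counts: minimizing
# sum((n_i - mu)^2 / n_i) gives the harmonic mean, biased low for Poisson data.
n = counts[counts > 0].astype(float)        # zero bins have no usable "sigma"
mu_chi2 = n.size / (1.0 / n).sum()
print(mu_ml, mu_chi2)                       # mu_chi2 lands well below mu_ml
```

Setting the derivative of Σ(nᵢ − μ)²/nᵢ to zero gives Σ(1 − μ/nᵢ) = 0, i.e. μ = N / Σ(1/nᵢ), which is why the bias persists no matter how many bins are observed.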
Parameter Estimation in a Delay Differential Model of ENSO
NASA Astrophysics Data System (ADS)
Roux, J.; Gerchinovitz, S.; Ghil, M.
2009-04-01
In this talk, we present generic statistical methods for parameter estimation in a delay differential equation (DDE). Our reference DDE is the toy model of the El Niño/Southern Oscillation introduced by Ghil, Zaliapin and Thompson (2008). We first recall some properties of this model in comparison with other models, together with basic results in functional differential equation theory. We then briefly describe two statistical estimation procedures: the classic ordinary least squares estimator, computed via simulated annealing, and a new two-stage method based on nonparametric regression using the Nadaraya-Watson kernel. We finally comment on the numerical tests we performed on simulated noisy data. These results encourage further application of such methods to more complex (and more realistic) models of ENSO, to other problems in the geosciences, and to other fields.
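A bare-bones version of the least-squares idea can be shown on a toy DDE in the spirit of the Ghil-Zaliapin-Thompson model, dh/dt = −tanh(κ·h(t−τ)) + b·cos(2πt). The sketch below integrates it with forward Euler and recovers the delay τ by a grid search over the sum of squared errors; the parameter values, grid, and noiseless setup are all invented for illustration (the talk uses simulated annealing and noisy data).

```python
import numpy as np

def simulate(kappa, tau, b, dt=0.01, T=40.0):
    """Forward-Euler integration of dh/dt = -tanh(kappa*h(t-tau)) + b*cos(2*pi*t)."""
    n = int(T / dt)
    lag = int(round(tau / dt))
    h = np.zeros(n)
    h[:lag + 1] = 1.0                        # constant history on [-tau, 0]
    for i in range(lag, n - 1):
        h[i + 1] = h[i] + dt * (-np.tanh(kappa * h[i - lag])
                                + b * np.cos(2 * np.pi * i * dt))
    return h

obs = simulate(kappa=11.0, tau=0.6, b=1.0)   # synthetic "observations"

# Ordinary least squares over a 1-D grid of candidate delays.
taus = np.arange(0.3, 0.91, 0.05)
sse = [np.sum((simulate(11.0, tau, 1.0) - obs) ** 2) for tau in taus]
tau_hat = taus[int(np.argmin(sse))]
print(tau_hat)
```

With more parameters the SSE surface becomes rugged, which is why the talk resorts to simulated annealing rather than grid search.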
Estimating demographic parameters using hidden process dynamic models.
Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger
2012-12-01
Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775
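The hidden-process structure the abstract describes (a true-state time series plus an imperfect observation series) can be made concrete with a toy capture-recapture likelihood: an animal's hidden state is Alive or Dead with survival probability φ, and an alive animal is detected with probability p. The forward algorithm below computes the likelihood of one capture history; the numbers are invented and the model is a deliberately minimal stand-in for the paper's case studies.

```python
import numpy as np

def capture_history_likelihood(history, phi, p):
    """Forward algorithm for a 2-state hidden Markov chain [Alive, Dead].

    history: 1 = detected, 0 = not detected; first occasion is the release
    (animal known alive), so emissions start at the second occasion.
    """
    T = np.array([[phi, 1.0 - phi],     # Alive -> Alive / Dead
                  [0.0, 1.0]])          # Dead is absorbing
    def emit(obs):
        # P(obs | state): a dead animal is never detected
        return np.array([p if obs else 1.0 - p, 0.0 if obs else 1.0])
    alpha = np.array([1.0, 0.0])        # released alive with certainty
    for obs in history[1:]:
        alpha = (alpha @ T) * emit(obs)
    return float(alpha.sum())

lik = capture_history_likelihood([1, 0, 1], phi=0.8, p=0.5)
print(lik)  # phi*(1-p) * phi*p = 0.8*0.5 * 0.8*0.5 = 0.16
```

Summing this likelihood over all observed histories and maximizing over (φ, p) is exactly the frequentist route the first case study takes; the Bayesian route samples the same likelihood with MCMC.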
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable. PMID:25763948
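The speedup mechanism behind reduced order quadrature can be illustrated schematically: if all template waveforms lie near a low-dimensional subspace, then overlap sums against the full frequency grid collapse to sums over a modest number of basis weights. The sketch below uses a toy 1-D waveform family and an SVD basis (the paper builds its basis with greedy/empirical-interpolation methods inside the LIGO Algorithm Library; nothing here reproduces that pipeline).

```python
import numpy as np

freqs = np.linspace(20.0, 512.0, 4000)
params = np.linspace(1.0, 2.0, 300)                 # toy 1-D parameter axis
templates = np.array([np.sin(2 * np.pi * p * np.sqrt(freqs)) for p in params])

# Reduced basis via SVD: keep modes above a relative tolerance.
U, s, Vt = np.linalg.svd(templates, full_matrices=False)
rank = int(np.searchsorted(-s / s[0], -1e-6))       # count of s/s[0] > 1e-6
basis = Vt[:rank]

data = templates[137] + 0.01 * np.random.default_rng(7).normal(size=freqs.size)
full = templates @ data                             # cost ~ n_params * n_freqs per likelihood
weights = templates @ basis.T                       # precomputed once, offline
reduced = weights @ (basis @ data)                  # online cost ~ n_params * rank
print(rank, float(np.max(np.abs(full - reduced))))
```

The online evaluation replaces a length-4000 frequency sum by a length-`rank` sum per template, which is the ~30-150x class of saving the abstract quotes for real detector configurations.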
CosmoSIS: A system for MC parameter estimation
Bridle, S.; Dodelson, S.; Jennings, E.; Kowalkowski, J.; Manzotti, A.; Paterno, M.; Rudd, D.; Sehrish, S.; Zuntz, J.
2015-01-01
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, nothing about the framework itself, nor the Markov Chain Monte Carlo technique, is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
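The sampler/module split that the abstract emphasizes is easy to sketch generically: a Metropolis-Hastings driver that only ever calls a black-box log-likelihood function. The "module" below is a toy Gaussian, not a cosmological likelihood, and the driver is a minimal illustration of the pattern rather than any CosmoSIS sampler.

```python
import numpy as np

def log_like(theta):
    # Toy "module": Gaussian likelihood with mean 3.0 and width 0.5.
    return -0.5 * ((theta - 3.0) / 0.5) ** 2

def metropolis(log_like, start, step, n_samples, seed=0):
    """Minimal Metropolis-Hastings sampler over a 1-D parameter."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_samples)
    chain[0] = start
    ll = log_like(start)
    for i in range(1, n_samples):
        prop = chain[i - 1] + step * rng.normal()
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:    # accept with prob min(1, L'/L)
            chain[i], ll = prop, ll_prop
        else:
            chain[i] = chain[i - 1]
    return chain

chain = metropolis(log_like, start=0.0, step=0.8, n_samples=20000)
post_mean, post_std = chain[2000:].mean(), chain[2000:].std()
print(post_mean, post_std)
```

Because the driver knows nothing about what `log_like` computes, swapping the physics module changes the problem without touching the sampler, which is exactly the portability argument the paper makes for using the framework outside cosmology.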
Estimates of genetic parameters for fat yield in Murrah buffaloes
Kumar, Manoj; Vohra, Vikas; Ratwan, Poonam; Valsalan, Jamuna; Patil, C. S.; Chakravarty, A. K.
2016-01-01
Aim: The present study investigated the effects of genetic and non-genetic factors on milk fat yield and estimated genetic parameters of monthly test day fat yields (MTDFY) and lactation 305-day fat yield (L305FY) in Murrah buffaloes. Materials and Methods: Data on a total of 10,381 MTDFY records comprising the first four lactations of 470 Murrah buffaloes that calved from 1993 to 2014 were assessed. These buffaloes were sired by 75 bulls maintained in an organized farm at ICAR-National Dairy Research Institute, Karnal. A least squares maximum likelihood program was used to estimate genetic and non-genetic parameters. Heritability estimates were obtained using the paternal half-sib correlation method. Genetic and phenotypic correlations among MTDFY and 305-day fat yield were calculated from the analysis of variance and covariance matrix among sire groups. Results: The overall least squares mean of L305FY was found to be 175.74±4.12 kg. The least squares means of overall MTDFY ranged from 3.33±0.14 kg (TD-11) to 7.06±0.17 kg (TD-3). The h2 estimate of L305FY was 0.33±0.16 in this study. The estimates of phenotypic and genetic correlations between 305-day fat yield and the different MTDFY ranged from 0.32 to 0.48 and 0.51 to 0.99, respectively. Conclusions: In this study, all genetic and non-genetic factors except age at first calving group significantly affected the traits under study. The phenotypic and genetic correlations of MTDFY with 305-day fat yield were generally highest for MTDFY-5, suggesting that this test day yield could be used as a selection criterion for early evaluation and selection of Murrah buffaloes. PMID:27057114
ERIC Educational Resources Information Center
Karkee, Thakur B.; Wright, Karen R.
2004-01-01
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
Sediment load estimation using statistical distributions with streamflow dependent parameters
NASA Astrophysics Data System (ADS)
Mailhot, A.; Rousseau, A. N.; Talbot, G.; Quilbé, R.
2005-12-01
The classical approaches to estimating sediment and chemical loads are all deterministic: averaging methods, ratio estimators, regression methods (rating curves) and planning-level load estimation methods. However, none of these methods is satisfactory, since they are often inaccurate and neither account for nor quantify uncertainty. To fill this gap, statistical methods have to be investigated. This presentation proposes a new statistical method in which sediment concentration is treated as a random variable described by distribution functions. Three types of distributions are considered: Log-Normal, Gamma and Weibull. Correlation between sediment concentrations and streamflows is integrated into the model by assuming that the distribution parameters (mean and coefficient of variation) are related to streamflow through several different functional forms: exponential, quadratic and power-law forms for the mean; constant and linear for the coefficient of variation. Parameters are estimated by maximizing the likelihood function. This approach is applied to a data set (1989 to 2004) from the Beaurivage River (Quebec, Canada) with weekly to monthly sampling of sediment concentration. A comparison of different models (selection of a distribution function with functional forms relating the mean and the coefficient of variation to streamflow) shows that the Log-Normal distribution with a power-law mean and a coefficient of variation independent of streamflow provides the best result. When comparing annual load results with those obtained using deterministic methods, we observe that ratio estimator values are rarely within the [0.1, 0.9] quantile interval. For the 1997-2004 period, ratio estimator values are almost systematically smaller than the 0.1 quantile. This could presumably be due to the small number of sediment concentration samples for these years. This study suggests that, if deterministic methods such as the ratio estimator
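The best model the abstract identifies (Log-Normal concentrations with a power-law mean in streamflow and a constant coefficient of variation) can be fit by maximum likelihood in a few lines. The sketch below does so on synthetic data; the parameter values, sample size, and optimizer choice are invented for illustration and do not come from the Beaurivage River analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
Q = rng.uniform(5.0, 100.0, size=400)              # streamflow samples
a_true, b_true, cv_true = 2.0, 0.7, 0.4            # mean m(Q) = a * Q**b, constant CV
sig2 = np.log1p(cv_true ** 2)                      # lognormal: sigma^2 = ln(1 + CV^2)
mu = np.log(a_true * Q ** b_true) - 0.5 * sig2     # so that E[C] = m(Q)
C = np.exp(mu + np.sqrt(sig2) * rng.normal(size=Q.size))

def nll(params):
    """Negative log-likelihood of C ~ LogNormal(mean a*Q^b, coefficient of variation cv)."""
    a, b, cv = params
    if a <= 0 or cv <= 0:
        return np.inf
    s2 = np.log1p(cv ** 2)
    m = np.log(a * Q ** b) - 0.5 * s2
    z = np.log(C)
    return 0.5 * np.sum((z - m) ** 2 / s2 + np.log(s2)) + z.sum()

fit = minimize(nll, x0=[1.0, 0.5, 0.3], method="Nelder-Mead")
a_hat, b_hat, cv_hat = fit.x
print(a_hat, b_hat, cv_hat)
```

Once fit, load quantiles follow by simulating concentrations from the fitted distribution at observed streamflows, which is how the quantile intervals that the ratio estimator falls outside of would be produced.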
Robust Bayesian estimation of nonlinear parameters on SE(3) Lie group
NASA Astrophysics Data System (ADS)
Kuehnel, Frank O.
2004-11-01
The basic challenge in autonomous robotic exploration is to safely interact with natural environments. An essential part of that challenge is 3D map building. In robotics research this problem is addressed as simultaneous localization and mapping (SLAM); in computer vision it is termed structure from motion (SFM). The common underlying problem is the accurate estimation of the camera pose. Uncertainty information about the pose estimates is essential for a recursive inference scheme. We show that the pose parametrization plays an important role in the finite parametric representation. In the case of sparse observations (weak evidence), the full exponential Lie Cartan coordinates of the first kind are most suitable when assuming a Gaussian noise model on the measurements. Further, we address pose estimation from a sequence of images and introduce the marginalized MAP estimator, which is numerically more stable and efficient than the joint estimate (bundle adjustment) used in computer vision.
Estimation of Eruption Source Parameters from Plume Growth Rate
NASA Astrophysics Data System (ADS)
Pouget, Solene; Bursik, Marcus; Webley, Peter; Dehn, Jon; Pavalonis, Michael; Singh, Tarunraj; Singla, Puneet; Patra, Abani; Pitman, Bruce; Stefanescu, Ramona; Madankan, Reza; Morton, Donald; Jones, Matthew
2013-04-01
The eruption of Eyjafjallajokull, Iceland, in April and May 2010 brought to light the hazards of airborne volcanic ash and the importance of Volcanic Ash Transport and Dispersion (VATD) models for estimating the concentration of ash over time. These models require Eruption Source Parameters (ESP) as input, which typically include the plume height, the mass eruption rate, the duration of the eruption and the particle size distribution. Much of the time, however, these ESP are unknown or poorly known a priori. We show that the mass eruption rate can be estimated from the growth rate of the downwind plume or umbrella cloud. A simple version of the continuity equation can be applied to the growth of either an umbrella cloud or the downwind plume. The continuity equation coupled with the momentum equation, using only inertial and gravitational terms, provides another model. Numerical modeling or scaling relationships can be used, as necessary, to provide values for unknown or unavailable parameters. Applying these models to data on plume geometry provided by satellite imagery allows direct estimation of plume volumetric and mass growth with time. To test our methodology, we compared our results with five well-studied and well-characterized historical eruptions: Mount St. Helens, 1980; Pinatubo, 1991; Redoubt, 1990; Hekla, 2000; and Eyjafjallajokull, 2010. These tests show that the methodologies yield results comparable to or better than currently accepted methods of ESP estimation. We then applied the methodology to umbrella clouds produced by the eruptions of Okmok, 12 July 2008, and Sarychev Peak, 12 June 2009, and to the downwind plumes produced by the eruptions of Hekla, 2000; Kliuchevskoi, 1 October 1994; Kasatochi, 7-8 August 2008; and Bezymianny, 1 September 2012. The new methods allow a fast, remote assessment of the mass eruption rate, even for remote volcanoes. They thus provide an additional path to estimation of the ESP and the forecasting
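The continuity-equation idea reduces to a back-of-envelope calculation: differentiate a time series of cloud volumes (area from satellite imagery times an assumed thickness) and multiply by an assumed bulk ash concentration to get a mass rate. Every number below is invented, including the thickness and concentration, which in practice come from the scaling relationships or numerical modeling the abstract mentions.

```python
import numpy as np

t = np.array([0.0, 600.0, 1200.0, 1800.0])          # s since eruption onset
area = np.array([50.0, 210.0, 480.0, 860.0]) * 1e6  # umbrella-cloud area, m^2 (from imagery)
thickness = 1000.0                                   # assumed cloud thickness, m
ash_conc = 1.0e-2                                    # assumed mean ash concentration, kg/m^3

volume = area * thickness                            # m^3
dVdt = np.gradient(volume, t)                        # volumetric growth rate, m^3/s
mass_rate = ash_conc * dVdt                          # kg/s, order-of-magnitude estimate
print(mass_rate)
```

The appeal, as the abstract notes, is that everything on the right-hand side is observable or estimable remotely, so the mass eruption rate can be assessed for volcanoes with no ground instrumentation.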
NEWBOX: A computer program for parameter estimation in diffusion problems
Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.
1989-01-01
In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least three important considerations: (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, the authors calculate the fraction of material transferred as a function of time from expressions obtained by inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the three considerations listed at the outset, they developed a computer program, NEWBOX, usable on a personal computer, to calculate the fractional release of material from four different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions.
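A derivative-free fit of a diffusion coefficient to fractional-release data, with the positivity constraint handled cleanly, can be sketched as follows. This is illustrative only: it uses the short-time slab release formula F(t) = 2·sqrt(D·t/π)/L, synthetic data, and scipy's Nelder-Mead over log D as a simpler stand-in for Box's constrained complex method (parametrizing by log D keeps D > 0 without explicit constraints).

```python
import numpy as np
from scipy.optimize import minimize

L = 0.01                                    # slab thickness, m (invented)
D_true = 1.0e-12                            # m^2/s (synthetic truth)
t = np.linspace(3600.0, 86400.0, 20)        # observation times, 1 h to 1 day
rng = np.random.default_rng(3)
# synthetic fractional-release data with 2% multiplicative noise
F_obs = 2.0 * np.sqrt(D_true * t / np.pi) / L * (1.0 + 0.02 * rng.normal(size=t.size))

def sse(log_D):
    """Sum of squared errors of the short-time slab release model."""
    F = 2.0 * np.sqrt(np.exp(log_D[0]) * t / np.pi) / L
    return np.sum((F - F_obs) ** 2)

fit = minimize(sse, x0=[np.log(1e-13)], method="Nelder-Mead")
D_hat = float(np.exp(fit.x[0]))
print(D_hat)
```

NEWBOX instead evaluates the Laplace-transform-inverted release expressions for each of its four geometries, but the fitting loop has the same shape: a direct-search simplex over constrained parameters, with no derivatives of the model required.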
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA, in the fall of 1994. The joint probability of the model parameters relevant to the Duck94 experiment will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them in the uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
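The Monte Carlo propagation step the abstract describes has a simple generic shape: draw the uncertain inputs from their distributions, run the deterministic model once per draw, and summarize the spread of the outputs. The sketch below uses a crude one-line breaking-wave formula as a stand-in for a full Delft3D run; the input distributions and the formula are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
H_off = rng.normal(2.0, 0.2, n)        # offshore wave height, m (uncertain input)
slope = rng.uniform(0.01, 0.03, n)     # beach slope (uncertain input)

def toy_model(H, s):
    """Stand-in "deterministic model": a crude shoaling/breaking rule."""
    return 0.78 * (H ** 0.8) * (s / 0.02) ** 0.1

Hb = toy_model(H_off, slope)           # one model run per input draw
p5, p50, p95 = np.percentile(Hb, [5, 50, 95])
print(p5, p50, p95)                    # output uncertainty band
```

In the Bayesian variant the study advocates, the input distributions above would be replaced by posteriors conditioned on the Duck94 observations, which narrows the resulting output band.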
Recursive adaptive frame integration limited
NASA Astrophysics Data System (ADS)
Rafailov, Michael K.
2006-05-01
Recursive Frame Integration Limited was proposed as a way to improve frame-integration performance and mitigate issues related to the high data rate needed for conventional frame integration. The technique applies two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - and allows a nonlinear integration process that, along with Signal-to-Noise Ratio (SNR) gain, gives system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when the single-frame SNR is very low. Recursive Adaptive Frame Integration Limited is proposed as a means to improve limited-integration performance at very low single-frame SNR. It combines the benefits of nonlinear recursive limited frame integration and adaptive thresholds with a form of conventional frame integration.
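A minimal sketch of the dual-threshold idea, under one plausible reading of the abstract (the actual algorithm is not specified there): a low threshold limits which frame values enter a leaky recursive accumulator, managing false alarms and data rate, while a higher threshold on the accumulator declares detection. The thresholds, leak factor, and signal statistics below are all invented.

```python
import random

def recursive_limited_integration(frames, t_low, t_detect, leak=0.8):
    """Toy dual-threshold recursive integrator: only frame values clearing
    the low threshold contribute (the nonlinear 'limited' step), and the
    accumulator is updated recursively so only one frame of memory is kept.
    Detection is declared when the accumulator clears t_detect."""
    acc = 0.0
    for x in frames:
        contrib = x if x > t_low else 0.0  # limit step: manage false alarms
        acc = leak * acc + contrib         # recursive, one-frame memory
        if acc > t_detect:
            return True
    return False

rng = random.Random(1)
noise_only = [rng.gauss(0.0, 1.0) for _ in range(200)]
with_target = [rng.gauss(1.5, 1.0) for _ in range(200)]  # low single-frame SNR
```

With these invented settings the integrator accumulates the weak target over many frames while the noise-only sequence stays below the detection threshold.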
Recursion relations from soft theorems
NASA Astrophysics Data System (ADS)
Luo, Hui; Wen, Congkao
2016-03-01
We establish a set of new on-shell recursion relations for amplitudes satisfying soft theorems. The recursion relations can be applied to those amplitudes whose additional physical inputs from soft theorems are enough to overcome the bad large-z behaviour. This work is a generalization of the recursion relations recently obtained by Cheung et al. for amplitudes in scalar effective field theories with enhanced vanishing soft behaviours, which can be regarded as a special case of those with non-vanishing soft limits. We apply the recursion relations to tree-level amplitudes in various theories, including amplitudes in the Akulov-Volkov theory and amplitudes containing dilatons of spontaneously broken conformal symmetry.
Recursive Algorithm For Linear Regression
NASA Technical Reports Server (NTRS)
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model that fits set of data satisfactorily.
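One way such order-recursive equations can work is by orthogonalizing each new regressor against those already in the model, so raising the order reuses all previous computation instead of refitting from scratch; the sketch below is a hypothetical reconstruction of that idea, not the NTRS algorithm itself.

```python
import numpy as np

def order_recursive_fit(x, y, max_order, tol=1e-6):
    """Order-recursive least squares: regressors (powers of x) are added one
    at a time and orthogonalized against those already in the model, so no
    earlier computation is repeated.  Stops at the lowest order whose
    residual improvement falls below tol."""
    resid = y.astype(float).copy()
    basis = []                              # orthonormal regressors kept so far
    for order in range(max_order + 1):
        q = x.astype(float) ** order
        for b in basis:                     # recursive step: reuse prior basis
            q = q - (q @ b) * b
        norm = np.linalg.norm(q)
        if norm < 1e-12:                    # new regressor adds nothing
            break
        q /= norm
        prev_sse = resid @ resid
        resid = resid - (resid @ q) * q     # recursive residual update
        basis.append(q)
        if prev_sse - resid @ resid < tol:  # raising the order no longer helps
            return order - 1, resid
    return max_order, resid

x = np.linspace(-1, 1, 50)
y = 2.0 - 1.0 * x + 0.5 * x**2              # exactly quadratic data
order, resid = order_recursive_fit(x, y, max_order=6)
```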
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. PMID:23579098
Efficient material parameters estimation with terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Ahmed, Osman S.; Swillam, Mohamed A.; Bakr, Mohamed H.; Li, Xun
2011-02-01
Existing parameter extraction techniques in the terahertz range utilize the magnitude and phase of the transmission function at different frequencies. The number of unknowns is larger than the amount of available information, creating a nonuniqueness problem. The estimation of the material thickness thus suffers from inaccuracies. We propose a novel optimization technique for the estimation of the material refractive index in the terahertz frequency range. The algorithm is applicable to materials with arbitrary frequency dependence. Dispersive dielectric models are embedded for accurate parameter extraction of a sample with unknown thickness. Instead of solving N expensive nonlinear optimization problems, one for each possible material thickness, our technique obtains the optimal material thickness by solving only one optimization problem. The solution of the utilized optimization problem is accelerated by estimating both the first-order derivatives (gradient) and second-order derivatives (Hessian) of the objective function and supplying them to the optimizer. Our approach has been successfully illustrated through a number of examples with different dispersive models. The examples include the characterization of carbon nanotubes. The technique has also been successfully applied to materials characterized by the Cole-Cole, Debye, and Lorentz models.
Trapping phenomenon of the parameter estimation in asymptotic quantum states
NASA Astrophysics Data System (ADS)
Berrada, K.
2016-09-01
In this paper, we study in detail the behavior of the precision of the parameter estimation in open quantum systems using the quantum Fisher information (QFI). In particular, we study the sensitivity of the estimation on a two-qubit system evolving under Kossakowski-type quantum dynamical semigroups of completely positive maps. In such an environment, the precision of the estimation can even persist asymptotically for different effects of the initial parameters. We find that the QFI can be resistant to the action of the environment with respect to the initial asymptotic states, and it can persist even in the asymptotic long-time regime. In addition, our results provide further evidence that the initial pure and separable mixed states of the input state may enhance quantum metrology. These features make quantum states in this kind of environment a good candidate for the implementation of different schemes of quantum optics and information with high precision. Finally, we show that this quantity may be proposed to detect the amount of the total quantum information that the whole state contains with respect to projective measurements.
Temporal parameters estimation for wheelchair propulsion using wearable sensors.
Ojeda, Manoela; Ding, Dan
2014-01-01
Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations, and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion. PMID:25105133
Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
Spatial dependence clusters in the estimation of forest structural parameters
NASA Astrophysics Data System (ADS)
Wulder, Michael Albert
1999-12-01
In this thesis we provide a summary of the methods by which remote sensing may be applied in forestry, while also acknowledging the various limitations which are faced. The application of spatial statistics to high spatial resolution imagery is explored as a means of increasing the information which may be extracted from digital images. A number of high spatial resolution optical remote sensing satellites that are soon to be launched will increase the availability of imagery for the monitoring of forest structure. This technological advancement is timely, as current forest management practices have been altered to reflect the need for sustainable ecosystem-level management. The low accuracy with which forest structural parameters have been estimated in the past is partly due to low image spatial resolution. A large pixel is often composed of a number of surface features, resulting in a spectral value which is due to the reflectance characteristics of all surface features within that pixel. In the case of small pixels, a portion of a surface feature may be represented by a single pixel. When a single pixel represents a portion of a surface object, the potential to isolate distinct surface features exists. Spatial statistics, such as the Getis statistic, provide an image processing method to isolate distinct surface features. In this thesis, high spatial resolution imagery sensed over a forested landscape is processed with spatial statistics to combine distinct image objects into clusters, representing individual or groups of trees. Tree clusters are a means to deal with the inevitable foliage overlap which occurs within complex mixed and deciduous forest stands. The generation of image objects, that is, clusters, is necessary to deal with the presence of spectrally mixed pixels. The ability to estimate forest inventory and biophysical parameters from image clusters generated from spatially dependent image features is tested in this thesis. The inventory
Hopf algebras and topological recursion
NASA Astrophysics Data System (ADS)
Esteves, João N.
2015-11-01
We consider a model for topological recursion based on the Hopf algebra of planar binary trees defined by Loday and Ronco (1998 Adv. Math. 139 293-309). We show that, by extending this Hopf algebra by identifying pairs of nearest-neighbor leaves, and thus producing graphs with loops, we obtain the full recursion formula discovered by Eynard and Orantin (2007 Commun. Number Theory Phys. 1 347-452).
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
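The core of the equation-error approach is that, once state derivatives are available, the aerodynamic model is linear in the unknown parameters, so estimation reduces to ordinary least squares. A toy sketch with an invented pitch-moment equation (the coefficients and data are hypothetical, not the F-16 model from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear pitch-moment model: q_dot = M_a*alpha + M_q*q + M_d*delta.
# Given measured (or smoothed-and-differentiated) q_dot, the stability and
# control derivatives follow from a single linear least-squares fit.
M_true = np.array([-4.0, -1.2, -6.5])          # invented M_a, M_q, M_d
n = 500
X = rng.normal(size=(n, 3))                    # alpha, q, delta time histories
q_dot = X @ M_true + rng.normal(0.0, 0.2, n)   # derivative with sensor noise

M_hat, *_ = np.linalg.lstsq(X, q_dot, rcond=None)
```

The paper's practical issues enter exactly here: differentiating noisy time series inflates the noise on `q_dot` and biases the estimates, and correlated columns of `X` (data collinearity) degrade the conditioning of the fit.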
Parameter Estimation for a Model of Space-Time Rainfall
NASA Astrophysics Data System (ADS)
Smith, James A.; Karr, Alan F.
1985-08-01
In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi2 (621 km2) catchment in the Potomac River basin.
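A method-of-moments fit of the kind developed in the paper can be illustrated on a deliberately simplified daily-rainfall model (Poisson storm counts, exponential storm depths; both assumptions are ours, not the paper's). Matching the sample mean and variance to their model expressions E[R] = λμ and Var[R] = 2λμ² gives closed-form estimators.

```python
import math
import random
import statistics

def poisson(rng, lam):
    """Poisson draw via Knuth's method (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical daily-rainfall model: storm count ~ Poisson(lam), each storm
# depth ~ Exponential with mean mu, so E[R] = lam*mu, Var[R] = 2*lam*mu**2.
rng = random.Random(3)
lam_true, mu_true = 0.3, 12.0
totals = []
for _ in range(100000):
    n_storms = poisson(rng, lam_true)
    totals.append(sum(rng.expovariate(1.0 / mu_true) for _ in range(n_storms)))

mean, var = statistics.fmean(totals), statistics.pvariance(totals)
mu_hat = var / (2.0 * mean)     # invert the two moment equations
lam_hat = mean / mu_hat
```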
Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation based on indicial functions separated the resulting aerodynamic forces and moments into static terms, purely rotary terms, and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.
Earth-moon system: Dynamics and parameter estimation
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1975-01-01
A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.
On Spectral Classification and Astrophysical Parameter Estimation for Galactic Surveys
NASA Astrophysics Data System (ADS)
Re Fiorentin, Paola; Bailer-Jones, Coryn A. L.; Beers, Timothy C.; Zwitter, Tomaž
2008-12-01
We present several strategies that are being developed in order to classify and parameterize individual stars observed by Galactic surveys, and illustrate some results obtained from spectra obtained by the RAdial Velocity Experiment (RAVE) and the Sloan Digital Sky Survey (SDSS/SEGUE). We demonstrate the efficiency of our models for discrete source classification and stellar atmospheric parameter estimation (effective temperature, surface gravity, and metallicity), which use supervised machine learning algorithms along with a principal component analysis front-end compression phase that also enables knowledge discovery.
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof
2013-07-15
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.
Estimation of Modal Parameters Using a Wavelet-Based Approach
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Haley, Sidney M.
1997-01-01
Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.
Source parameter estimation in inhomogeneous volume conductors of arbitrary shape.
Oostendorp, T F; van Oosterom, A
1989-03-01
In this paper it is demonstrated that the use of a direct matrix inverse in the solution of the forward problem in volume conduction problems greatly facilitates the application of standard, nonlinear parameter estimation procedures for finding the strength as well as the location of current sources inside an inhomogeneous volume conductor of arbitrary shape from potential measurements at the outer surface (inverse procedure). This, in turn, facilitates the inclusion of a priori constraints. Where possible, the performance of the method is compared to that of the Gabor-Nelson method. Applications are in the fields of bioelectricity (e.g., electrocardiography and electroencephalography). PMID:2921073
Parameter estimation using NOON states over a relativistic quantum channel
NASA Astrophysics Data System (ADS)
Hosler, Dominic; Kok, Pieter
2013-11-01
We study the effect of the acceleration of the observer on a parameter estimation protocol using NOON states. An inertial observer, Alice, prepares a NOON state in Unruh modes of the quantum field, and sends it to an accelerated observer, Rob. We calculate the quantum Fisher information of the state received by Rob. We find the counterintuitive result that the single-rail encoding outperforms the dual rail. The NOON states have an optimal N for the maximum information extractable by Rob, given his acceleration. This optimal N decreases with increasing acceleration.
Estimation of modal parameters using bilinear joint time frequency distributions
NASA Astrophysics Data System (ADS)
Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.
2007-07-01
In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. Smoothed Pseudo Wigner-Ville distribution which is a member of the Cohen's class distributions is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces cross-terms which are troublesome in Wigner-Ville distribution and retains the resolution as well. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.
Statistical parameter estimation in ultrasound backscattering from tissue mimicking media
Chen, J.F.
1994-12-31
Several tissue characterization parameters, including the effective scatterer number density and the backscatter coefficient, were derived from the statistical properties of ultrasonic echo signals. The effective scatterer number density is the actual scatterer number density in a medium multiplied by a frequency-dependent factor that depends on the differential scattering cross-sections of all scatterers. The method described in this thesis for determining the scatterer number density explicitly retains both the temporal nature of the data acquisition and the properties of the ultrasound field in the data reduction. Moreover, it accounts for the possibility that different sets of scatterers may dominate the echo signal at different frequencies. The random processes involved in forming ultrasound echo signals from random media give rise to an uncertainty in the estimated effective scatterer number density. This uncertainty is evaluated using error propagation. The statistical uncertainty depends on the effective number of scatterers contributing to the segmented echo signal, increasing when the effective number of scatterers increases. Tests of the scatterer number density data reduction method and the statistical uncertainty estimator were done using phantoms with known ultrasound scattering properties. Good agreement was found between measured values and those calculated from first principles. The properties of the non-Gaussian and non-Rayleigh parameters of ultrasound echo signals are also studied. Both parameters depend on the measurement system, including the transducer field and pulse frequency content, as well as on the medium's properties. The latter is expressed in terms of the scatterer number density and the second and fourth moments of the medium's scattering function. A simple relationship between the non-Gaussian and non-Rayleigh parameters is derived and verified experimentally.
NASA Astrophysics Data System (ADS)
Yong, Kilyuk; Jo, Sujang; Bang, Hyochoong
This paper presents a modified Rodrigues parameter (MRP)-based nonlinear observer design to estimate bias, scale factor and misalignment of gyroscope measurements. A Lyapunov stability analysis is carried out for the nonlinear observer. Simulation is performed and results are presented illustrating the performance of the proposed nonlinear observer under the condition of persistent excitation maneuver. In addition, a comparison between the nonlinear observer and alignment Kalman filter (AKF) is made to highlight favorable features of the nonlinear observer.
Reduced order parameter estimation using quasilinearization and quadratic programming
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Putti, Mario; Yeh, William W.-G.
2012-06-01
The ability of a particular model to accurately predict how a system responds to forcing is predicated on various model parameters that must be appropriately identified. There are many algorithms whose purpose is to solve this inverse problem, which is often computationally intensive. In this study, we propose a new algorithm that significantly reduces the computational burden associated with parameter identification. The algorithm is an extension of the quasilinearization approach where the governing system of differential equations is linearized with respect to the parameters. The resulting inverse problem therefore becomes a linear regression or quadratic programming problem (QP) for minimizing the sum of squared residuals; the solution becomes an update on the parameter set. This process of linearization and regression is repeated until convergence takes place. This algorithm has not received much attention, as the QPs can become quite large, often infeasible for real-world systems. To alleviate this drawback, proper orthogonal decomposition is applied to reduce the size of the linearized model, thereby reducing the computational burden of solving each QP. In fact, this study shows that the snapshots need only be calculated once at the very beginning of the algorithm, after which no further calculations of the reduced-model subspace are required. The proposed algorithm therefore only requires one linearized full-model run per parameter at the first iteration followed by a series of reduced-order QPs. The method is applied to a groundwater model with about 30,000 computation nodes where as many as 15 zones of hydraulic conductivity are estimated.
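The quasilinearization loop itself (linearize the model with respect to the parameters, solve the resulting least-squares/QP subproblem for an update, iterate to convergence) can be sketched without the reduced-order machinery. The two-parameter exponential toy model, the finite-difference sensitivities, and the simple step damping are our illustrative simplifications, not the paper's groundwater implementation.

```python
import numpy as np

def quasilinearize(model, p0, y_obs, n_iter=50, h=1e-6, tol=1e-12):
    """Sketch of quasilinearization: linearize the model output with respect
    to the parameters, solve the linear least-squares subproblem for an
    update, and repeat until the update is negligible."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y_obs - model(p)                          # current residuals
        J = np.empty((r.size, p.size))
        for j in range(p.size):                       # parameter sensitivities
            dp = np.zeros_like(p)
            dp[j] = h
            J[:, j] = (model(p + dp) - model(p - dp)) / (2.0 * h)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # the QP/regression step
        lam = 1.0                                     # damping safeguard:
        while lam > 1e-8:                             # halve until misfit drops
            p_new = p + lam * step
            r_new = y_obs - model(p_new)
            if r_new @ r_new <= r @ r:
                break
            lam *= 0.5
        p = p_new
        if np.linalg.norm(lam * step) < tol:
            break
    return p

# Toy stand-in for the groundwater model: two-parameter exponential decay.
t = np.linspace(0.1, 5.0, 40)
model = lambda p: p[0] * np.exp(-p[1] * t)
p_true = np.array([2.0, 0.7])
p_est = quasilinearize(model, [1.0, 1.0], model(p_true))
```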
Technology Transfer Automated Retrieval System (TEKTRAN)
Vegetation affects the ability to estimate soil moisture from passive microwave observations by attenuating the surface soil moisture signal. To use radiobrightness observations in land data assimilation a vegetation opacity parameter is required as input to a radiative transfer model, which maps su...
Periodic orbits of hybrid systems and parameter estimation via AD.
Guckenheimer, John.; Phipps, Eric Todd; Casey, Richard
2004-07-01
Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impedes the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
NASA Astrophysics Data System (ADS)
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
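One informal, likelihood-free calibration of the kind alluded to can be sketched as rejection sampling against a goodness-of-fit distance: keep the parameter draws whose simulated series is close enough to the observations, and read parameter uncertainty off the kept draws. The toy logistic "epidemic" stands in for the mechanistic cholera model; all numbers are invented.

```python
import random
import statistics

def simulate_epidemic(beta, n_steps=30, i0=0.01):
    """Stand-in for a mechanistic epidemic model: logistic growth of the
    infected fraction with transmission parameter beta."""
    i, series = i0, []
    for _ in range(n_steps):
        i = i + beta * i * (1.0 - i)
        series.append(i)
    return series

def abc_rejection(observed, prior_sample, distance_tol, n_draws=5000, seed=0):
    """Informal calibration: accept draws whose simulated time series lies
    within distance_tol of the observations (a goodness-of-fit measure
    replacing a formal likelihood)."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        beta = prior_sample(rng)
        sim = simulate_epidemic(beta)
        if max(abs(s - o) for s, o in zip(sim, observed)) < distance_tol:
            kept.append(beta)
    return kept

observed = simulate_epidemic(0.4)   # synthetic "data" with known beta
kept = abc_rejection(observed, lambda rng: rng.uniform(0.1, 0.8), 0.02)
post_mean = statistics.fmean(kept)
```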
Estimation of genetic parameters for reproductive traits in alpacas.
Cruz, A; Cervantes, I; Burgos, A; Morante, R; Gutiérrez, J P
2015-12-01
One of the main deficiencies affecting animal breeding programs in Peruvian alpacas is the low reproductive performance, leading to a low number of animals available to select from and strongly decreasing the selection intensity. Some reproductive traits could be improved by artificial selection, but very little information about genetic parameters exists for these traits in this species. The aim of this study was to estimate genetic parameters for six reproductive traits in alpacas of both the Suri (SU) and Huacaya (HU) ecotypes, as well as their genetic relationship with fiber and morphological traits. A dataset from the Pacomarca experimental farm, collected between 2000 and 2014, was used. Numbers of records for age at first service (AFS), age at first calving (AFC), copulation time (CT), pregnancy diagnosis (PD), gestation length (GL), and calving interval (CI) were, respectively, 1704, 854, 19,770, 5874, 4290 and 934. The pedigree consisted of 7742 animals. For the reproductive traits, the model of analysis included additive and residual random effects for all traits, and also a permanent environmental effect for the CT, PD, GL and CI traits, with color and year of recording as fixed effects for all the reproductive traits and also age at mating and sex of calf for the GL trait. Estimated heritabilities, respectively, for HU and SU were 0.19 and 0.09 for AFS, 0.45 and 0.59 for AFC, 0.04 and 0.05 for CT, 0.07 and 0.05 for PD, 0.12 and 0.20 for GL, and 0.14 and 0.09 for CI. Genetic correlations between them ranged from -0.96 to 0.70. No important genetic correlations were found between reproductive traits and fiber or morphological traits in HU. However, some moderate favorable genetic correlations were found between reproductive and either fiber or morphological traits in SU. According to the estimated genetic correlations, some reproductive traits might be included as additional selection criteria in HU. PMID:26490188
Estimating Parameters of Aquifer Heterogeneity from Transient Pumping Test
NASA Astrophysics Data System (ADS)
Zech, Alraune; Müller, Sebastian; Attinger, Sabine
2015-04-01
We present a new method for interpreting drawdowns of transient pumping tests in heterogeneous porous media. The vast majority of natural aquifers are characterized by heterogeneity, which can be statistically represented by parameters such as the geometric mean K̄, variance σ², and correlation length ℓ of hydraulic conductivity. Our method can be understood as an extension of the effective well flow method [Zech et al., 2012] from steady state to transient pumping tests. It allows a direct parameter estimation of K̄, σ², and ℓ from head measurements under well flow conditions. The method is based on a representative description of hydraulic conductivity for radial flow regimes, K_CG. It was derived previously using the upscaling procedure Radial Coarse Graining in combination with log-normal hydraulic conductivity. A semi-analytical solution for the mean drawdown of transient pumping tests was derived by combining the upscaled solution for the radially adapted hydraulic conductivity K_CG and the groundwater flow equation under well flow conditions. The dependency of the drawdown solution on the statistical quantities of the porous medium allows us to inversely estimate K̄, σ², and ℓ from pumping test data. We used an ensemble of transient pumping test simulations to verify the drawdown solution. We generated pumping tests in heterogeneous media for various values of the statistical parameters K̄, σ², and ℓ and evaluated their impact on the drawdown behavior as well as on the temporal evolution. We further examined the impact of several aspects like the location of an observation well or the local conductivity at the pumping well on the drawdown behavior. Zech, A., C. L. Schneider, and S. Attinger, 2012, The Extended Thiem's solution: Including the impact of heterogeneity, Water Resour. Res., 48, W10535, doi:10.1029/2012WR011852.
Learn-as-you-go acceleration of cosmological parameter estimates
NASA Astrophysics Data System (ADS)
Aslanyan, Grigor; Easther, Richard; Price, Layne C.
2015-09-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
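A minimal sketch of the learn-as-you-go idea (not the Cosmo++ implementation; the nearest-neighbour surrogate, trust radius, and toy likelihood are all illustrative assumptions):

```python
import numpy as np

def slow_loglike(theta):
    # stand-in for an expensive likelihood evaluation
    return -0.5 * np.sum((theta - 1.0) ** 2)

class LearnAsYouGo:
    """Toy emulator: nearest-neighbour surrogate with a trust radius.
    Falls back to the exact likelihood when the nearest training point
    is too far away, and adds that exact evaluation to the training set."""
    def __init__(self, radius=0.3):
        self.X, self.y = [], []
        self.radius = radius
        self.exact_calls = 0

    def loglike(self, theta):
        if self.X:
            d = np.linalg.norm(np.array(self.X) - theta, axis=1)
            i = int(np.argmin(d))
            if d[i] < self.radius:
                return self.y[i]            # trusted emulated value
        val = slow_loglike(theta)           # exact fallback
        self.exact_calls += 1
        self.X.append(np.array(theta))
        self.y.append(val)
        return val

emu = LearnAsYouGo()
rng = np.random.default_rng(1)
samples = rng.normal(1.0, 0.2, size=(500, 2))   # mock posterior draws
vals = [emu.loglike(s) for s in samples]
print(emu.exact_calls)   # far fewer exact evaluations than 500
```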
Extracting galactic structure parameters from multivariated density estimation
NASA Technical Reports Server (NTRS)
Chen, B.; Creze, M.; Robin, A.; Bienayme, O.
1992-01-01
Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification), principal component analysis (a dimensionality-reduction method), and nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a set of equations comprising n lines (where n is the total number of observed points) and m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation via a Bayes rule.
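The fitting step can be sketched as follows, assuming a 1-D observable, Gaussian kernel density estimates standing in for the unknown densities, and m = 2 hypothetical stellar groups mixed in a 70/30 ratio:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# two simulated "population" samples and an observed 70/30 mixture
pop1 = rng.normal(0.0, 1.0, 2000)
pop2 = rng.normal(3.0, 0.7, 2000)
obs = np.concatenate([rng.normal(0.0, 1.0, 1400),
                      rng.normal(3.0, 0.7, 600)])

# nonparametric density estimates substitute for the true densities
f1, f2, fobs = gaussian_kde(pop1), gaussian_kde(pop2), gaussian_kde(obs)

# least-squares system with n rows (observed points) and m = 2 unknowns:
# fobs(x_i) ~ w1*f1(x_i) + w2*f2(x_i)
A = np.column_stack([f1(obs), f2(obs)])
w, *_ = np.linalg.lstsq(A, fobs(obs), rcond=None)
print(w)   # group weights, roughly [0.7, 0.3]
```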
Statistical Parameter Estimation in Ultrasound Backscattering from Tissue Mimicking Media.
NASA Astrophysics Data System (ADS)
Chen, Jian-Feng
Several tissue characterization parameters, including the effective scatterer number density and the backscatter coefficient, were derived from the statistical properties of ultrasonic echo signals. The effective scatterer number density is the actual scatterer number density in a medium multiplied by a frequency-dependent factor that depends on the differential scattering cross-sections of all scatterers. The method described in this thesis for determining the scatterer number density explicitly retains both the temporal nature of the data acquisition and the properties of the ultrasound field in the data reduction. Moreover, it accounts for the possibility that different sets of scatterers may dominate the echo signal at different frequencies. The random processes involved in forming ultrasound echo signals from random media give rise to an uncertainty in the estimated effective scatterer number density. This uncertainty is evaluated using error propagation. The statistical uncertainty depends on the effective number of scatterers contributing to the segmented echo signal, increasing when the effective number of scatterers increases. Tests of the scatterer number density data reduction method and the statistical uncertainty estimator were done using phantoms with known ultrasound scattering properties. Good agreement was found between measured values and those calculated from first-principles. The properties of the non-Gaussian and non-Rayleigh parameters of ultrasound echo signals are also studied. Both parameters depend on the measurement system, including the transducer field and pulse frequency content, as well as on the medium's properties. The latter is expressed in terms of the scatterer number density and the second and fourth moments of the medium's scattering function. A simple relationship between the non-Gaussian and non-Rayleigh parameters is derived and verified experimentally. Finally, a reference phantom method is proposed for measuring the
Automatic parameter estimation for atmospheric turbulence mitigation techniques
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron; Kelmelis, Eric
2015-05-01
Several image processing techniques for turbulence mitigation have been shown to be effective under a wide range of long-range capture conditions; however, complex, dynamic scenes have often required manual interaction with the algorithm's underlying parameters to achieve optimal results. While this level of interaction is sustainable in some workflows, in-field determination of ideal processing parameters greatly diminishes usefulness for many operators. Additionally, some use cases, such as those that rely on unmanned collection, lack human-in-the-loop usage. To address this shortcoming, we have extended a well-known turbulence mitigation algorithm based on bispectral averaging with a number of techniques to greatly reduce (and often eliminate) the need for operator interaction. Automations were made in the areas of turbulence strength estimation (Fried's parameter), as well as the determination of optimal local averaging windows to balance turbulence mitigation and the preservation of dynamic scene content (non-turbulent motions). These modifications deliver a level of enhancement quality that approaches that of manual interaction, without the need for operator interaction. As a consequence, the range of operational scenarios where this technology is of benefit has been significantly expanded.
ERIC Educational Resources Information Center
Shoemaker, David M.
Described and listed herein, with concomitant sample input and output, is the Fortran IV program which estimates parameters and standard errors of estimate for parameters estimated through multiple matrix sampling. The specific program is an improved and expanded version of an earlier one. (Author/BJG)
Estimation of Wheat Agronomic Parameters using New Spectral Indices
Jin, Xiu-liang; Diao, Wan-ying; Xiao, Chun-hua; Wang, Fang-yong; Chen, Bing; Wang, Ke-ru; Li, Shao-kun
2013-01-01
Crop agronomic parameters (leaf area index (LAI), nitrogen (N) uptake, total chlorophyll (Chl) content) are very important for the prediction of crop growth. The objective of this experiment was to investigate whether wheat LAI, N uptake, and total Chl content could be accurately predicted using spectral indices collected at different stages of wheat growth. Firstly, the product of the optimized soil-adjusted vegetation index and wheat biomass dry weight (OSAVI×BDW) was used to estimate LAI, N uptake, and total Chl content; secondly, BDW was replaced by spectral indices to establish new spectral indices (OSAVI×OSAVI, OSAVI×SIPI, OSAVI×CIred edge, OSAVI×CIgreen mode and OSAVI×EVI2); finally, we used the new spectral indices to estimate LAI, N uptake, and total Chl content. The results showed that the new spectral indices could be used to accurately estimate LAI, N uptake, and total Chl content. The highest R2 and the lowest RMSEs were 0.711 and 0.78 (OSAVI×EVI2), 0.785 and 3.98 g/m2 (OSAVI×CIred edge), and 0.846 and 0.65 g/m2 (OSAVI×CIred edge) for LAI, nitrogen uptake, and total Chl content, respectively. The new spectral indices performed better than the OSAVI alone, and the problems of a lack of sensitivity at earlier growth stages and saturation at later growth stages, which are typically associated with the OSAVI, were reduced. The overall results indicated that these new spectral indices provided the best approximation for the estimation of agronomic indices for all growth stages of wheat. PMID:24023639
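The combined indices are simple band arithmetic; a sketch with hypothetical red and near-infrared reflectances (the OSAVI and EVI2 formulas below are the standard published ones, but the numbers are illustrative):

```python
import numpy as np

def osavi(nir, red):
    """Optimized soil-adjusted vegetation index: (NIR-R)/(NIR+R+0.16)."""
    return (nir - red) / (nir + red + 0.16)

def evi2(nir, red):
    """Two-band enhanced vegetation index: 2.5*(NIR-R)/(NIR+2.4*R+1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# hypothetical canopy reflectances at two growth stages
nir = np.array([0.45, 0.50])
red = np.array([0.08, 0.05])

# the combined OSAVI×EVI2 index of the paper
index = osavi(nir, red) * evi2(nir, red)
# LAI (or N uptake, Chl) would then be regressed against `index`
print(index)
```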
Excitations for Rapidly Estimating Flight-Control Parameters
NASA Technical Reports Server (NTRS)
Moes, Tim; Smith, Mark; Morelli, Gene
2006-01-01
A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the
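The stacked sine-wave idea can be sketched as follows; the frequencies, amplitudes, and sample rate are illustrative choices, not the actual flight-test values:

```python
import numpy as np

def presise_input(freqs_hz, amps, duration=15.0, rate=200.0):
    """Stacked sine-wave excitation for one control surface: a sum of
    sines at distinct frequencies, as in the PreSISE maneuvers."""
    t = np.arange(0, duration, 1.0 / rate)
    u = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs_hz, amps))
    return t, u

# different surfaces get disjoint frequency sets so that their effects
# on the aircraft motion can be separated in the estimation step
t, canard = presise_input([0.4, 1.1], [1.0, 0.5])
_, stabilator = presise_input([0.7, 1.6], [1.0, 0.5])
print(canard.shape)   # 15 s at 200 Hz -> 3000 samples
```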
US-based Drug Cost Parameter Estimation for Economic Evaluations
Levy, Joseph F; Meek, Patrick D; Rosenberg, Marjorie A
2014-01-01
Introduction In the US, more than 10% of national health expenditures are for prescription drugs. Assessing drug costs in US economic evaluation studies is not consistent, as the true acquisition cost of a drug is not known by decision modelers. Current US practice focuses on identifying one reasonable drug cost and imposing some distributional assumption to assess uncertainty. Methods We propose a set of Rules based on current pharmacy practice that account for the heterogeneity of drug product costs. The set of products derived from our Rules, and their associated costs, form an empirical distribution that can be used for more realistic sensitivity analyses, and create transparency in drug cost parameter computation. The Rules specify an algorithmic process to select clinically equivalent drug products that reduce pill burden, use an appropriate package size, and assume uniform weighting of substitutable products. Three diverse examples show derived empirical distributions and are compared with previously reported cost estimates. Results The shapes of the empirical distributions among the three drugs differ dramatically, including multiple modes and different variation. Previously published estimates differed from the means of the empirical distributions. Published ranges for sensitivity analyses did not cover the ranges of the empirical distributions. In one example using lisinopril, the empirical mean cost of substitutable products was $444 (range $23–$953) as compared to a published estimate of $305 (range $51–$523). Conclusions Our Rules create a simple and transparent approach to create cost estimates of drug products and assess their variability. The approach is easily modified to include a subset of, or different weighting for, substitutable products. The derived empirical distribution is easily incorporated into one-way or probabilistic sensitivity analyses. PMID:25532826
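The use of such an empirical distribution in a probabilistic sensitivity analysis can be sketched as follows; the cost values are hypothetical stand-ins for substitutable products selected by the Rules, with uniform product weights as the paper assumes:

```python
import numpy as np

# hypothetical costs of substitutable drug products selected by the Rules
costs = np.array([23.0, 51.0, 120.0, 305.0, 444.0, 523.0, 700.0, 953.0])

# probabilistic sensitivity analysis: draw costs uniformly from the
# empirical set instead of imposing a parametric distribution
rng = np.random.default_rng(42)
draws = rng.choice(costs, size=10_000, replace=True)
print(draws.mean(), np.percentile(draws, [2.5, 97.5]))
```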
NASA Astrophysics Data System (ADS)
Coskun, Orhan
The traditional method of sending a training signal to identify a channel, followed by data, may be viewed as a simple code for the unknown channel. Results in blind sequence detection suggest that performance similar to this traditional approach can be obtained without training. However, for short packets and/or time-recursive algorithms, significant error floors exist due to the existence of sequences that are indistinguishable without knowledge of the channel. In this work, we first reconsider training signal design in light of recent results in blind sequence detection. We design training codes which combine modulation and training. In order to design these codes, we find an expression for the pairwise error probability of the joint maximum likelihood (JML) channel and sequence estimator. This expression motivates a pairwise distance for the JML receiver based on principal angles between the range spaces of data matrices. The general code design problem (generalized sphere packing) is formulated as the clique problem associated with an unweighted, undirected graph. We provide optimal and heuristic algorithms for this clique problem. For short packets, we demonstrate that significant improvements are possible by jointly considering the design of the training, modulation, and receiver processing. As a practical blind data detection example, data reception in a fiber optical channel is investigated. To get the most out of the data detection methods, auxiliary algorithms such as sampling phase adjustment and decision threshold estimation are suggested. For the parallel implementation of detectors, a semiring structure is introduced for both the decision feedback equalizer (DFE) and maximum likelihood sequence detection (MLSD). Timing jitter is another parameter that affects the BER performance of the system. A data-aided clock recovery algorithm reduces the jitter of
Accurate and robust estimation of camera parameters using RANSAC
NASA Astrophysics Data System (ADS)
Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He
2013-03-01
Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to the inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect the unreliability and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulative and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that the proposed method is robust under large-noise conditions and quite efficient at improving the calibration accuracy compared with the original state.
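The outlier-rejection idea can be sketched with a minimal RANSAC loop; a straight-line model stands in for the full camera model, and the iteration count and inlier tolerance are illustrative:

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.1, rng=None):
    """Minimal RANSAC: fit y = a*x + b while rejecting gross outliers,
    analogous to rejecting badly localized calibration features."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(x.size, dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(x.size, 2, replace=False)   # minimal sample
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol       # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares fit on the consensus set only
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b, best_inliers

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.02, 100)
y[::10] += 5.0                      # inject gross outliers
a, b, inl = ransac_line(x, y, rng=rng)
print(a, b)   # close to the true 2.0 and 1.0
```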
Multiphase flow parameter estimation based on laser scattering
NASA Astrophysics Data System (ADS)
Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.
2015-07-01
The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied in safety applications due to its quick response time.
Enhancing parameter precision of optimal quantum estimation by quantum screening
NASA Astrophysics Data System (ADS)
Jiang, Huang; You-Neng, Guo; Qin, Xie
2016-02-01
We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system that inhibits the decoherence processes and erases the excited state to the ground state. Compared with the case without quantum screening, the results show that the dynamics of quantum Fisher information with quantum screening maintains a larger value during the evolution processes. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).
Estimating Phenomenological Parameters in Multi-Assets Markets
NASA Astrophysics Data System (ADS)
Raffaelli, Giacomo; Marsili, Matteo
Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameter can be estimated in real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.
Cosmological parameter estimation with large scale structure observations
NASA Astrophysics Data System (ADS)
Di Dio, Enea; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien
2014-01-01
We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, Cl(z1,z2), calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard P(k) analysis with the new Cl(z1,z2) method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the P(k) analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information, is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, C0(z1,z2).
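A Fisher matrix forecast follows the same recipe at any scale: differentiate the observable with respect to each parameter, weight by the noise, and invert. A toy sketch with a hypothetical two-parameter power spectrum and diagonal noise (not the relativistic Cl(z1,z2) of the paper):

```python
import numpy as np

def model_cl(params, l):
    """Toy angular power spectrum: amplitude A and tilt n (hypothetical)."""
    A, n = params
    return A * (l / 100.0) ** n

def fisher_matrix(params, ls, sigma, eps=1e-5):
    """F_ij = sum_l (dC_l/dp_i)(dC_l/dp_j) / sigma_l^2,
    with derivatives taken by central finite differences."""
    p = np.asarray(params, dtype=float)
    grads = []
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        grads.append((model_cl(p + dp, ls) - model_cl(p - dp, ls)) / (2 * eps))
    G = np.array(grads)                      # shape (n_params, n_l)
    return G @ np.diag(1.0 / sigma**2) @ G.T

ls = np.arange(10, 500)
F = fisher_matrix([1.0, 0.96], ls, sigma=0.05 * np.ones(ls.size))
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # forecast 1-sigma errors
print(errors)
```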
Simplified horn antenna parameter estimation using selective criteria
Ewing, P.D.
1991-01-01
An approximation can be used to avoid the complex mathematics and computation methods typically required for calculating the gain and radiation pattern of electromagnetic horn antenna. Because of the curvature of the antenna wave front, calculations using conventional techniques involve solving the Fresnel integrals and using computer-aided numerical integration. With this model, linear approximations give a reasonable estimate of the gain and radiation pattern using simple trigonometric functions, thereby allowing a hand calculator to replace the computer. Applying selected criteria, the case of the E-plane horn antenna was used to evaluate this technique. Results showed that the gain approximation holds for an antenna flare angle of less than 10{degree} for typical antenna dimensions, and the E field radiation pattern approximation holds until the antenna's phase error approaches 60{degree}, both within typical design parameters. This technique is a useful engineering tool. 4 refs., 11 figs.
Multiangle dynamic light scattering analysis using an improved recursion algorithm
NASA Astrophysics Data System (ADS)
Li, Lei; Li, Wei; Wang, Wanyan; Zeng, Xianjiang; Chen, Junyao; Du, Peng; Yang, Kecheng
2015-10-01
Multiangle dynamic light scattering (MDLS) compensates for the low information content of a single-angle dynamic light scattering (DLS) measurement by combining the light intensity autocorrelation functions from a number of measurement angles. Reliable estimation of the PSD from MDLS measurements requires accurate determination of the weighting coefficients and an appropriate inversion method. We propose the Recursion Nonnegative Phillips-Twomey (RNNPT) algorithm, which is insensitive to noise in the correlation function data, for PSD reconstruction from MDLS measurements. The procedure includes two main steps: 1) calculation of the weighting coefficients by the recursion method, and 2) PSD estimation through the RNNPT algorithm. Suitable regularization parameters for the algorithm were obtained using the MR-L-curve, since the overall computational cost of this method is considerably less than that of the L-curve for large problems. Furthermore, the convergence behavior of the MR-L-curve method is in general superior to that of the L-curve method, and its error is monotonically decreasing. First, the method was evaluated on simulated unimodal and multimodal lognormal PSDs. For comparison, reconstruction results obtained with a classical regularization method were included. Then, to further study the stability and sensitivity of the proposed method, all examples were analyzed using correlation function data with different levels of noise. The simulated results showed that the RNNPT method yields more accurate PSD determinations from MDLS than the classical regularization method for both unimodal and multimodal PSDs.
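The regularized nonnegative inversion at the core of such methods can be sketched as a plain nonnegative Phillips-Twomey step on a toy kernel (without the recursion step or the MR-L-curve parameter selection; the kernel, grid, and regularization parameter are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def nonneg_phillips_twomey(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||L x||^2 subject to x >= 0, where L is
    the second-difference (smoothing) operator, by stacking the regularizer
    under A and calling nonnegative least squares."""
    n = A.shape[1]
    L = np.diff(np.eye(n), 2, axis=0)          # second-difference matrix
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, _ = nnls(A_aug, b_aug)
    return x

# toy ill-posed problem: a smooth kernel blurring a nonnegative "PSD"
n = 50
grid = np.linspace(0, 1, n)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.01)
x_true = np.exp(-((grid - 0.5) ** 2) / 0.02)
b = A @ x_true + np.random.default_rng(0).normal(0, 1e-3, n)

x_hat = nonneg_phillips_twomey(A, b, lam=1e-4)
print(np.max(np.abs(x_hat - x_true)))
```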
Transient analysis of intercalation electrodes for parameter estimation
NASA Astrophysics Data System (ADS)
Devan, Sheba
An essential part of integrating batteries as power sources in any application, be it a large scale automotive application or a small scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with the microprocessor based BMS (called "smart battery") helps prolong the life of the battery by operating in the optimal regime and provides accurate information regarding the battery to the end user. The main purposes of BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS should be prompt, which requires the methodology of extracting the parameters to be efficient in time. The traditional transient techniques applied so far may not be suitable due to reasons such as the inability to apply these techniques when the battery is under operation, long experimental time, etc. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short time response to a sinusoidal input perturbation, in the time domain is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short time response and the input perturbation are transformed into the frequency domain using Fast Fourier Transform
Bayesian parameter estimation for stochastic models of biological cell migration
NASA Astrophysics Data System (ADS)
Dieterich, Peter; Preuss, Roland
2013-08-01
Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
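The covariance-based idea can be sketched for Brownian motion with drift, the simpler of the two models named above; ensemble size, time step, and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_steps, dt = 200, 100, 1.0
v_true, D_true = 0.5, 1.0          # drift speed and diffusion coefficient

# simulate 1-D Brownian motion with drift for an ensemble of cells
steps = v_true * dt + rng.normal(0, np.sqrt(2 * D_true * dt),
                                 (n_cells, n_steps))
paths = np.cumsum(steps, axis=1)
t = dt * np.arange(1, n_steps + 1)

# drift from the ensemble-mean displacement
v_hat = np.mean(paths[:, -1]) / t[-1]
resid = paths - v_hat * t           # drift-removed trajectories
# for Brownian motion Var[x(t)] = 2 D t, so average the ratio over times
D_hat = np.mean(resid.var(axis=0, ddof=1) / (2 * t))
print(v_hat, D_hat)   # near 0.5 and 1.0
```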
ESTIMATION OF THE VISCOSITY PARAMETER IN ACCRETION DISKS OF BLAZARS
Xie, Z. H.; Ma, L.; Zhang, X.; Du, L. M.; Hao, J. M.; Yi, T. F.; Qiao, E. L.
2009-12-20
For an optically monitored blazar sample whose typical minimum variability timescale is about 1 hr, we estimate a mean value of the viscosity parameter in the accretion disk. We assume that optical variability on timescales of hours is caused by local instabilities in the inner accretion disk. Comparing the observed variability timescales to the thermal timescales of α-disk models, we obtain constraints on the viscosity parameter α and the intrinsic Eddington ratio L^in/L_Edd = ṁ: 0.104 ≤ α ≤ 0.337 and 0.0201 ≤ L^in/L_Edd ≤ 0.1646. These narrow ranges suggest that all these blazars are observed in a single state, and thus provide new evidence for the unification of flat-spectrum radio quasars and BL Lacs into a single blazar population. The values of α we derive are consistent with the theoretical expectation α ≈ 0.1-0.3 of Narayan and McClintock for advection-dominated accretion flow and are also compatible with the predictions (α ≥ 0.1) of Pessah et al. from numerical simulations in which magnetohydrodynamic turbulence is driven by the saturated magnetorotational instability.
Forage quantity estimation from MERIS using band depth parameters
NASA Astrophysics Data System (ADS)
Ullah, Saleem; Yali, Si; Schlerf, Martin
Forage quantity is an important factor influencing the feeding pattern and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for the estimation of green biomass using MERIS data. Green biomass was best predicted by the NBDI (normalized band depth index), which yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47% of the measured mean), compared with a much lower accuracy obtained by the soil-adjusted vegetation index SAVI (444.6 g/m2, 154% of the mean) and by other vegetation indices. This study will contribute to mapping and monitoring foliar biomass over the year at regional scale, which in turn can aid the understanding of bird migration patterns. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu
2012-01-01
While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
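Fitting the Michaelis-Menten parameters Vmax and Km from assay data is a short nonlinear regression; the substrate concentrations, rates, and units below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    """Reaction velocity as a function of substrate concentration S:
    v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

# hypothetical enzyme assay data (substrate in mM, rate in umol/h/g),
# generated from known parameters with 3% multiplicative noise
S = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])
v = michaelis_menten(S, Vmax=12.0, Km=0.8)
v *= 1 + np.random.default_rng(7).normal(0, 0.03, S.size)

(Vmax_hat, Km_hat), _ = curve_fit(michaelis_menten, S, v, p0=(10.0, 1.0))
print(Vmax_hat, Km_hat)   # near 12.0 and 0.8
```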
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
Pharmacokinetic parameter estimations by minimum relative entropy method.
Amisaki, T; Eguchi, S
1995-10-01
For estimating pharmacokinetic parameters, we introduce the minimum relative entropy (MRE) method and compare its performance with least squares methods. There are several variants of least squares, such as ordinary least squares (OLS), weighted least squares, and iteratively reweighted least squares. In addition to these traditional methods, even extended least squares (ELS), a relatively new approach to nonlinear regression analysis, can be regarded as a variant of least squares. These methods differ from each other in their manner of handling weights. It has been recognized that least squares methods with an inadequate weighting scheme may produce misleading results (the "choice of weights" problem). Although least squares with uniform weights, i.e., OLS, is rarely used in pharmacokinetic analysis, it embodies the underlying principle of least squares. The objective function of OLS can be regarded as a distance between observed and theoretical pharmacokinetic values in the Euclidean space R^N, where N is the number of observations. Thus OLS produces its estimates by minimizing the Euclidean distance. On the other hand, MRE works by minimizing the relative entropy, which expresses the discrepancy between two probability densities. Because pharmacokinetic functions are not density functions in general, we use a particular form of the relative entropy whose domain is extended to the space of all positive functions. MRE never assumes any distribution of the errors involved in the observations. Thus, it can be a possible solution to the choice of weights problem. Moreover, since the mathematical form of the relative entropy, i.e., an expectation of the log-ratio of two probability density functions, is different from that of a usual Euclidean distance, the behavior of MRE may differ from those of least squares methods. To clarify the behavior of MRE, we have compared the performance of MRE with those of ELS and OLS by carrying out an intensive simulation study, where four pharmaco
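The contrast between the two objective functions can be sketched in a few lines. The one-compartment model C(t) = C0*exp(-k*t), the data, and the starting values below are illustrative assumptions, not the paper's simulation design:

```python
# Compare an MRE fit (extended relative entropy over positive functions)
# with an OLS fit for a hypothetical one-compartment model. The data are
# simulated, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([9.1, 8.3, 6.6, 4.4, 1.9])      # noisy "observed" concentrations

def model(p):
    c0, k = p
    return c0 * np.exp(-k * t)

def i_divergence(p):
    # MRE objective: relative entropy extended to positive (non-density) functions
    f = model(p)
    return np.sum(y * np.log(y / f) - y + f)

def sse(p):
    # OLS objective: squared Euclidean distance in R^N
    return np.sum((y - model(p)) ** 2)

p_mre = minimize(i_divergence, x0=[10.0, 0.2], method="Nelder-Mead").x
p_ols = minimize(sse, x0=[10.0, 0.2], method="Nelder-Mead").x
print("MRE (C0, k):", p_mre)
print("OLS (C0, k):", p_ols)
```

With well-behaved data the two estimates are close; they diverge when the error structure departs from the implicit uniform weighting of OLS.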
Recursive computer architecture for VLSI
Treleaven, P.C.; Hopkins, R.P.
1982-01-01
A general-purpose computer architecture based on the concept of recursion and suitable for VLSI computer systems built from replicated (lego-like) computing elements is presented. The recursive computer architecture is defined by presenting a program organisation, a machine organisation and an experimental machine implementation oriented to VLSI. The experimental implementation is being restricted to simple, identical microcomputers each containing a memory, a processor and a communications capability. This future generation of lego-like computer systems is termed fifth generation computers by the Japanese. 30 references.
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
Parameter estimation for boundary value problems by integral equations of the second kind
NASA Technical Reports Server (NTRS)
Kojima, Fumio
1988-01-01
This paper is concerned with the parameter estimation for boundary integral equations of the second kind. The parameter estimation technique through use of the spline collocation method is proposed. Based on the compactness assumption imposed on the parameter space, the convergence analysis for the numerical method of parameter estimation is discussed. The results obtained here are applied to a boundary parameter estimation for 2-D elliptic systems.
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Technical Reports Server (NTRS)
Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)
2002-01-01
, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.
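The MC-versus-LHS comparison rests on the lower sampling variance of stratified designs. A toy illustration, not the NESSUS implementation; the response function and sample sizes are invented:

```python
# Compare the spread of mean estimates from plain Monte Carlo and Latin
# hypercube sampling (LHS) for a simple invented response on [0,1]^2.
import numpy as np

rng = np.random.default_rng(0)

def response(x):
    return x[:, 0] + x[:, 1] ** 2

def mc_sample(n):
    return rng.random((n, 2))

def lhs_sample(n):
    # one point per equal-probability stratum in each dimension,
    # with independently permuted strata per dimension
    u = (np.arange(n)[:, None] + rng.random((n, 2))) / n
    for j in range(2):
        u[:, j] = rng.permutation(u[:, j])
    return u

n, reps = 50, 200
mc_means = np.array([response(mc_sample(n)).mean() for _ in range(reps)])
lhs_means = np.array([response(lhs_sample(n)).mean() for _ in range(reps)])
print("MC  std of the mean estimate:", mc_means.std())
print("LHS std of the mean estimate:", lhs_means.std())
```

For a fixed number of response evaluations, the LHS mean estimates scatter far less around the true value (5/6 here), which is the effect the abstract describes.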
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Astrophysics Data System (ADS)
Godines, Cody R.; Manteufel, Randall D.
2002-12-01
, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable enhancement of the program.
Quantiles, parametric-select density estimation, and bi-information parameter estimators
NASA Technical Reports Server (NTRS)
Parzen, E.
1982-01-01
A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of any identified model) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for the entropy of a probability density and the entropy of a quantile density, respectively. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Use of Dual-wavelength Radar for Snow Parameter Estimates
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew
2005-01-01
Use of dual-wavelength radar, with properly chosen wavelengths, will significantly lessen the ambiguities in the retrieval of microphysical properties of hydrometeors. In this paper, a dual-wavelength algorithm is described to estimate the characteristic parameters of the snow size distributions. An analysis of the computational results, made at X and Ka bands (T-39 airborne radar) and at S and X bands (CP-2 ground-based radar), indicates that valid estimates of the median volume diameter of snow particles, D(sub 0), should be possible if one of the two wavelengths of the radar operates in the non-Rayleigh scattering region. However, the accuracy may be affected to some extent if the shape factors of the Gamma function used for describing the particle distribution are chosen far from the true values or if cloud water attenuation is significant. To examine the validity and accuracy of the dual-wavelength radar algorithms, the algorithms are applied to data taken from the Convective and Precipitation-Electrification Experiment (CaPE) in 1991, in which the dual-wavelength airborne radar was coordinated with in situ aircraft particle observations and ground-based radar measurements. After careful co-registration of the data obtained from the different platforms, the airborne radar-derived size distributions are compared with the in situ measurements and the ground-based radar estimates. Good agreement is found in these comparisons despite the uncertainties resulting from mismatches of the sample volumes among the different sensors as well as spatial and temporal offsets.
Neural Models: An Option to Estimate Seismic Parameters of Accelerograms
NASA Astrophysics Data System (ADS)
Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.
2014-12-01
Seismic instrumentation for recording strong earthquakes in Mexico dates back to the 1960s, owing to activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the large earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed to monitor seismic activity, mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun their seismic instrumentation programs, or whose programs are still in development. Because of this situation, some relevant earthquakes (e.g., Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in cities such as Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, good maintenance work on the seismic network has permitted the recording of an important number of small events in those cities. In this research we therefore present a methodology based on neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth, and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. To train the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as soft computing tools using a multi-layer feed-forward architecture, provide good estimates of the target parameters and have good predictive capacity for strong ground motion duration and response spectra.
How Learning Logic Programming Affects Recursion Comprehension
ERIC Educational Resources Information Center
Haberman, Bruria
2004-01-01
Recursion is a central concept in computer science, yet it is difficult for beginners to comprehend. Israeli high-school students learn recursion in the framework of a special modular program in computer science (Gal-Ezer & Harel, 1999). Some of them are introduced to the concept of recursion in two different paradigms: the procedural programming…
Anaerobic biodegradability of fish remains: experimental investigation and parameter estimation.
Donoso-Bravo, Andres; Bindels, Francoise; Gerin, Patrick A; Vande Wouwer, Alain
2015-01-01
The generation of organic waste associated with aquaculture fish processing has increased significantly in recent decades. The objective of this study is to evaluate the anaerobic biodegradability of several fish processing fractions, as well as water treatment sludge, for tilapia and sturgeon species cultured in recirculated aquaculture systems. After substrate characterization, the ultimate biodegradability and the hydrolytic rate were estimated by fitting a first-order kinetic model with the biogas production profiles. In general, the first-order model was able to reproduce the biogas profiles properly with a high correlation coefficient. In the case of tilapia, the skin/fin, viscera, head and flesh presented a high level of biodegradability, above 310 mL CH₄ g COD⁻¹, whereas the head and bones showed a low hydrolytic rate. For sturgeon, the results for all fractions were quite similar in terms of both parameters, although viscera presented the lowest values. Both the substrate characterization and the kinetic analysis of the anaerobic degradation may be used as design criteria for implementing anaerobic digestion in a recirculating aquaculture system. PMID:25812103
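The kinetic fit described above can be sketched as follows; the first-order model is the one named in the abstract, while the biogas profile and starting values are invented for illustration:

```python
# Fit a first-order kinetic model B(t) = B0*(1 - exp(-kh*t)) to a
# cumulative methane production profile. B0 approximates the ultimate
# biodegradability, kh the hydrolytic rate. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([1.0, 2.0, 4.0, 7.0, 10.0, 15.0, 20.0, 30.0])
ch4 = np.array([60.0, 110.0, 185.0, 250.0, 285.0, 310.0, 318.0, 324.0])  # mL CH4 / g COD

def first_order(t, b0, kh):
    return b0 * (1.0 - np.exp(-kh * t))

(b0, kh), _ = curve_fit(first_order, days, ch4, p0=[300.0, 0.1])
r = np.corrcoef(ch4, first_order(days, b0, kh))[0, 1]
print(f"B0 ~ {b0:.0f} mL CH4/gCOD, kh ~ {kh:.2f} d^-1, r = {r:.3f}")
```

The fitted B0 and kh play exactly the roles of the two parameters reported per fraction in the study, and the correlation coefficient r quantifies how well the first-order model reproduces the profile.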
Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy
NASA Astrophysics Data System (ADS)
Kniffin, Gabriel Paul
Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.
Estimation of cosmological parameters using adaptive importance sampling
Wraith, Darren; Kilbinger, Martin; Benabed, Karim; Prunet, Simon; Cappe, Olivier; Fort, Gersende; Cardoso, Jean-Francois; Robert, Christian P.
2009-07-15
We present a Bayesian sampling algorithm called adaptive importance sampling or population Monte Carlo (PMC), whose computational workload is easily parallelizable and thus has the potential to considerably reduce the wall-clock time required for sampling, along with providing other benefits. To assess the performance of the approach for cosmological problems, we use simulated and actual data consisting of CMB anisotropies, supernovae of type Ia, and weak cosmological lensing, and provide a comparison of results to those obtained using state-of-the-art Markov chain Monte Carlo (MCMC). For both types of data sets, we find comparable parameter estimates for PMC and MCMC, with the advantage of a significantly lower wall-clock time for PMC. In the case of WMAP5 data, for example, the wall-clock time scale reduces from days for MCMC to hours using PMC on a cluster of processors. Other benefits of the PMC approach, along with potential difficulties in using the approach, are analyzed and discussed.
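The core PMC loop is short enough to sketch. This toy version targets a 2-D Gaussian "posterior" rather than a cosmological likelihood; the target, sample size, and iteration count are all assumptions:

```python
# Population Monte Carlo (adaptive importance sampling) sketch: the
# Gaussian proposal's mean and covariance are re-fitted each iteration
# from the importance-weighted sample. Each iteration's 2000 draws are
# independent and could be evaluated in parallel, which is the source
# of the wall-clock advantage described above.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)
target = mvn(mean=[2.0, -1.0], cov=[[1.0, 0.3], [0.3, 0.5]])  # stand-in "posterior"

mu, cov = np.zeros(2), 4.0 * np.eye(2)       # deliberately poor initial proposal
for _ in range(8):
    x = rng.multivariate_normal(mu, cov, size=2000)
    logw = target.logpdf(x) - mvn.logpdf(x, mean=mu, cov=cov)
    w = np.exp(logw - logw.max())            # stabilized importance weights
    w /= w.sum()
    mu = w @ x                               # importance-weighted mean
    d = x - mu
    cov = (w[:, None] * d).T @ d             # importance-weighted covariance

print("PMC estimate of the posterior mean:", mu)
```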
Cosmological parameter estimation with large scale structure observations
Dio, Enea Di; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien E-mail: Francesco.Montanari@unige.ch E-mail: Julien.Lesgourgues@cern.ch
2014-01-01
We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, C{sub ℓ}(z{sub 1},z{sub 2}), calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard P(k) analysis with the new C{sub ℓ}(z{sub 1},z{sub 2}) method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the P(k) analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution, and we study the monopole, C{sub 0}(z{sub 1},z{sub 2})
Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1996
NASA Astrophysics Data System (ADS)
Sovers, O. J.; Jacobs, Christopher S.
1996-08-01
The current theoretical model of radio interferometric delays and delay rates observed in very long baseline interferometry experiments is discussed in detail. Modeling the time delay consists of a number of steps. First, the locations of the observing stations are expressed in an Earth-fixed coordinate frame at the time that the incoming wave front reaches the reference station. These station coordinates are modified by Earth-fixed effects, such as tides and tectonic motion. Next, a transformation to a celestial coordinate system moving with the Earth accounts for the Earth's precession and nutation in inertial space. A relativistic transformation then brings these coordinates into a frame centered at the center of mass of the solar system. The time delays are calculated in this solar system barycentric frame, including corrections to account for the extended structure of the source and the gravitational delay of the signal. Finally, the delay is transformed back to the celestial geocentric frame and corrected for additional delays of the signal caused by components of the Earth's atmosphere. Partial derivatives of the observables with respect to the numerous parameters entering the model components are also given. This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software "MODEST"--1994, dated August 1994. It supersedes that document and its five previous versions (1983, 1985, 1986, 1987, and 1991). Numerous portions of the very long baseline interferometry (VLBI) model were improved in MODEST from 1994 to 1996. For various aspects of the geometric delay, improved expressions for the geodetic latitude and station altitude are now used, along with more recent values of the Earth's radius and rotation rate. The equation of equinoxes can now be selected to be the IERS-92 expression, plus its 1997 extension. Models for the tidal response of the Earth orientation now include Dickman's revision (UT1S) of Yoder et
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Jacobs, C. S.
1994-01-01
This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K(sub 1) correction' for solid earth tides has been extended to include analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.
Estimation of uranium migration parameters in sandstone aquifers.
Malov, A I
2016-03-01
The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L(-1), and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The (14)C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the (234)U:(238)U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and (234)U:(238)U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a(-1)) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The (14)C age of the water was estimated to be between modern and >35,000 years. The (234)U-(238)U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10(-5) to n × 10(-7) a(-1). This finding is observed because the TDS increases in that direction from 0.2 to 9 g L(-1), and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼ 24, i.e., sorption processes are
ERIC Educational Resources Information Center
Sireci, Stephen G.
Whether item response theory (IRT) is useful to the small-scale testing practitioner is examined. The stability of IRT item parameters is evaluated with respect to the classical item parameters (i.e., p-values, biserials) obtained from the same data set. Previous research investigating the effect of sample size on IRT parameter estimation has…
Variational methods to estimate terrestrial ecosystem model parameters
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF, 4D-Var) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Astrophysics Data System (ADS)
Nejad, S.; Gladwin, D. T.; Stone, D. A.
2016-06-01
This paper presents a systematic review of the most commonly used lumped-parameter equivalent circuit model structures in lithium-ion battery energy storage applications. These models include the Combined model, the Rint model, two hysteresis models, Randles' model, a modified Randles' model, and two resistor-capacitor (RC) network models with and without hysteresis included. Two variations of the lithium-ion cell chemistry, namely lithium iron phosphate (LiFePO4) and lithium nickel-manganese-cobalt oxide (LiNMC), are used for testing purposes. The model parameters and states are recursively estimated using a nonlinear system identification technique based on the dual extended Kalman filter (dual-EKF) algorithm. The dynamic performance of the model structures is verified using the results obtained from a self-designed pulsed-current test and an electric vehicle (EV) drive cycle based on the New European Drive Cycle (NEDC) profile over a range of operating temperatures. Analyses of the ten model structures are conducted with respect to state-of-charge (SOC) and state-of-power (SOP) estimation with erroneous initial conditions. Comparatively, both RC model structures provide the best dynamic performance, with outstanding SOC estimation accuracy. For cell chemistries with large inherent hysteresis levels (e.g., LiFePO4), the RC model with only one time constant is combined with a dynamic hysteresis model to further enhance the performance of the SOC estimator.
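A much-reduced sketch of the recursive estimation idea: a single EKF tracking SOC on a one-RC equivalent-circuit model from an erroneous initial condition. The paper's dual-EKF additionally estimates the parameters themselves; the OCV map, cell parameters, and current profile here are invented:

```python
# EKF on a one-RC equivalent-circuit model. State x = [SOC, v_RC];
# measurement is terminal voltage. All numbers are illustrative.
import numpy as np

dt, Q = 1.0, 3600.0                     # step [s], capacity [As] (1 Ah, assumed)
R0, R1, C1 = 0.05, 0.02, 2000.0         # series / RC parameters (assumed)
a = np.exp(-dt / (R1 * C1))

def ocv(soc):
    return 3.0 + 1.2 * soc              # assumed linear OCV-SOC map

def step(x, i):
    soc, vrc = x                        # coulomb counting + RC relaxation
    return np.array([soc - i * dt / Q, a * vrc + R1 * (1.0 - a) * i])

def measure(x, i):
    return ocv(x[0]) - x[1] - R0 * i    # terminal voltage

rng = np.random.default_rng(0)
x_true = np.array([0.9, 0.0])           # true initial state
x_hat = np.array([0.5, 0.0])            # erroneous initial SOC estimate
P = np.diag([0.1, 0.01])
F = np.array([[1.0, 0.0], [0.0, a]])    # state Jacobian
Hk = np.array([[1.2, -1.0]])            # measurement Jacobian (constant here)
Qn, Rn = np.diag([1e-7, 1e-6]), 1e-4

for _ in range(600):
    i = 1.0                             # constant 1 A discharge
    x_true = step(x_true, i)
    v = measure(x_true, i) + rng.normal(0.0, 0.002)   # noisy voltage reading
    x_hat = step(x_hat, i)              # EKF predict
    P = F @ P @ F.T + Qn
    K = P @ Hk.T / (Hk @ P @ Hk.T + Rn) # EKF update
    x_hat = x_hat + (K * (v - measure(x_hat, i))).ravel()
    P = (np.eye(2) - K @ Hk) @ P

print(f"true SOC {x_true[0]:.3f}, estimated SOC {x_hat[0]:.3f}")
```

The dual-EKF used in the paper runs a second filter of this form over the parameter vector (R0, R1, C1, ...) in parallel with the state filter.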
Recursive delay calculation unit for parametric beamformer
NASA Astrophysics Data System (ADS)
Nikolov, Svetoslav I.; Jensen, Jørgen A.; Tomov, Borislav
2006-03-01
This paper presents a recursive approach to parametric delay calculation for a beamformer. The suggested calculation procedure is capable of calculating the delays for any image line defined by an origin and an arbitrary direction. It involves only add and shift operations, making it suitable for hardware implementation. One delay-calculation unit (DCU) needs 4 parameters, and all operations can be implemented using fixed-point arithmetic. An N-channel system needs N + 1 DCUs per line: one for the distance from the transmit origin to the image point and N for the distances from the image point to each of the receivers. Each DCU recursively calculates the square of the distance between a transducer element and a point on the beamformed line. It then finds the approximate square root, using the distance to point i as the initial guess for point i + 1. Using fixed-point calculations with 36-bit precision gives an error in the delay calculations on the order of 1/64 samples at a sampling frequency of fs = 40 MHz. The circuit has been synthesized for a Virtex II Pro device (speed grade 6) in two versions, pipelined and non-pipelined, producing 150 and 30 million delays per second, respectively. The non-pipelined circuit occupies about 0.5% of the FPGA resources and the pipelined one about 1%. When the square root is found with a pipelined CORDIC processor, 2% of the FPGA slices are used to deliver 150 million delays per second.
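The recursion can be illustrated in floating point (the 36-bit fixed-point details of the actual hardware are omitted): the squared element-to-point distance along a line is quadratic in the point index, so two additions update it, and one Newton step seeded with the previous distance approximates the square root. Geometry and step sizes below are invented:

```python
# Recursive distance calculation along an image line, in the spirit of
# the DCU described above: adds for the squared distance, one Newton
# iteration for the square root.
import math

def recursive_distances(x0, y0, dx, dy, xe, ye, n):
    # squared distance to the first point and its first/second differences
    rx, ry = x0 - xe, y0 - ye
    r2 = rx * rx + ry * ry
    d1 = 2.0 * (rx * dx + ry * dy) + dx * dx + dy * dy   # r2[i+1] - r2[i] at i = 0
    d2 = 2.0 * (dx * dx + dy * dy)                        # constant second difference
    dist = math.sqrt(r2)        # exact square root only for the first point
    out = []
    for _ in range(n):
        out.append(dist)
        r2 += d1                # advance the squared distance: additions only
        d1 += d2
        dist = 0.5 * (dist + r2 / dist)   # one Newton step, seeded by the previous distance
    return out

approx = recursive_distances(0.0, 0.0, 0.1, 0.1, 5.0, 0.0, 50)
exact = [math.hypot(i * 0.1 - 5.0, i * 0.1) for i in range(50)]
err = max(abs(a - e) for a, e in zip(approx, exact))
print("max abs distance error:", err)
```

Because consecutive points are close, the previous distance is an excellent Newton seed, so a single iteration suffices; in hardware the division would itself be replaced by shift-and-add logic.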
Improvement in Recursive Hierarchical Segmentation of Data
NASA Technical Reports Server (NTRS)
Tilton, James C.
2006-01-01
A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was the addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent-region and spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.
Core Recursive Hierarchical Image Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James
2011-01-01
The Recursive Hierarchical Image Segmentation (RHSEG) software has been repackaged to provide a version of the RHSEG software that is not subject to patent restrictions and that can be released to the general public through NASA GSFC's Open Source release process. Like the Core HSEG Software Package, this Core RHSEG Software Package includes a visualization program called HSEGViewer along with a utility program, HSEGReader. It also includes an additional utility program called HSEGExtract. The unique feature of the Core RHSEG package is that it is a repackaging of the RHSEG technology specifically designed to avoid the inclusion of certain software technology. Unlike the Core HSEG package, it includes the recursive portions of the technology, but it does not include the processing-window artifact elimination technology.
Obtaining and estimating kinetic parameters from the literature.
Neves, Susana R
2011-09-20
This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on strategies for the development of mathematical models. Many biological processes can be represented mathematically as systems of ordinary differential equations (ODEs). Simulations with these mathematical models can provide mechanistic insight into the underlying biology of the system. A prerequisite for running simulations, however, is the identification of kinetic parameters that correspond closely with the biological reality. This lecture presents an overview of the steps required for the development of kinetic ODE models and describes experimental methods that can yield kinetic parameters and concentrations of reactants, which are essential for the development of kinetic models. Strategies are provided to extract necessary parameters from published data. The homework assignment requires students to find parameters appropriate for a well-studied biological regulatory system, convert these parameters into appropriate units, and interpret how different values of these parameters may lead to different biological behaviors. PMID:21934111
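In the spirit of the homework assignment, a minimal example: a reversible binding reaction written as ODEs and simulated for two values of the off-rate, showing how the parameter choice changes the steady state. All rate constants and concentrations are illustrative:

```python
# A + B <-> AB as a system of ODEs; vary the off-rate koff to see how
# different kinetic parameters lead to different biological behavior.
import numpy as np
from scipy.integrate import solve_ivp

kon = 0.1                        # association rate, 1/(uM*s) (assumed)

def binding(t, y, koff):
    a, b, ab = y                 # mass-action kinetics
    v = kon * a * b - koff * ab
    return [-v, -v, v]

y0 = [1.0, 1.0, 0.0]             # initial concentrations in uM
ab_final = {}
for koff in (0.01, 1.0):
    sol = solve_ivp(binding, (0.0, 200.0), y0, args=(koff,), rtol=1e-8)
    ab_final[koff] = sol.y[2, -1]
    print(f"koff={koff}: [AB] after 200 s ~ {ab_final[koff]:.3f} uM")
```

With a tight binder (koff = 0.01, Kd = 0.1 uM) most of the species ends up in complex, while a weak binder (koff = 1.0, Kd = 10 uM) leaves the pool largely unbound, illustrating the unit-conversion and interpretation steps the assignment asks for.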
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
2010-01-01
Background The use of structural equation models for the analysis of recursive and simultaneous relationships between phenotypes has become more popular recently. The aim of this paper is to illustrate how these models can be applied in animal breeding to achieve parameterizations of different levels of complexity and, more specifically, to model phenotypic recursion between three calving traits: gestation length (GL), calving difficulty (CD) and stillbirth (SB). All recursive models considered here postulate heterogeneous recursive relationships between GL and liabilities to CD and SB, and between liability to CD and liability to SB, depending on categories of GL phenotype. Methods Four models were compared in terms of goodness of fit and predictive ability: 1) standard mixed model (SMM), a model with unstructured (co)variance matrices; 2) recursive mixed model 1 (RMM1), assuming that residual correlations are due to the recursive relationships between phenotypes; 3) RMM2, assuming that correlations between residuals and contemporary groups are due to recursive relationships between phenotypes; and 4) RMM3, postulating that the correlations between genetic effects, contemporary groups and residuals are due to recursive relationships between phenotypes. Results For all the RMM considered, the estimates of the structural coefficients were similar. Results revealed a nonlinear relationship between GL and the liabilities both to CD and to SB, and a linear relationship between the liabilities to CD and SB. Differences in terms of goodness of fit and predictive ability of the models considered were negligible, suggesting that RMM3 is plausible. Conclusions The applications examined in this study suggest the plausibility of a nonlinear recursive effect from GL onto CD and SB. Also, the fact that the most restrictive model RMM3, which assumes that the only cause of correlation is phenotypic recursion, performs as well as the others indicates that the phenotypic recursion
Recursive Abstractions for Parameterized Systems
NASA Astrophysics Data System (ADS)
Jaffar, Joxan; Santosa, Andrew E.
We consider a language of recursively defined formulas about arrays of variables, suitable for specifying safety properties of parameterized systems. We then present an abstract interpretation framework which translates a parameterized system into a symbolic transition system that propagates such formulas as abstractions of underlying concrete states. The main contribution is a proof method for implications between the formulas, which then provides for an implementation of this abstract interpreter.
Recursive bias estimation and L2 boosting
Hengartner, Nicolas W.; Cornillon, Pierre-Andre; Matzner-Lober, Eric
2009-01-01
This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
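The bias-correction recursion the abstract describes is simply: smooth the current residual and add the correction back. A minimal sketch with a toy linear smoother (the matrix S below is fabricated so that I - S is a contraction; it stands in for a real kernel or spline smoother):

```python
import numpy as np

# Sketch of recursive bias correction / L2 boosting for a linear
# smoother S: repeatedly smooth the residual and add the result back.
# The toy S is fabricated so that I - S is a contraction; it stands in
# for a real kernel or spline smoother.

def l2_boost(S, y, n_iter):
    """Return the boosted fit after n_iter bias-correction steps."""
    fit = S @ y
    for _ in range(n_iter):
        fit = fit + S @ (y - fit)   # smooth the residual, add it back
    return fit

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = 0.5 * np.eye(5) + 0.01 * (A + A.T)   # symmetric, eigenvalues near 0.5
y = rng.standard_normal(5)
fit = l2_boost(S, y, 50)
```

When the spectrum of I - S lies inside the unit disk, the recursion converges to the data themselves, which is why the paper pairs boosting with a stopping rule: iterating forever removes the bias and the smoothing along with it.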
Meliopoulos, Sakis; Cokkinides, George; Fardanesh, Bruce; Hedrington, Clinton
2013-12-31
This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system, with update rates better than 60 times per second, was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity generating-unit parameter estimation and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based
Space-based tactical ballistic missile launch parameter estimation
NASA Astrophysics Data System (ADS)
Danis, Norman J.
1993-04-01
The influence of a priori uncertainties in launch time and trajectory fly-out profiles, along with sensor angle measurement errors, on the estimation of missile launch location and heading angle is examined. An error model was developed to compute the statistics of the estimation errors using a single pair of angle measurements, one from each of two satellites, or both from the same satellite platform. The measurements and estimation methods are described, and the estimation errors are derived for the hypothetical case of perfect knowledge of trajectory and launch time. On the basis of this ideal case, the errors are generalized to include trajectory and launch time uncertainties. The results are discussed with the aid of graphics output from a computer model which was run parametrically to highlight important dependences and sensitivities.
Symbolic dynamics approach to parameter estimation without initial value
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Hou, Xubo; Shen, Yi; He, Zhenya
2009-12-01
Symbolic dynamics, which partitions the infinite number of finite length trajectories into a finite number of trajectory sets, allows a simplified and “coarse-grained” description of the dynamics of a system with a limited number of symbols. In this Letter, we will show that control parameters affect dynamical characters of symbolic sequences. To be more specific, we will analyze how control parameters affect statistical property of Skewed Tent map symbolic sequences. Besides, we will also analyze how control parameters affect ergodic property of both Logistic map and Tent map symbolic sequences. Both theoretical and experimental results show that the above mentioned effects of control parameters discourage the use of chaotic symbolic sequences in cryptography. Furthermore, we will propose an improved scheme utilizing asymptotic deterministic randomness to avoid the undesirable effects.
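The statistical weakness the Letter describes is easy to demonstrate: the skewed tent map has a uniform invariant density, so the fraction of orbit points falling left of the break point p converges to p itself, and an eavesdropper can read the control parameter straight off the symbol statistics. A minimal sketch (the map definition is standard; the initial condition is arbitrary):

```python
# Sketch of the parameter leak in skewed tent map symbolic sequences.
# The map's invariant density is uniform on [0, 1], so the fraction of
# '0' symbols (orbit left of the break point p) converges to p itself.

def skewed_tent_symbols(p, x0, n):
    """Generate n symbols: 0 while the orbit is left of p, else 1."""
    x, symbols = x0, []
    for _ in range(n):
        symbols.append(0 if x < p else 1)
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
    return symbols

def estimate_p(symbols):
    """What an attacker computes from the symbolic sequence alone."""
    return symbols.count(0) / len(symbols)

seq = skewed_tent_symbols(p=0.3, x0=0.123456789, n=100000)
```

The estimated break point closely matches the secret control parameter, which is exactly why such symbolic sequences are problematic as cryptographic keystreams.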
Force field parameter estimation of functional perfluoropolyether lubricants
Smith, Robert; Chung, Pil Seung; Steckel, Janice A.; Jhon, Myung S.; Biegler, Lorenz T.
2011-01-01
The head disk interface in a hard disk drive can be considered one of the hierarchical multiscale systems, which require the hybridization of multiscale modeling methods with a coarse-graining procedure. However, fundamental force field parameters are required to enable the coarse-graining procedure from atomistic/molecular scale to mesoscale models. In this paper, we investigate beyond the molecular level and perform ab initio calculations to obtain the force field parameters. Intramolecular force field parameters for Zdol and Ztetraol were evaluated with truncated PFPE molecules to allow for feasible quantum calculations while still maintaining the characteristic chemical structure of the end groups. Using the harmonic approximation to the bond and angle potentials, the parameters were derived from the Hessian matrix, and the dihedral force constants were fit to the torsional energy profiles generated by a series of constrained molecular geometry optimizations.
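The step from a quantum-chemical energy surface to a harmonic force field parameter is a second-derivative evaluation at the equilibrium geometry, i.e., one entry of the Hessian. A one-dimensional sketch with a Morse potential standing in for the ab initio bond energy (well depth, width, and bond length below are illustrative, not PFPE values):

```python
import math

# Sketch of the Hessian-to-force-constant step for one bond. A Morse
# potential stands in for the ab initio energy surface; the parameters
# are illustrative, not PFPE values.

D, a, r0 = 4.0, 1.8, 1.5     # well depth, width, equilibrium bond length

def energy(r):
    """Morse bond energy E(r) = D * (1 - exp(-a*(r - r0)))**2."""
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

def force_constant(E, r_eq, h=1e-4):
    """Central finite-difference second derivative at r_eq,
    i.e., one diagonal entry of a Hessian matrix."""
    return (E(r_eq + h) - 2.0 * E(r_eq) + E(r_eq - h)) / h ** 2

k_bond = force_constant(energy, r0)   # analytic value is 2*D*a**2
```

In an actual parameterization the Hessian comes from the quantum-chemistry code in Cartesian coordinates and is transformed to internal coordinates, but the harmonic-approximation idea is the same.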
Estimation of beech pyrolysis kinetic parameters by Shuffled Complex Evolution.
Ding, Yanming; Wang, Changjian; Chaos, Marcos; Chen, Ruiyu; Lu, Shouxiang
2016-01-01
The pyrolysis kinetics of a typical biomass energy feedstock, beech, was investigated based on thermogravimetric analysis over a wide heating rate range from 5 K/min to 80 K/min. A three-component (corresponding to hemicellulose, cellulose and lignin) parallel decomposition reaction scheme was applied to describe the experimental data. The resulting kinetic reaction model was coupled to an evolutionary optimization algorithm (Shuffled Complex Evolution, SCE) to obtain model parameters. To the authors' knowledge, this is the first study in which SCE has been used in the context of thermogravimetry. The kinetic parameters were simultaneously optimized against data for the 10, 20 and 60 K/min heating rates, providing excellent fits to the experimental data. Furthermore, it was shown that the optimized parameters were applicable to heating rates (5 and 80 K/min) beyond those used to generate them. Finally, the predicted results based on the optimized parameters were contrasted with those based on literature values. PMID:26551654
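The coupling of a kinetic model to an evolutionary optimizer can be sketched with a one-component first-order model and SciPy's differential evolution standing in for SCE (SciPy ships no SCE implementation). The "data" below are synthetic and generated by the same integrator, so this illustrates only the optimization loop, not a real TGA fit:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch: fit Arrhenius kinetic parameters to a thermogravimetric
# conversion curve with an evolutionary optimizer. One first-order
# component stands in for the three-component scheme, and SciPy's
# differential evolution stands in for SCE.

R = 8.314  # gas constant, J/(mol K)

def conversion(logA, E, beta, T):
    """Euler-integrate first-order conversion over temperature grid T (K)."""
    alpha = np.zeros_like(T)
    for i in range(1, len(T)):
        k = 10.0 ** logA * np.exp(-E / (R * T[i - 1]))
        step = k / beta * (1.0 - alpha[i - 1]) * (T[i] - T[i - 1])
        alpha[i] = min(alpha[i - 1] + step, 1.0)
    return alpha

T = np.linspace(400.0, 700.0, 101)      # temperature grid, K
beta = 10.0 / 60.0                       # 10 K/min heating rate in K/s
data = conversion(7.0, 100e3, beta, T)   # synthetic "measured" curve

def sse(params):
    return np.sum((conversion(params[0], params[1], beta, T) - data) ** 2)

result = differential_evolution(sse, bounds=[(5.0, 9.0), (60e3, 140e3)],
                                seed=1, maxiter=200)
```

As the paper does, fitting several heating rates simultaneously would tighten the well-known compensation between the pre-exponential factor and the activation energy.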
Retrospective forecast of ETAS model with daily parameters estimate
NASA Astrophysics Data System (ADS)
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was underestimation of the forecasted events, because the model parameters were kept fixed during the test. Moreover, the absence in the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test period.
Estimating Building Simulation Parameters via Bayesian Structure Learning
Edwards, Richard E; New, Joshua Ryan; Parker, Lynne Edwards
2013-01-01
Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have several major problems. The two main issues are 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real-world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating the simulation engine costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 gigabytes of E+ simulation data; future extensions will use 200 terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with ground-truth building parameters. Our results indicate that the model accurately represents parameter means, with some deviation from the means, but does not support inferring parameter values that lie on the distribution's tail.
Stellar atmospheric parameter estimation using Gaussian process regression
NASA Astrophysics Data System (ADS)
Bu, Yude; Pan, Jingchang
2015-02-01
As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2015-06-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
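The stage-by-stage idea, using all known component series to estimate the parameters of one equation at a time, can be illustrated on the Rössler system x' = -y - z, y' = x + ay, z' = b + z(x - c): with x, y, z observed, the parameter a is recovered from the y-equation alone. The paper uses an evolutionary algorithm; plain least squares on finite-difference derivatives is a stand-in here that shows only the decomposition:

```python
import numpy as np

# Stage-by-stage sketch on the Rossler system. With all three time
# series known, the parameter a appears only in y' = x + a*y, so it can
# be estimated from that single component by least squares.

def rossler_series(a, b, c, dt, steps):
    """Forward-Euler time series of the Rossler system from (1, 1, 1)."""
    x, y, z = 1.0, 1.0, 1.0
    xs, ys, zs = [x], [y], [z]
    for _ in range(steps):
        x, y, z = (x + dt * (-y - z),
                   y + dt * (x + a * y),
                   z + dt * (b + z * (x - c)))
        xs.append(x); ys.append(y); zs.append(z)
    return np.array(xs), np.array(ys), np.array(zs)

dt = 1e-3
xs, ys, zs = rossler_series(a=0.2, b=0.2, c=5.7, dt=dt, steps=20000)

dy = np.diff(ys) / dt                     # finite-difference y'
a_hat = np.sum((dy - xs[:-1]) * ys[:-1]) / np.sum(ys[:-1] ** 2)
```

The same one-equation-at-a-time decomposition is what lets the improved scheme replace one high-dimensional search with several low-dimensional ones, whatever optimizer is used inside each stage.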
Stochastic Wireless Channel Modeling, Estimation and Identification from Measurements
Olama, Mohammed M; Djouadi, Seddik M; Li, Yanyan
2008-07-01
This paper is concerned with stochastic modeling of wireless fading channels, parameter estimation, and system identification from measurement data. Wireless channels are represented by stochastic state-space form, whose parameters and state variables are estimated using the expectation maximization algorithm and Kalman filtering, respectively. The latter are carried out solely from received signal measurements. These algorithms estimate the channel inphase and quadrature components and identify the channel parameters recursively. The proposed algorithm is tested using measurement data, and the results are presented.
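The filtering half of the proposed scheme can be sketched as a scalar Kalman filter tracking one channel component modeled as an AR(1) state. The AR coefficient and noise levels below are assumed known; in the paper they are identified from measurements with the EM algorithm:

```python
import numpy as np

# Sketch: scalar Kalman filter tracking one fading-channel component
# modeled as an AR(1) state x[k+1] = a*x[k] + w, observed as y = x + v.
# Parameters are assumed known here (the paper estimates them by EM).

def kalman_track(y, a, q, r):
    """Return filtered state estimates for the measurement sequence y."""
    xhat, p, out = 0.0, 1.0, []
    for yk in y:
        xhat, p = a * xhat, a * a * p + q     # predict
        gain = p / (p + r)                    # Kalman gain
        xhat = xhat + gain * (yk - xhat)      # measurement update
        p = (1.0 - gain) * p
        out.append(xhat)
    return np.array(out)

rng = np.random.default_rng(2)
a, q, r = 0.99, 0.01, 0.5                     # illustrative values
x = np.zeros(2000)
for k in range(1, 2000):                      # simulate the channel state
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(2000)
est = kalman_track(y, a, q, r)
```

In the full scheme this filter runs on the in-phase and quadrature components jointly, and the EM step re-estimates (a, q, r) from the filtered and smoothed states in alternation.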
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Estimated genetic parameters for carcass traits of Brahman cattle.
Riley, D G; Chase, C C; Hammond, A C; West, R L; Johnson, D D; Olson, T A; Coleman, S W
2002-04-01
Heritabilities and genetic and phenotypic correlations were estimated from feedlot and carcass data collected from Brahman calves (n = 504) in central Florida from 1996 to 2000. Data were analyzed using animal models in MTDFREML. Models included contemporary group (n = 44; groups of calves of the same sex, fed in the same pen, slaughtered on the same day) as a fixed effect and calf age in days at slaughter as a continuous variable. Estimated feedlot trait heritabilities were 0.64, 0.67, 0.47, and 0.26 for ADG, hip height at slaughter, slaughter weight, and shrink. The USDA yield grade estimated heritability was 0.71; heritabilities for component traits of yield grade, including hot carcass weight, adjusted 12th rib backfat thickness, loin muscle area, and percentage kidney, pelvic, and heart fat were 0.55, 0.63, 0.44, and 0.46, respectively. Heritability estimates for dressing percentage, marbling score, USDA quality grade, cutability, retail yield, and carcass hump height were 0.77, 0.44, 0.47, 0.71, 0.5, and 0.54, respectively. Estimated genetic correlations of adjusted 12th rib backfat thickness with ADG, slaughter weight, marbling score, percentage kidney, pelvic, and heart fat, and yield grade (0.49, 0.46, 0.56, 0.63, and 0.93, respectively) were generally larger than most literature estimates. Estimated genetic correlations of marbling score with ADG, percentage shrink, loin muscle area, percentage kidney, pelvic, and heart fat, USDA yield grade, cutability, retail yield, and carcass hump height were 0.28, 0.49, 0.44, 0.27, 0.45, -0.43, 0.27, and 0.43, respectively. Results indicate that sufficient genetic variation exists within the Brahman breed for design and implementation of effective selection programs for important carcass quality and yield traits. PMID:12008662
Four odontometric parameters as a forensic tool in stature estimation
Khangura, Rajbir Kaur; Sircar, Keya; Grewal, Dilpreet Singh
2015-01-01
Objective: The study was conducted to investigate the possibility of predicting the height of an individual using selected odontometric parameters as a forensic tool. Materials and Methods: The study sample consisted of 100 subjects (50 male and 50 female). Measurements of intercanine width (IC), interpremolar width (IP), mesiodistal dimension of the six permanent maxillary anterior teeth (CW), and arch length (AL, canine to canine) were made directly on the subjects. The data collected were subjected to statistical analysis, and a linear regression formula was obtained against each odontometric parameter. Results: A highly significant correlation was observed between height and both intercanine width and interpremolar width (P < 0.0001), whereas the correlation between height and the combined width of the six anterior teeth and arch length was found to be not significant. The linear regression equation using the formula y = c + mx was obtained for each odontometric parameter and also for combined parameters. Conclusion: The study hence concludes that two odontometric parameters, intercanine width and interpremolar width, can be used successfully to calculate the stature of an individual from fragmentary remains. PMID:26005302
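The regression step (y = c + mx against each odontometric parameter) is ordinary least squares. A sketch with fabricated intercanine-width data, not the study's measurements:

```python
import numpy as np

# Sketch of the study's regression step: fit stature = c + m * width by
# ordinary least squares, then predict stature for a new measurement.
# The widths and heights below are fabricated for illustration.

rng = np.random.default_rng(3)
ic = rng.uniform(30.0, 40.0, 50)                      # intercanine width, mm
height = 100.0 + 2.0 * ic + rng.normal(0.0, 1.0, 50)  # stature, cm

m, c = np.polyfit(ic, height, 1)                      # slope, intercept

# Estimate stature from a fragmentary-remains measurement of 35 mm:
pred = c + m * 35.0
```

The significance test the study reports (P-value on the correlation) is what decides whether a given odontometric parameter earns its own regression formula.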
Empirical estimation of school siting parameter towards improving children's safety
NASA Astrophysics Data System (ADS)
Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.
2014-02-01
Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to make sure that a particular school is located in a safe environment. School siting parameters are established by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect the safety and reputation of the school, not to mention the perception of the pupils and parents. There have been many studies reviewing school siting parameters, since these change in conjunction with an ever-changing world. In this study, the focus is the impact of school siting parameters on people with low income who live in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods: on site and off site. The on-site method is to give questionnaires to people, and the off-site method is to use a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the results obtained from the questionnaires. The output is a map of suitable safe distances from school to house. The results of this study will be useful to people with low income, as their children tend to walk to school rather than use transportation.
Ionospheric parameters estimation using GLONASS/GPS data
NASA Astrophysics Data System (ADS)
Sidorenko, K. A.; Vasenina, A. A.
2016-05-01
In recent years, GLONASS/GPS signals have been widely used for continuous monitoring of ionospheric plasma conditions. However, ground-based GLONASS/GPS stations sounding the ionosphere cannot provide high spatial resolution. To solve this problem, ionospheric models are used. In this article, an algorithm for adapting an ionospheric model to GLONASS/GPS data is proposed. The expediency of using the solar 10.7 cm radio flux index, which characterizes the level of solar activity, as the adaptation parameter is shown. The adaptation parameter is determined by minimizing the difference between the correlation matrices of the experimental and modeled TEC values. A reduction of errors in the determination of ionospheric parameters, in comparison with the simulation results, is confirmed by experiments in Moscow.
Phase noise effects on turbulent weather radar spectrum parameter estimation
NASA Technical Reports Server (NTRS)
Lee, Jonggil; Baxa, Ernest G., Jr.
1990-01-01
Accurate weather spectrum moment estimation is important in the use of weather radar for hazardous windshear detection. The effect of stable local oscillator (STALO) instability (jitter) on the spectrum moment estimation algorithm is investigated. Uncertainty in the stable local oscillator will affect both the transmitted signal and the received signal, since the STALO provides the transmitted and reference carriers. The proposed approach models STALO phase jitter as it affects the complex autocorrelation of the radar return. The results can therefore be interpreted in terms of any source of system phase jitter for which the model is appropriate and, in particular, may be considered as a cumulative effect of all radar system sources.
Laboratory longitudinal diffusion tests: 2. Parameter estimation by inverse analysis.
Takeda, M; Zhang, M; Nakajima, H; Hiratsuka, T
2008-04-28
This study focuses on the verification of test interpretations for different state analyses of diffusion experiments. Part 1 of this study identified that steady, quasi-steady and equilibrium state analyses for the through- and in-diffusion tests with solution reservoirs are generally feasible where the tracer is not highly sorptive. In Part 2 we investigate parameter identifiability in transient-state analysis of reservoir concentration variation using a numerical approach. For increased generality, the analytical models, objective functions and Jacobian matrix necessary for inverse analysis of transient-state data are reformulated using unified dimensionless parameters. In these dimensionless forms, the number of unknown parameters is reduced and a single dimensionless parameter represents the sorption property. The dimensionless objective functions are evaluated for individual test methods and parameter identifiability is discussed in relation to the sorption property. The effects of multiple minima and measurement error on parameter identifiability are also investigated. The main findings are that inverse problems for inlet and outlet reservoir concentration analyses are generally unstable and well-posed, respectively. Where the tracer is sorptive, the inverse problem for the inlet reservoir concentration analysis may have multiple minima. When insufficient measurement data is collected, multiple solutions may result and this should be taken into consideration when inversely analyzing data including that of inlet reservoir concentration. Verification of test interpretation by cross-checking different state analyses is feasible where the tracer is not highly sorptive. In an actual experiment, test interpretation validity is demonstrated through consistency between theory and practice for different state analyses. PMID:18353488
Proper estimation of hydrological parameters from flood forecasting aspects
NASA Astrophysics Data System (ADS)
Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Yanami, Hitoshi; Anai, Hirokazu
2016-04-01
The hydrological parameters of a flood forecasting model are normally calibrated against the entire hydrograph of past flood events by means of an error assessment function such as mean square error or relative error. However, specific parts of a hydrograph, i.e., the maximum discharge and the rising limb, are particularly important for practical flood forecasting, in the sense that underestimating them may lead to a more dangerous situation through delayed flood prevention and evacuation activities. We conducted numerical experiments to find the most suitable parameter set for practical flood forecasting without underestimation, in order to develop an error assessment method for calibration appropriate for flood forecasting. A distributed hydrological model developed at the Public Works Research Institute (PWRI) in Japan was applied to fifteen past floods in the 1,820 km² Gokase River basin in Japan. The model, with gridded two-layer tanks covering the entire target river basin, included hydrological parameters such as hydraulic conductivity, surface roughness and runoff coefficient, which were set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and the Digital Soil Map of the World (DSMW), were employed as input data for elevation, land use and soil type. Fourteen parameters were sampled in 10,001 parameter sets generated by Latin hypercube sampling within the search range of each parameter. Although the best reproduced case showed a high Nash-Sutcliffe efficiency of 0.9 for all flood events, the maximum discharge was underestimated in many flood cases. Therefore, two conditions, non-underestimation of the maximum discharge and of the rising limb of the hydrograph, were added to the calibration as flood forecasting aptitudes. The cases satisfying both conditions also showed a high Nash-Sutcliffe efficiency of 0.9 except in two flood cases
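The screening loop described in this abstract — Latin hypercube sampling of the parameter ranges, scoring by Nash-Sutcliffe efficiency, and a non-underestimation check on the peak — can be sketched in a few lines. This is a generic illustration; the function names and the pure-Python sampler are not from the paper:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling: each parameter range is split into
    n_samples equal strata and every stratum is sampled exactly once."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        order = list(range(n_samples))
        rng.shuffle(order)
        width = (hi - lo) / n_samples
        columns.append([lo + (i + rng.random()) * width for i in order])
    return [[col[k] for col in columns] for k in range(n_samples)]

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def no_underestimation(obs, sim):
    """Flood forecasting aptitude: the simulated peak must not fall
    below the observed peak discharge."""
    return max(sim) >= max(obs)
```

In the paper's setting, each sampled parameter set would drive the distributed model, and only sets passing the non-underestimation conditions while keeping a high NSE would be retained.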
The physical parameters estimation of physiologically worked heart prosthesis
NASA Astrophysics Data System (ADS)
Gawlikowski, M.; Pustelny, T.; Kustosz, R.
2006-11-01
One possible therapy for cardiac failure is mechanical heart support. The following types of ventricular assist devices (VAD) are used clinically: diaphragm displacement pumps, centrifugal pumps and axial pumps. Each supporting device produces a different hemodynamic effect and affects the circulatory system in a different way, which makes it impossible to compare the therapeutic effects obtained with different pumps. The lack of defined physical parameters describing the phenomena inside the pump and its influence on the circulatory system is an obstacle when designing new supporting devices. The goal of this investigation is to create a set of physical parameters that characterize the pump's operation and its cooperation with the circulatory system.
Wagner, B.J.
1992-01-01
Parameter estimation and contaminant source characterization are key steps in the development of a coupled groundwater flow and contaminant transport simulation model. Here a methodology for simultaneous model parameter estimation and source characterization is presented. The parameter estimation/source characterization inverse model combines groundwater flow and contaminant transport simulation with non-linear maximum likelihood estimation to determine optimal estimates of the unknown model parameters and source characteristics based on measurements of hydraulic head and contaminant concentration. First-order uncertainty analysis provides a means for assessing the reliability of the maximum likelihood estimates and evaluating the accuracy and reliability of the flow and transport model predictions. A series of hypothetical examples is presented to demonstrate the ability of the inverse model to solve the combined parameter estimation/source characterization inverse problem. Hydraulic conductivities, effective porosity, longitudinal and transverse dispersivities, boundary flux, and contaminant flux at the source are estimated for a two-dimensional groundwater system. In addition, characterization of the history of contaminant disposal or location of the contaminant source is demonstrated. Finally, the problem of estimating the statistical parameters that describe the errors associated with the head and concentration data is addressed. A stage-wise estimation procedure is used to jointly estimate these statistical parameters along with the unknown model parameters and source characteristics. © 1992.
Estimating Cabbage Physical Parameters Using Remote Sensing Technology
Technology Transfer Automated Retrieval System (TEKTRAN)
Remote sensing has long been used as a tool to extract plant growth and yield information for many crops, but little research has been conducted on cabbage (Brassica oleracea) with this technology. The objective of this study was to evaluate aerial photography and field reflectance spectra for estim...
A Simplified Estimation of Latent State--Trait Parameters
ERIC Educational Resources Information Center
Hagemann, Dirk; Meyerhoff, David
2008-01-01
The latent state-trait (LST) theory is an extension of the classical test theory that allows one to decompose a test score into a true trait, a true state residual, and an error component. For practical applications, the variances of these latent variables may be estimated with standard methods of structural equation modeling (SEM). These…
EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES
Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...
Parameter estimation of multiple item response profile model.
Cho, Sun-Joo; Partchev, Ivailo; De Boeck, Paul
2012-11-01
Multiple item response profile (MIRP) models are models with crossed fixed and random effects. At least one between-person factor is crossed with at least one within-person factor, and the persons nested within the levels of the between-person factor are crossed with the items within levels of the within-person factor. Maximum likelihood estimation (MLE) of models for binary data with crossed random effects is challenging. This is because the marginal likelihood does not have a closed form, so that MLE requires numerical or Monte Carlo integration. In addition, the multidimensional structure of MIRPs makes the estimation complex. In this paper, three different estimation methods to meet these challenges are described: the Laplace approximation to the integrand; hierarchical Bayesian analysis, a simulation-based method; and an alternating imputation posterior with adaptive quadrature as the approximation to the integral. In addition, this paper discusses the advantages and disadvantages of these three estimation methods for MIRPs. The three algorithms are compared in a real data application and a simulation study was also done to compare their behaviour. PMID:22070786
Unconstrained parameter estimation for assessment of dynamic cerebral autoregulation.
Chacón, M; Nuñez, N; Henríquez, C; Panerai, R B
2008-10-01
Measurement of dynamic cerebral autoregulation (CA), the transient response of cerebral blood flow (CBF) to changes in arterial blood pressure (ABP), has been performed with an index of autoregulation (ARI), related to the parameters of a second-order differential equation model, namely gain (K), damping factor (D) and time constant (T). Limitations of the ARI were addressed by increasing its numerical resolution and generalizing the parameter space. In 16 healthy subjects, recordings of ABP (Finapres) and CBF velocity (ultrasound Doppler) were performed at rest, before, during and after 5% CO(2) breathing, and for six repeated thigh cuff maneuvers. The unconstrained model produced lower predictive error (p < 0.001) than the original model. Unconstrained parameters (K'-D'-T') were significantly different from K-D-T but were still sensitive to different measurement conditions, such as the under-regulation induced by hypercapnia. The intra-subject variability of K' was significantly lower than that of the ARI and this parameter did not show the unexpected occurrences of zero values as observed with the ARI and the classical value of K. These results suggest that K' could be considered as a more stable and reliable index of dynamic autoregulation than ARI. Further studies are needed to validate this new index under different clinical conditions. PMID:18799835
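The second-order model behind the ARI, with gain K, damping factor D and time constant T, can be sketched as a short discrete-time simulation. The discretization below is a simplified, illustrative variant in the spirit of the Tiecks-style model the ARI is built on, not the authors' exact implementation; p is the normalized ABP deviation and v the normalized CBF velocity:

```python
def simulate_autoregulation(p, dt, K, D, T):
    """Second-order dynamic autoregulation model with gain K, damping
    factor D and time constant T (simplified, normalized discretization)."""
    x1 = x2 = 0.0
    v = []
    for pk in p:                      # pk: normalized ABP deviation
        x1 += dt * (pk - x2) / T
        x2 += dt * (x1 - 2.0 * D * x2) / T
        v.append(1.0 + pk - K * x2)   # normalized CBF velocity
    return v
```

At steady state this model settles to v = 1 + p·(1 − K), so K = 1 restores baseline flow (perfect autoregulation) while K < 1 leaves a residual change — which is why a stable, unconstrained K' can serve as an autoregulation index.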
Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix
Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.
2012-07-01
An uncertainty estimation method is proposed for core safety parameters for which measurement values are not available. We empirically recognize correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through the direct Monte Carlo sampling method. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is available. Furthermore, the correlations can also be used to reduce the uncertainties of core safety parameters. (authors)
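The propagation step this abstract describes — combining a cross-section covariance matrix C with sensitivity coefficients S to obtain the covariance, and hence the correlations, among core parameters (R = S C Sᵀ) — can be illustrated with toy numbers. All matrices below are invented for illustration only:

```python
import numpy as np

# Sandwich rule: covariance among core parameters R = S C S^T, where C is
# the cross-section covariance and S holds sensitivity coefficients.
# All numbers below are invented for illustration.
C = np.array([[4.0, 1.0],
              [1.0, 2.0]])        # covariance of two cross sections
S = np.array([[0.5, 0.2],         # sensitivities of core parameter A
              [0.4, 0.3]])        # sensitivities of core parameter B
R = S @ C @ S.T                   # covariance among the two core parameters
corr_AB = R[0, 1] / np.sqrt(R[0, 0] * R[1, 1])
```

A strong corr_AB means a measurement of A narrows the uncertainty of the unmeasured B, which is the mechanism the abstract exploits.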
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Parameter estimation of analog circuits based on the fractional wavelet method
NASA Astrophysics Data System (ADS)
Yong, Deng; He, Zhang
2015-03-01
To address the problem of parameter estimation in analog circuits, a new approach is proposed. The approach uses the fractional wavelet to derive a Volterra series model of the circuit under test (CUT). Using a gradient search algorithm on the Volterra model, the unknown parameters of the CUT are estimated and the Volterra model is identified. Simulations show that the proposed method gives better parameter estimation results than other parameter estimation methods. Project supported by the Key Research Project of Sichuan Provincial Department of Education, China (No. 13ZA0186).
Geo-Statistical Approach to Estimating Asteroid Exploration Parameters
NASA Technical Reports Server (NTRS)
Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles
2011-01-01
NASA's vision for space exploration calls for a human visit to a near earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper two approaches to formulation and scheduling of human exploration activities are compared given uncertain information regarding the asteroid prior to visit. In the first approach a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach or baseline plan where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.
Marker-based estimation of genetic parameters in genomics.
Hu, Zhiqiu; Yang, Rong-Cai
2014-01-01
Linear mixed model (LMM) analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS) as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing 'big' genomic data sets. PMID:25025305
Estimation of groundwater recharge parameters by time series analysis.
Naff, R.L.; Gutjahr, A.L.
1983-01-01
A model is proposed that relates water level fluctuations in a Dupuit aquifer to effective precipitation at the top of the unsaturated zone. Effective precipitation, defined herein as that portion of precipitation which becomes recharge, is related to precipitation measured in a nearby gage by a two-parameter function. A second-order stationary assumption is used to connect the spectra of effective precipitation and water level fluctuations.-from Authors
Estimability of geodetic parameters from space VLBI observables
NASA Technical Reports Server (NTRS)
Adam, Jozsef
1990-01-01
The feasibility of space very long base interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from coordinate system definition and critical configuration are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulation data in the frame of some numerical experiments. Singularity due to datum defect is confirmed.
Sensor-less parameter estimation of electromagnetic transducer and experimental verification
NASA Astrophysics Data System (ADS)
Ikegame, Toru; Takagi, Kentaro; Inoue, Tsuyoshi; Jikuya, Ichiro
2015-04-01
In this paper, a new sensor-less parameter estimation method is proposed for electromagnetic shunt damping. The purpose is to estimate the parameters of an electromagnetic transducer and a vibrating structure. Only frequency-domain measurements of the electrical admittance are assumed to be available, with no other sensor measurements; the estimation problem is therefore nontrivial. Two types of numerical optimization are presented: a linear optimization to select an initial seed and a nonlinear optimization to determine the final estimate. The effectiveness of the method is demonstrated by vibration control experiments as well as parameter estimation experiments.
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…
NASA Astrophysics Data System (ADS)
Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The use of parameter ranges and model complexities has significant impacts on the frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
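BSI couples Bayes' theorem, importance sampling and very fast simulated annealing. A minimal annealing loop over a bounded one-dimensional objective can be sketched as follows; this is a generic Metropolis-style sketch, not the very fast simulated annealing variant or the authors' code:

```python
import math
import random

def anneal(f, lo, hi, n_iter=2000, seed=0):
    """Minimal simulated-annealing minimizer: Gaussian proposals whose
    width shrinks with the temperature; worse moves are accepted with
    Metropolis probability exp(-delta/T). A sketch only."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best, fbest = x, fx
    for k in range(1, n_iter + 1):
        T = 1.0 / k                       # fast cooling schedule
        y = min(max(x + rng.gauss(0.0, T * (hi - lo)), lo), hi)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best
```

In BSI the accepted samples would additionally be reweighted by importance sampling to approximate the marginal posterior densities of the parameters.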
Flight investigation of various control inputs intended for parameter estimation
NASA Technical Reports Server (NTRS)
Shafer, M. F.
1984-01-01
NASA's F-8 digital fly-by-wire aircraft has been subjected to stability and control derivative assessments, leading to the proposal of improved control inputs for more efficient control derivative estimation. This will reduce program costs by reducing flight test and data analysis requirements. Inputs were divided into sinusoidal types and cornered types. Those with corners produced the best set of stability and control derivatives for the unaugmented flight control system mode. Small inputs are noted to have provided worse derivatives than larger ones.
Surface Parameter Estimation using Interferometric Coherences between Different Polarisations
NASA Astrophysics Data System (ADS)
Hajnsek, I.; Alvarez-Perez, J.-L.; Papathanassiou, K. P.; Moreira, A.; Cloude, S. R.
2003-04-01
In this work the potential of using the interferometric coherence at different polarisations over surface scatterers in order to extract information about surface parameters is investigated. For the first time the sensitivity of the individual coherence contributions to surface roughness and moisture conditions is discussed and simulated using a novel hybrid polarimetric surface scattering model. The model itself consists of two components, a coherent part obtained from the extended Bragg model and an incoherent part obtained from the integral equation model. Finally, experimental airborne SAR data are used to validate the modeled elements of the Pauli scattering vector.
Determining the Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1995-01-01
An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. In this work, an expression for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates with colored residuals is developed and validated. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle (HARV). As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, while conventional parameter accuracy measures were optimistic.
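The gap between a naive parameter covariance that assumes white residuals and a corrected covariance that accounts for colored residuals can be shown on a toy linear model. This generalized-least-squares-style "sandwich" correction is illustrative of the problem the abstract addresses and is not the authors' exact estimator:

```python
import numpy as np

# Toy linear model y = a + b*t whose residuals follow an AR(1) process,
# mimicking the colored output residuals seen in flight test data.
n = 200
t = np.linspace(0.0, 4.0, n)
X = np.column_stack([np.ones(n), t])

phi, sig_e2 = 0.9, 1.0                 # AR(1) correlation and variance
i = np.arange(n)
Omega = sig_e2 * phi ** np.abs(i[:, None] - i[None, :])  # residual covariance

XtX_inv = np.linalg.inv(X.T @ X)
naive = sig_e2 * XtX_inv               # assumes white residuals: too optimistic
sandwich = XtX_inv @ X.T @ Omega @ X @ XtX_inv  # accounts for the coloring
```

For positively correlated residuals the sandwich variances exceed the naive ones, which is exactly why uncorrected maximum likelihood accuracy measures look optimistic relative to the scatter across repeated maneuvers.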
Estimation of forest parameters using airborne laser scanning data
NASA Astrophysics Data System (ADS)
Cohen, J.
2015-12-01
Methods for the estimation of forest characteristics by airborne laser scanning (ALS) data have been introduced by several authors. Tree height (TH) and canopy closure (CC) describing the forest properties can be used in forest, construction and industry applications, as well as research and decision making. The National Land Survey has been collecting ALS data from Finland since 2008 to generate a nationwide high resolution digital elevation model. Although this data has been collected in leaf-off conditions, it still has the potential to be utilized in forest mapping. A method where this data is used for the estimation of CC and TH in the boreal forest region is presented in this paper. Evaluation was conducted in eight test areas across Finland by comparing the results with corresponding Multi-Source National Forest Inventory (MS-NFI) datasets. The ALS based CC and TH maps were generally in a good agreement with the MS-NFI data. As expected, deciduous forests caused some underestimation in CC and TH, but the effect was not major in any of the test areas. The processing chain has been fully automated enabling fast generation of forest maps for different areas.
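Point-cloud metrics like the CC and TH estimated above are often defined very simply. The sketch below uses common illustrative definitions (fraction of returns above a height threshold, upper height percentile), which are assumptions, not necessarily the paper's exact estimators:

```python
def canopy_metrics(heights, canopy_threshold=2.0, percentile=95):
    """Toy ALS metrics: canopy closure (CC) as the fraction of returns
    above a height threshold, and tree height (TH) as an upper height
    percentile. Illustrative definitions, not the paper's estimators."""
    cc = sum(1 for h in heights if h > canopy_threshold) / len(heights)
    ordered = sorted(heights)
    k = round(percentile / 100 * (len(ordered) - 1))
    return cc, ordered[k]
```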
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
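A classic instance of the measurement-error modeling described above is the errors-in-variables straight-line fit (Deming regression), where both variables carry measurement error. This closed-form example illustrates the idea and is not the paper's iterative algorithm:

```python
import math

def deming(x, y, delta=1.0):
    """Errors-in-variables straight-line fit (Deming regression).

    Unlike ordinary least squares, which attributes all error to y,
    this maximum-likelihood fit assumes both x and y are measured with
    error, with variance ratio delta = var(err_y) / var(err_x)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, ybar - slope * xbar
```

Ignoring the error in x (ordinary least squares) biases the slope toward zero; the Deming fit removes that attenuation when the variance ratio is known.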
Estimating canopy fuel parameters for Atlantic Coastal Plain forest types.
Parresol, Bernard, R.
2007-01-15
It is necessary to quantify forest canopy characteristics to assess crown fire hazard, prioritize treatment areas, and design treatments to reduce crown fire potential. A number of fire behavior models such as FARSITE, FIRETEC, and NEXUS require as input four particular canopy fuel parameters: 1) canopy cover, 2) stand height, 3) crown base height, and 4) canopy bulk density. These canopy characteristics must be mapped across the landscape at high spatial resolution to accurately simulate crown fire. Currently no models exist to forecast these four canopy parameters for forests of the Atlantic Coastal Plain, a region that supports millions of acres of loblolly, longleaf, and slash pine forests as well as pine-broadleaf forests and mixed species broadleaf forests. Many forest cover types are recognized, too many to efficiently model. For expediency, forests of the Savannah River Site are categorized as belonging to 1 of 7 broad forest type groups, based on composition: 1) loblolly pine, 2) longleaf pine, 3) slash pine, 4) pine-hardwood, 5) hardwood-pine, 6) hardwoods, and 7) cypress-tupelo. These 7 broad forest types typify forests of the Atlantic Coastal Plain region, from Maryland to Florida.
Simultaneous parameters identifiability and estimation of an E. coli metabolic network model.
Pontes Freitas Alberton, Kese; Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende
2015-01-01
This work proposes a procedure for simultaneous parameters identifiability and estimation in metabolic networks in order to overcome difficulties associated with lack of experimental data and large number of parameters, a common scenario in the modeling of such systems. As case study, the complex real problem of parameters identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated, composed by 18 differential ordinary equations and 35 kinetic rates, containing 125 parameters. With the procedure, model fit was improved for most of the measured metabolites, achieving 58 parameters estimated, including 5 unknown initial conditions. The results indicate that simultaneous parameters identifiability and estimation approach in metabolic networks is appealing, since model fit to the most of measured metabolites was possible even when important measures of intracellular metabolites and good initial estimates of parameters are not available. PMID:25654103
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
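A hybrid global-local search of the kind described — a genetic algorithm for exploration followed by a local polish — can be sketched as follows. The local stage below is a simple finite-difference descent standing in for the truncated-Newton step, and all names and settings are illustrative:

```python
import random

def hybrid_minimize(f, bounds, pop=30, gens=40, seed=1):
    """Global-plus-local search: a small genetic algorithm explores the
    bounded space, then a finite-difference descent (a stand-in for the
    truncated-Newton step) polishes the best individual."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[: pop // 2]                      # keep the fitter half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)            # crossover by averaging
            child = [min(max((ai + bi) / 2 + rng.gauss(0.0, 0.1 * (hi - lo)),
                             lo), hi)
                     for ai, bi, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        P = elite + children
    x = min(P, key=f)
    step, h = 0.1, 1e-6                            # local polish
    for _ in range(200):
        g = [(f(x[:i] + [x[i] + h] + x[i + 1:]) - f(x)) / h
             for i in range(dim)]
        x_new = [xi - step * gi for xi, gi in zip(x, g)]
        if f(x_new) < f(x):
            x = x_new
        else:
            step *= 0.5
    return x
```

In the paper's setting f would be the groundwater model's performance function, with prior information on the least sensitive parameter constraining the search.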
Kim, Jeongtae; Seok, Jiyeong
2013-03-11
We analyze the statistical properties of the maximum likelihood estimator, least squares estimator, and Pearson's χ²-based and Neyman's χ²-based estimators for the estimation of decay constants and amplitudes for fluorescence lifetime imaging. Our analysis is based on the linearization of the gradient of the objective functions around the true parameters. The analysis shows that only the maximum likelihood estimator based on the Poisson likelihood function yields unbiased and efficient estimation. All other estimators yield either biased or inefficient estimations. We validate our analysis by using simulations. PMID:23482174
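The Poisson-likelihood MLE favored by the analysis above can be illustrated on idealized, noise-free decay data; the bin width, amplitude, and lifetime below are invented for the sketch, and a coarse grid search stands in for a proper optimizer.

```python
import math

# Idealized noise-free photon counts in 0.1 ns bins for a single-exponential
# decay; amplitude A = 100 and lifetime tau = 2.0 ns are invented values.
A_true, tau_true, dt = 100.0, 2.0, 0.1
bins = [i * dt for i in range(100)]
counts = [A_true * math.exp(-t / tau_true) for t in bins]

def neg_poisson_loglik(A, tau):
    # Negative Poisson log-likelihood, dropping the constant log(k!) term.
    nll = 0.0
    for t, k in zip(bins, counts):
        mu = A * math.exp(-t / tau)
        nll -= k * math.log(mu) - mu
    return nll

# Coarse grid search for the maximum likelihood estimates.
A_hat, tau_hat = min(
    ((A, tau)
     for A in range(80, 121)
     for tau in [1.5 + 0.01 * j for j in range(101)]),
    key=lambda p: neg_poisson_loglik(*p),
)
```

Because each Poisson term is maximized when the model mean equals the observed count, the noise-free data make the true parameters the unique grid optimum.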
Estimation of point explosion parameters by body-wave spectra
NASA Astrophysics Data System (ADS)
Tsereteli, Nino; Kereselidze, Zurab
2014-05-01
A radial model of a point explosion is presented. In this model, the epicentral region consists of two qualitatively different spherical zones. In the inner sphere, the explosion energy is spent on plastic deformation. The outer spherical shell, where the medium behaves elastically, is the region where body waves are generated. The frequency spectrum of these waves represents the intrinsic frequencies of the natural oscillations of the point explosion. The Euler radial equation was used in modeling this process. Using the analytical expression for the discrete frequency spectrum, it is possible to solve the inverse seismological problem, that is, to calculate the internal and external radii of the elastic zone. Finally, a sufficiently accurate analytic solution is obtained for determining the linear dimensions of the point-explosion region and estimating the energy released.
Being surveyed can change later behavior and related parameter estimates.
Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Karlan, Dean S; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit
2011-02-01
Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314
BIASES IN PHYSICAL PARAMETER ESTIMATES THROUGH DIFFERENTIAL LENSING MAGNIFICATION
Er Xinzhong; Ge Junqiang; Mao Shude
2013-06-20
We study the lensing magnification effect on background galaxies. Differential magnification due to different magnifications of different source regions of a galaxy will change the lensed composite spectra. The derived properties of the background galaxies are therefore biased. For simplicity, we model galaxies as a superposition of an axis-symmetric bulge and a face-on disk in order to study the differential magnification effect on the composite spectra. We find that some properties derived from the spectra (e.g., velocity dispersion, star formation rate, and metallicity) are modified. Depending on the relative positions of the source and the lens, the inferred results can be either over- or underestimates of the true values. In general, for an extended source at strong lensing regions with high magnifications, the inferred physical parameters (e.g., metallicity) can be strongly biased. Therefore, detailed lens modeling is necessary to obtain the true properties of the lensed galaxies.
Tumor parameter estimation considering the body geometry by thermography.
Hossain, Shazzat; Mohammadi, Farah A
2016-09-01
Implementation of non-invasive, non-contact, radiation-free thermal diagnostic tools requires an accurate correlation between surface temperature and interior physiology derived from living bio-heat phenomena. Such associations in the chest, forearm, and natural and deformed breasts have been investigated using finite element analysis (FEA), where the geometry and heterogeneity of an organ are accounted for by creating anatomically accurate FEA models. The quantitative links are used in the proposed evolutionary methodology for estimating unknown physio-thermo-biological parameters, including the depth, size and metabolic rate of the underlying nodule. A custom genetic algorithm (GA) is tailored to parameterize a tumor by minimizing a fitness function. The study employed the finite element method to develop simulated data sets and the gradient matrix. Furthermore, simulated thermograms are obtained by enveloping the data sets with ±10% random noise. PMID:27416548
Using Spreadsheets to Help Students Think Recursively
ERIC Educational Resources Information Center
Webber, Robert P.
2012-01-01
Spreadsheets lend themselves naturally to recursive computations, since a formula can be defined as a function of one or more preceding cells. A hypothesized closed form for the "n"th term of a recursive sequence can be tested easily by using a spreadsheet to compute a large number of the terms. Similarly, a conjecture about the limit of a series…
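The spreadsheet check described above translates directly into code: compute many terms of the recurrence (as a spreadsheet column would) and compare them against the conjectured closed form. The recurrence below is a standard textbook example, not one from the article.

```python
# A "spreadsheet column" for the recurrence a_0 = 0, a_n = 2*a_{n-1} + 1,
# checked against the conjectured closed form a_n = 2**n - 1.
terms = [0]
for n in range(1, 50):
    terms.append(2 * terms[-1] + 1)

closed_form = [2 ** n - 1 for n in range(50)]
match = terms == closed_form
```

Agreement over many terms does not prove the closed form, but a single mismatch refutes it, which is exactly the pedagogical use the article describes.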
Assessing the Effect of Model-Data Misfit on the Invariance Property of IRT Parameter Estimates.
ERIC Educational Resources Information Center
Fan, Xitao; Ping, Yin
This study empirically investigated the potential negative effect of item response theory (IRT) model-data misfit on the degree of invariance of: (1) IRT item parameter estimates (item difficulty and discrimination); and (2) IRT person ability parameter estimates. A large-scale statewide assessment program test database was used, for which the…
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
ERIC Educational Resources Information Center
Sinar, Evan F.; Zickar, Michael J.
2002-01-01
Examined the influence of deviant scale items on item parameter estimates of focal scale items and person parameter estimates through a comparison of item response theory (IRT) and classical test theory (CTT) models. Used Monte Carlo methods to explore results from a pilot investigation of job attitude data. Discusses implications for researchers…
Eakin, T; Shouman, R; Qi, Y; Liu, G; Witten, M
1995-05-01
Studies of the biology of aging (both experimental and evolutionary) frequently involve the estimation of parameters arising in various multi-parameter survival models such as the Gompertz or Weibull distribution. Standard parameter estimation methodologies, such as maximum likelihood estimation (MLE) or nonlinear regression (NLR), require knowledge of the actual life spans or their explicit algebraic equivalents in order to provide reliable parameter estimates. Many fundamental biological discussions and conclusions are highly dependent upon accurate estimates of these survival parameters (this has historically been the case in the study of genetic and environmental effects on longevity and the evolutionary biology of aging). In this article, we examine some of the issues arising in the estimation of gerontologic survival model parameters. We not only address issues of accuracy when the original life-span data are unknown, we consider the accuracy of the estimates even when the exact life spans are known. We examine these issues as applied to known experimental data on diet restriction and we fit the frequently used, two-parameter Gompertzian survival distribution to these experimental data. Consequences of methodological misuse are demonstrated and subsequently related to the values of the final parameter estimates and their associated errors. These results generalize to other multiparametric distributions such as the Weibull, Makeham, and logistic survival distributions. PMID:7743396
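A minimal sketch of maximum likelihood estimation for the two-parameter Gompertz distribution on simulated life spans; the hazard parameters, sample size, and coarse grid search are all illustrative, not the authors' procedure.

```python
import math, random

rng = random.Random(42)
a_true, b_true = 0.01, 0.1  # illustrative Gompertz hazard h(t) = a*exp(b*t)

# Simulate life spans by inverting the Gompertz survival function
# S(t) = exp(-(a/b) * (exp(b*t) - 1)).
def sample():
    u = rng.random()
    return math.log(1.0 - (b_true / a_true) * math.log(u)) / b_true

lifespans = [sample() for _ in range(2000)]

def loglik(a, b):
    # Gompertz log-density: log f(t) = log a + b*t - (a/b)*(exp(b*t) - 1)
    return sum(math.log(a) + b * t - (a / b) * math.expm1(b * t)
               for t in lifespans)

# Coarse grid-search MLE, a stand-in for a proper optimizer.
a_grid = [0.005 + 0.0005 * i for i in range(31)]  # 0.005 .. 0.020
b_grid = [0.07 + 0.002 * j for j in range(31)]    # 0.07 .. 0.13
a_hat, b_hat = max(((a, b) for a in a_grid for b in b_grid),
                   key=lambda p: loglik(*p))
```

Note that the likelihood uses the individual life spans, echoing the article's point that reliable MLE requires the actual life-span data rather than summaries.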
Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.
ERIC Educational Resources Information Center
Harwell, Michael R.; And Others
1988-01-01
The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)
Conjugate gradient algorithms using multiple recursions
Barth, T.; Manteuffel, T.
1996-12-31
Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
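The "single recursion of a certain form" discussed above is, in the symmetric positive-definite case, the familiar direction-vector update of standard conjugate gradients; a minimal sketch on a small SPD system:

```python
# Standard conjugate gradient with its single short recursion for the
# direction vectors, on a small symmetric positive-definite system.
def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0
    p = r[:]                      # direction vector
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # Short recursion: new direction from residual and previous direction.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
```

The unitary and shifted-unitary extensions mentioned in the abstract replace this single recursion with short double recursions; the SPD case above is only the baseline.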
Estimation of cauliflower mass transfer parameters during convective drying
NASA Astrophysics Data System (ADS)
Sahin, Medine; Doymaz, İbrahim
2016-05-01
The study was conducted to evaluate the effect of pre-treatments such as citric acid and hot water blanching and air temperature on drying and rehydration characteristics of cauliflower slices. Experiments were carried out at four different drying air temperatures of 50, 60, 70 and 80 °C with the air velocity of 2.0 m/s. It was observed that drying and rehydration characteristics of cauliflower slices were greatly influenced by air temperature and pre-treatment. Six commonly used mathematical models were evaluated to predict the drying kinetics of cauliflower slices. The Midilli et al. model described the drying behaviour of cauliflower slices at all temperatures better than other models. The values of effective moisture diffusivities (D eff ) were determined using Fick's law of diffusion and were between 4.09 × 10-9 and 1.88 × 10-8 m2/s. Activation energy was estimated by an Arrhenius type equation and was 23.40, 29.09 and 26.39 kJ/mol for citric acid, blanch and control samples, respectively.
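The Arrhenius step described above amounts to a linear fit of ln(D_eff) against 1/T. The sketch below uses synthetic diffusivities generated from an assumed activation energy near the reported blanched-sample value; it is not the paper's data.

```python
import math

R = 8.314  # J/(mol K)

# Synthetic effective diffusivities generated from an assumed activation
# energy of 29 kJ/mol at the four drying temperatures used in the study.
Ea_true, D0 = 29000.0, 1e-4
temps_K = [t + 273.15 for t in (50, 60, 70, 80)]
Deff = [D0 * math.exp(-Ea_true / (R * T)) for T in temps_K]

# Arrhenius plot: ln(Deff) = ln(D0) - (Ea/R) * (1/T); fit the line by
# ordinary least squares and read Ea off the slope.
xs = [1.0 / T for T in temps_K]
ys = [math.log(d) for d in Deff]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
Ea_est = -slope * R  # J/mol
```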
Adaptive neuro-fuzzy estimation of optimal lens system parameters
NASA Astrophysics Data System (ADS)
Petković, Dalibor; Pavlović, Nenad T.; Shamshirband, Shahaboddin; Mat Kiah, Miss Laiha; Badrul Anuar, Nor; Idna Idris, Mohd Yamani
2014-04-01
Due to the popularization of digital technology, the demand for high-quality digital products has become critical. The quantitative assessment of image quality is an important consideration in any type of imaging system. Therefore, developing a design that combines the requirements of good image quality is desirable. Lens system design represents a crucial factor for good image quality. Optimization procedure is the main part of the lens system design methodology. Lens system optimization is a complex non-linear optimization task, often with intricate physical constraints, for which there is no analytical solutions. Therefore lens system design provides ideal problems for intelligent optimization algorithms. There are many tools which can be used to measure optical performance. One very useful tool is the spot diagram. The spot diagram gives an indication of the image of a point object. In this paper, one optimization criterion for lens system, the spot size radius, is considered. This paper presents new lens optimization methods based on adaptive neuro-fuzzy inference strategy (ANFIS). This intelligent estimator is implemented using Matlab/Simulink and the performances are investigated.
Computational approaches to parameter estimation and model selection in immunology
NASA Astrophysics Data System (ADS)
Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.
2005-12-01
One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
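Ranking parametrized models by Akaike indices, as done above for the ODE/DDE family, can be sketched on invented data using the least-squares form of AIC; the two candidate models here are illustrative only.

```python
import math, random

rng = random.Random(0)

# Synthetic observations from a linear trend with noise; we rank a constant
# model (k = 1 parameter) against a linear model (k = 2) by AIC.
xs = [i / 10.0 for i in range(50)]
ys = [1.0 + 2.0 * x + rng.gauss(0.0, 0.2) for x in xs]
n = len(xs)

def aic_least_squares(sse, k):
    # Least-squares form of AIC: n*ln(SSE/n) + 2k.
    return n * math.log(sse / n) + 2 * k

# Constant model: sample mean.
mu = sum(ys) / n
sse0 = sum((y - mu) ** 2 for y in ys)

# Linear model: ordinary least squares.
xbar = sum(xs) / n
slope = (sum((x - xbar) * (y - mu) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = mu - slope * xbar
sse1 = sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))

aic0 = aic_least_squares(sse0, 1)
aic1 = aic_least_squares(sse1, 2)
```

The model with the smaller AIC sits higher in the hierarchy; the +2k term penalizes the extra parameter, so the richer model wins only when its fit improvement is large enough.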
A clustering approach for estimating parameters of a profile hidden Markov model.
Aghdam, Rosa; Pezeshk, Hamid; Malekpour, Seyed Amir; Shemehsavar, Soudabeh; Eslahchi, Changiz
2013-01-01
A Profile Hidden Markov Model (PHMM) is a standard form of a Hidden Markov Models used for modeling protein and DNA sequence families based on multiple alignment. In this paper, we implement Baum-Welch algorithm and the Bayesian Monte Carlo Markov Chain (BMCMC) method for estimating parameters of small artificial PHMM. In order to improve the prediction accuracy of the estimation of the parameters of the PHMM, we classify the training data using the weighted values of sequences in the PHMM then apply an algorithm for estimating parameters of the PHMM. The results show that the BMCMC method performs better than the Maximum Likelihood estimation. PMID:23865165
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values while keeping the arterial resistance constant. This last value was obtained for each subject from the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
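A minimal sketch of the Monte Carlo idea: generate a "measured" pressure from a three-element Windkessel with assumed true parameters, then draw random compliance and peripheral-resistance candidates while the proximal resistance stays fixed (mirroring the paper's fixed arterial resistance). The flow waveform, parameter values, and sampling ranges are all invented.

```python
import math, random

rng = random.Random(7)

# Hypothetical input flow: half-sine ejection over a 0.8 s beat (arbitrary units).
dt, T = 0.001, 0.8
times = [i * dt for i in range(int(T / dt))]
Q = [300.0 * math.sin(math.pi * t / 0.3) if t < 0.3 else 0.0 for t in times]

def windkessel_pressure(Rc, Rp, C):
    # Three-element Windkessel, forward-Euler integration:
    #   C * dPc/dt = Q - Pc/Rp,   P = Pc + Rc*Q
    Pc, out = 70.0, []
    for q in Q:
        out.append(Pc + Rc * q)
        Pc += dt * (q - Pc / Rp) / C
    return out

# "Measured" pressure generated from assumed true parameters.
Rc_true, Rp_true, C_true = 0.05, 1.0, 1.5
measured = windkessel_pressure(Rc_true, Rp_true, C_true)

def error(p):
    sim = windkessel_pressure(Rc_true, *p)  # Rc held fixed, as in the paper
    return sum((a - b) ** 2 for a, b in zip(sim, measured))

# Monte Carlo search over peripheral resistance and compliance.
candidates = [(rng.uniform(0.5, 2.0), rng.uniform(0.5, 3.0)) for _ in range(1000)]
Rp_hat, C_hat = min(candidates, key=error)
```

Fixing the resistance is what makes the remaining two-parameter search well posed, which is the uniqueness point the abstract stresses.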
NASA Astrophysics Data System (ADS)
Bowong, Samuel; Kurths, Jurgen
2010-10-01
We propose a method based on synchronization to identify the parameters and to estimate the underlying variables of an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach, with an effective guidance parameter in the update-rule design, using only real data. To validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Centre region of Cameroon are performed to estimate the parameters and variables. This study shows that tools from the synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
An adaptive-performance-seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system is discussed. This paper presents flight- and ground-test evaluations of the propulsion system parameter-estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation especially in trends between the performance seeking control estimated and measured thrust.
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.
1999-01-01
Radar data from vegetated land surfaces depend on many structural and compositional parameters describing the terrain. Because early, noninterferometric radar systems usually constituted an insufficient observation set from which to estimate parameters of the terrain, statistical regression techniques were used which incorporated some level of a priori knowledge or field measurements. With the advent of radar interferometry and polarimetric interferometry, potentially at multiple baselines, the observation set is now approaching that required to quantitatively estimate the parameters describing a vegetated land surface. Quantitative estimation entails formulating a physical scattering model relating the radar observations to the vegetation and surface parameters on which they depend. This paper describes the physics of candidate scattering models, and shows how the models determine the estimable parameter set. It also indicates the measurement accuracy of parameters such as vegetation height, height-to-base-of-live-crown, and surface topography with multibaseline polarimetric interferometry.
Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables
Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter
2010-10-15
We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.
Estimating cotton growth and developmental parameters through remote sensing
NASA Astrophysics Data System (ADS)
Reddy, K. Raja; Zhao, Duli; Kakani, Vijaya Gopal; Read, John J.; Sailaja, K.
2004-01-01
Three field experiments of nitrogen (N) rates, plant growth regulator (PIX) applications, and irrigation regimes were conducted in 2001 and 2002 to investigate relationships between hyperspectral reflectance (400-2500 nm) and cotton (Gossypium hirsutum L.) growth, physiology, and yield. Leaf and canopy spectral reflectance and leaf N concentration were measured weekly or biweekly during the growing season. Plant height, mainstem nodes, leaf area, and aboveground biomass were also determined by harvesting 1-m row plants in each plot at different growth stages. Cotton seed and lint yields were obtained by mechanical harvest. From canopy hyperspectral reflectance data, several reflectance indices, including simple ratio (SR) and normalized difference vegetation index (NDVI), were calculated. Linear relationships were found between leaf N concentration and a ratio of leaf reflectance at wavelengths 517 and 413 nm (R517/R413) (r2 = 0.70, n = 150). Nitrogen deficiency significantly increased leaf and canopy reflectance in the visible range. Plant height and mainstem nodes were related closely to a SR (R750/R550) according to either a logarithmic or linear function (r2 = 0.63~0.68). The relationships between LAI or biomass and canopy reflectance could be expressed in an exponential fashion with the SR or NDVI [(R935-R661)/(R935+R661)] (r2 = 0.67~0.78). Lint yields were highly correlated with the NDVI around the first flower stage (r2 = 0.64). Therefore, leaf reflectance ratio of R517/R413 may be used to estimate leaf N concentration. The NDVI around first flower stage may provide a useful tool to predict lint yield in cotton.
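The reflectance indices used above are simple band arithmetic; a sketch with invented reflectance values, using the band pairs named in the abstract:

```python
# Band-ratio indices from the study; the reflectance values are invented.
def ndvi(r_nir, r_red):
    # Normalized difference vegetation index, e.g. (R935 - R661)/(R935 + R661).
    return (r_nir - r_red) / (r_nir + r_red)

def simple_ratio(r750, r550):
    # Simple ratio (SR), e.g. R750/R550, related here to height and node counts.
    return r750 / r550

dense_canopy = ndvi(0.45, 0.05)   # strong NIR reflectance, low red: NDVI near 0.8
bare_soil = ndvi(0.25, 0.20)      # weak contrast: NDVI near 0.1
```

The same arithmetic applies per pixel or per spectrum; the study's regressions then relate these index values to leaf N, LAI, biomass, and yield.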
Bayesian estimation of regularization parameters for deformable surface models
Cunningham, G.S.; Lehovich, A.; Hanson, K.M.
1999-02-20
In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strength of the priors, or hyperparameters, is determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
Estimation of crop parameters using multi-temporal optical and radar polarimetric satellite data
NASA Astrophysics Data System (ADS)
Betbeder, Julie; Fieuzal, Remy; Philippets, Yannick; Ferro-Famil, Laurent; Baup, Frederic
2015-10-01
This paper is concerned with the estimation of wheat and rapeseed crop parameters (height, leaf area index and dry biomass) over their whole vegetation cycle, using satellite time series acquired in both the optical and microwave domains. Crop monitoring at a fine scale is an important environmental stake, as it provides essential information for combining increased production with sustainable management of agricultural landscapes. The aim of this paper is to compare the potential of optical and SAR parameters (backscattering coefficients and polarimetric parameters) for crop parameter estimation. Satellite (Formosat-2, Spot-4/5 and Radarsat-2) and ground data were acquired during the MCM'10 experiment conducted by the CESBIO laboratory in 2010. A vegetation index (NDVI) was derived from the optical images, while backscattering coefficients and polarimetric parameters were computed from the Radarsat-2 images. The results show the strong advantage of using SAR parameters (backscattering coefficients and polarimetric parameters) rather than an optical vegetation index for crop parameter estimation over the whole vegetation cycle. Polarimetric parameters do not improve wheat parameter estimation (e.g. the backscattering coefficient σ°VV remains the best parameter for wheat height estimation, r2 = 0.60), but show high potential for monitoring rapeseed height and dry biomass (Shannon entropy polarimetry, SEp, r2 = 0.70, and the Radar Vegetation Index, RVI, r2 = 0.80, respectively).
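The Radar Vegetation Index mentioned above is commonly computed from linear-scale backscatter coefficients as RVI = 8σ_HV/(σ_HH + σ_VV + 2σ_HV); a sketch with invented backscatter values (the abstract does not give the exact formula it uses, so this form is an assumption):

```python
# Radar Vegetation Index from linear-scale backscatter coefficients,
# RVI = 8*s_hv / (s_hh + s_vv + 2*s_hv); the sigma values are invented.
def rvi(s_hh, s_vv, s_hv):
    return 8.0 * s_hv / (s_hh + s_vv + 2.0 * s_hv)

vegetated = rvi(0.10, 0.09, 0.03)    # strong cross-pol scattering: RVI near 1
bare_field = rvi(0.20, 0.18, 0.005)  # weak cross-pol scattering: RVI near 0.1
```

High cross-polarized backscatter indicates volume scattering from the canopy, which is why RVI tracks biomass accumulation in crops such as rapeseed.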
ESTIMATION OF RELATIVISTIC ACCRETION DISK PARAMETERS FROM IRON LINE EMISSION
Pariev, V.; Bromley, B.; Miller, W.
2001-03-01
The observed iron Kα fluorescence lines in Seyfert 1 galaxies provide strong evidence for an accretion disk near a supermassive black hole as a source of the emission. Here we present an analysis of the geometrical and kinematic properties of the disk based on the extreme frequency shifts of a line profile as determined by measurable flux in both the red and blue wings. The edges of the line are insensitive to the distribution of the X-ray flux over the disk, and hence provide a robust alternative to profile fitting of disk parameters. Our approach yields new, strong bounds on the inclination angle of the disk and the location of the emitting region. We apply our method to interpret observational data from MCG-6-30-15 and find that the commonly assumed inclination of 30° for the accretion disk in MCG-6-30-15 is inconsistent with the position of the blue edge of the line at a 3σ level. A thick turbulent disk model or the presence of highly ionized iron may reconcile the bounds on inclination from the line edges with the full line profile fits based on simple, geometrically thin disk models. The bounds on the innermost radius of disk emission indicate that the black hole in MCG-6-30-15 is rotating faster than 30% of the theoretical maximum. When applied to data from NGC 4151, our method gives bounds on the inclination angle of the X-ray emitting inner disk of 50 ± 10°, consistent with the presence of an ionization cone grazing the disk as proposed by Pedlar et al. (1993). The frequency extrema analysis also provides limits on the innermost disk radius in another Seyfert 1 galaxy, NGC 3516, and is suggestive of a thick disk model.
Algebraic parameters identification of DC motors: methodology and analysis
NASA Astrophysics Data System (ADS)
Becedas, J.; Mamani, G.; Feliu, V.
2010-10-01
A fast, non-asymptotic, algebraic parameter identification method is applied to an uncertain DC motor to estimate the uncertain parameters: viscous friction coefficient and inertia. In this work, the methodology is developed and analysed, its convergence, a comparative study between the traditional recursive least square method and the algebraic identification method is carried out, and an analysis of the estimator in a noisy system is presented. Computer simulations were carried out to validate the suitability of the identification algorithm.
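The traditional recursive least squares baseline used in the comparison above can be sketched on a discretized first-order motor model; the coefficient values, gains, and noise-free measurements are illustrative, not the paper's setup.

```python
import random

# Discrete first-order motor model w[k+1] = a*w[k] + b*u[k]; the true
# coefficients (which absorb inertia and viscous friction) are invented.
a_true, b_true = 0.9, 0.5
rng = random.Random(3)

theta = [0.0, 0.0]                  # estimates of [a, b]
P = [[1000.0, 0.0], [0.0, 1000.0]]  # covariance of the estimates

w = 0.0
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)        # persistently exciting input
    w_next = a_true * w + b_true * u  # noise-free "measurement"
    phi = [w, u]                      # regressor
    # Gain K = P*phi / (1 + phi'*P*phi), then update theta and P.
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]
    err = w_next - (theta[0] * phi[0] + theta[1] * phi[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    P = [[P[i][j] - K[i] * Pphi[j] for j in range(2)] for i in range(2)]
    w = w_next
```

RLS converges asymptotically as data accumulate, which is the contrast the paper draws with its fast, non-asymptotic algebraic identification.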
NASA Astrophysics Data System (ADS)
Morton, D.; Bolton, W. R.; Endalamaw, A. M.; Young, J. M.; Hinzman, L. D.
2014-12-01
As part of a study on how vegetation water use and permafrost dynamics impact stream flow in the boreal forest discontinuous permafrost zone, a Bayesian modeling framework has been developed to assess the effect of parameter uncertainties in an integrated vegetation water use and simple, first-order, non-linear hydrological model. Composed of a front-end Bayes driver and a backend interactive hydrological model, the system is meant to facilitate rapid execution of seasonal simulations driven by hundreds to thousands of parameter variations to analyze the sensitivity of the system to a varying parameter space in order to derive more effective parameterizations for larger-scale simulations. The backend modeling component provides an Application Programming Interface (API) for introducing parameters in the form of constant or time-varying scalars or spatially distributed grids. In this work, we describe the basic structure of the flexible, object-oriented modeling system and test its performance against collected basin data from headwater catchments of varying permafrost extent and ecosystem structure (deciduous versus coniferous vegetation). We will also analyze model and sub-model (evaporation, transpiration, precipitation and streamflow) sensitivity to parameters through application of the system to two catchment basins of the Caribou-Poker Creeks Research Watershed (CPCRW) located in Interior Alaska. The C2 basin is a mostly permafrost-free, south facing catchment dominated by deciduous vegetation. The C3 basin is underlain by more than 50% permafrost and is dominated by coniferous vegetation. The ultimate goal of the modeling system is to improve parameterizations in mesoscale hydrologic models, and application of the HYPE system to the well-instrumented CPCRW provides a valuable opportunity for experimentation.
Synchronization-based approach for estimating all model parameters of chaotic systems
NASA Astrophysics Data System (ADS)
Konnur, Rahul
2003-02-01
The problem of dynamic estimation of all parameters of a model representing chaotic and hyperchaotic systems using information from a scalar measured output is solved. The variational calculus based method is robust in the presence of noise, enables online estimation of the parameters and is also able to rapidly track changes in operating parameters of the experimental system. The method is demonstrated using the Lorenz, Rossler chaos, and hyperchaos models. Its possible application in decoding communications using chaos is discussed.
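As a hedged illustration of the general idea (not the variational/synchronization method of the paper): the Lorenz parameters enter the vector field linearly, so they can be recovered from simulated state data by least squares on finite-difference derivatives.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Integrate the Lorenz system with classical RK4
dt = 1e-3
s = np.array([1.0, 1.0, 1.0])
traj = [s.copy()]
for _ in range(20000):
    k1 = lorenz(s); k2 = lorenz(s + dt / 2 * k1)
    k3 = lorenz(s + dt / 2 * k2); k4 = lorenz(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    traj.append(s.copy())
traj = np.array(traj)

ds = (traj[2:] - traj[:-2]) / (2 * dt)   # central-difference derivatives
x, y, z = traj[1:-1].T

# dx/dt = sigma*(y - x): regress ds[:,0] on (y - x)
sigma_hat = np.sum(ds[:, 0] * (y - x)) / np.sum((y - x) ** 2)
# dz/dt = x*y - beta*z: regress (ds[:,2] - x*y) on -z
beta_hat = -np.sum((ds[:, 2] - x * y) * z) / np.sum(z ** 2)
# sigma_hat and beta_hat recover 10 and 8/3 from the trajectory
```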
Use of timesat to estimate phenological parameters in Northwestern Patagonia
NASA Astrophysics Data System (ADS)
Oddi, Facundo; Minotti, Priscilla; Ghermandi, Luciana; Lasaponara, Rosa
2015-04-01
In a global change context, ecosystems are under increasing pressure, and ecological science plays a key role in monitoring and assessing natural resources. Effective resource management benefits from knowledge of ecosystem functioning built on a spatio-temporal perspective. Satellite imagery periodically captures the spectral response of the earth, and remote sensing has been widely used for classification and change detection, making it possible to evaluate intra- and inter-annual plant dynamics. Vegetation spectral indices (e.g., NDVI) are particularly suitable for studying spatio-temporal processes related to plant phenology, and dedicated remote sensing software, such as TIMESAT, has been developed to carry out time series analysis of spectral indices. We applied TIMESAT to 25 years of bi-monthly NDVI composites (240 images covering the period 1982-2006) from the NOAA-AVHRR sensor (8 x 8 km) to assess plant phenology over 900000 ha of shrubby grasslands in northwestern Patagonia, Argentina. The study area corresponds to a Mediterranean environment and is part of a gradient defined by a sharp west-east drop in the precipitation regime (600 mm to 280 mm). We fitted the NDVI time series to double logistic functions by least squares, evaluating three seasonality parameters: a) start of the growing season, b) growing season length, and c) the seasonal NDVI integral. According to the models fitted by TIMESAT, the growing season started on average in the second half of September (± 10 days), with the latest onsets in the east (drier areas). The average growing season length was 180 days (± 15 days), without a clear spatial trend. The seasonal NDVI integral showed a clear decreasing trend from west to east, following the precipitation gradient. This temporal and spatial information reveals important patterns of ecological interest, which can be of great importance for environmental monitoring. In this...
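The three seasonality parameters discussed above can be sketched on a synthetic double-logistic NDVI curve; the curve parameters and the 20%-of-amplitude threshold are illustrative assumptions, not fitted AVHRR values.

```python
import numpy as np

def double_logistic(t, base, amp, t_start, r1, t_end, r2):
    """TIMESAT-style double logistic: rise around t_start, decline around t_end."""
    return base + amp * (1 / (1 + np.exp(-r1 * (t - t_start)))
                         - 1 / (1 + np.exp(-r2 * (t - t_end))))

t = np.arange(0, 365)                                  # day of year
ndvi = double_logistic(t, 0.15, 0.55, 100, 0.08, 280, 0.08)

# Season start/end defined by crossing 20% of the seasonal amplitude
thresh = 0.15 + 0.2 * 0.55
above = ndvi > thresh
start = t[above][0]                                    # start of growing season
end = t[above][-1]
length = end - start                                   # growing season length
integral = np.sum(ndvi[above] - 0.15)                  # seasonal NDVI integral
# (daily samples, so the sum approximates the area above the base level)
```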
Technology Transfer Automated Retrieval System (TEKTRAN)
The vegetation opacity parameter is a key input needed to map surface soil moisture and other landsurface properties to brightness temperature. An integrated approach to estimating vegetation and soil moisture may provide a better soil moisture estimate than relying on opacity estimates from visible...
The Impact of Fallible Item Parameter Estimates on Latent Trait Recovery
ERIC Educational Resources Information Center
Cheng, Ying; Yuan, Ke-Hai
2010-01-01
In this paper we propose an upward correction to the standard error (SE) estimation of θ_ML, the maximum likelihood (ML) estimate of the latent trait in item response theory (IRT). More specifically, the upward correction is provided for the SE of θ_ML when item parameter estimates obtained from an independent pretest…
NASA Astrophysics Data System (ADS)
Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.
2015-08-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
A Monte Carlo Study of Marginal Maximum Likelihood Parameter Estimates for the Graded Model.
ERIC Educational Resources Information Center
Ankenmann, Robert D.; Stone, Clement A.
Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Fanselow, J. L.
1987-01-01
This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
Online Vegetation Parameter Estimation in Passive Microwave Regime for Soil Moisture Estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
Remote sensing observations in the passive microwave regime can be used to estimate surface soil moisture over land at global and regional scales. Soil moisture is important to applications such as weather forecasting, climate and agriculture. One approach to estimating soil moisture from remote sen...
NASA Astrophysics Data System (ADS)
Ngo, Viet V.; Gerke, Horst H.; Badorreck, Annika
2014-05-01
Estimability analysis has been proposed as a means to improve the quality of parameter optimization. For field data, wetting and drying processes may complicate the optimization of soil hydraulic parameters. The objectives of this study were to apply estimability analysis to improve the optimization of soil hydraulic parameters and to compare models with and without hysteresis. Soil water pressure head data from a field irrigation experiment were used. One-dimensional vertical water movement in variably-saturated soil was described with the Richards equation using the HYDRUS-1D code. The estimability of the unimodal van Genuchten - Mualem hydraulic model parameters, as well as of the hysteretic parameter model of Parker and Lenhard, was classified according to a sensitivity coefficient matrix. The matrix was obtained by sequentially calculating the effects of initial parameter variations on changes in the simulated pressure head values. Optimization was carried out by means of the Levenberg-Marquardt method as implemented in the HYDRUS-1D code. The parameters α, Ks, θs, and n of the nonhysteretic model were found to be sensitive, and the parameters θs and n were strongly correlated. When assuming hysteresis, estimability was highest for αw, decreased with soil depth for Ks and αd, and increased for θs and n. The hysteretic model could approximate the pressure heads in the soil when parameters from wetting and drying periods were considered separately as initial estimates. Inverse optimization could be carried out more efficiently with the most estimable parameters. Despite the weaknesses of the local optimization algorithm and the inflexibility of the unimodal van Genuchten model, the results suggest that estimability analysis can serve as guidance for better defining optimization scenarios and thereby improve the determination of soil hydraulic parameters.
Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.
Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash
2014-03-01
One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. PMID:24384542
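The Bayesian updating idea can be sketched with a minimal Metropolis-Hastings sampler on a toy first-order decay model rather than ASM1; the prior, noise level, and "true" rate constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
k_true = 0.35
obs = 20 * np.exp(-k_true * t) + rng.normal(0, 0.3, t.size)  # synthetic data

def log_post(k):
    """Log posterior: uniform prior on (0, 5] plus Gaussian likelihood."""
    if k <= 0 or k > 5:
        return -np.inf
    resid = obs - 20 * np.exp(-k * t)
    return -0.5 * np.sum(resid ** 2) / 0.3 ** 2      # noise sigma assumed known

samples, k = [], 1.0
lp = log_post(k)
for _ in range(20000):
    prop = k + rng.normal(0, 0.05)                   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis accept/reject
        k, lp = prop, lp_prop
    samples.append(k)

post = np.array(samples[5000:])                      # discard burn-in
# post.mean() concentrates near k_true; post.std() quantifies uncertainty
```

In a real ASM application the posterior would be joint over many stoichiometric and kinetic parameters, and the sample covariance would expose the parameter correlations the abstract mentions.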
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm
NASA Astrophysics Data System (ADS)
Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.
2016-03-01
A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional optimization problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO-ACO) is implemented to solve it. Firstly, the performance of the proposed PSO-ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO-ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms show that PSO-ACO is a very powerful tool for parameter estimation, with high accuracy and low deviations.
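A hedged sketch of the PSO component alone (without the ACO hybridization or the Lorenz fitness function), tested on a benchmark sphere function as in the first stage described above; all tuning constants are illustrative.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; returns the best point and its cost."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))             # particle positions
    v = np.zeros((n, dim))                       # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval                      # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Sphere benchmark: global minimum 0 at the origin
best, cost = pso(lambda p: np.sum(p ** 2), dim=3)
```

For the chaotic-system application, the fitness function would instead measure the mismatch between an observed Lorenz trajectory and one simulated with the candidate parameters.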
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a Bayesian approach based on stochastic differential equations, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameters and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. The comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice among such algorithms makes only a minor performance difference. PMID:26826353
Parameter Estimation for a crop model: separate and joint calibration of soil and plant parameters
NASA Astrophysics Data System (ADS)
Hildebrandt, A.; Jackisch, C.; Luis, S.
2008-12-01
Vegetation plays a major role in both the atmospheric and the terrestrial water cycle. A great deal of vegetation cover in the developed world consists of agriculturally used land (44% of the territory of the EU). Crop models have therefore become increasingly prominent for studying the impact of Global Change, both on economic welfare and on the influence of vegetation on climate and its feedbacks with hydrological processes. This implies that crop models properly reflect the soil water balance and vertical exchange with the atmosphere. Although crop models can be incorporated into Surface Vegetation Atmosphere Transfer schemes for that purpose, their main focus has traditionally not been on predicting water and energy fluxes, but yield. In this research we use data from two lysimeters in Brandis (Saxony, Germany), which have been planted with the crops of the surrounding farm, to test the capability of the crop model in SWAP. The lysimeters contain different natural soil cores, leading to substantially different yields. This experiment provides the opportunity to test whether the crop model is portable - that is, whether a calibrated crop can be moved between different locations. When using the default parameters for the respective environment, the model reproduces the difference in yield and LAI between the lysimeters neither quantitatively nor qualitatively. Separate calibration of soil and plant parameters performed poorly compared to joint calibration of plant and soil parameters. This suggests that the model is not portable, but needs to be calibrated for individual locations, based on measurements or expert knowledge.
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results rest on the assumption that the problems of estimating image texture and noise parameters are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used to describe image texture locally. A polynomial model describes how the signal-dependent noise variance depends on image intensity. Using the maximum likelihood approach, estimates of both the fBm-model and noise parameters are obtained. It is demonstrated that the Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (the image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator exploits these detected areas to improve the estimation accuracy of the signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao lower bound, or CRLB) of the noise parameters is derived, providing confidence intervals for these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
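The signal-dependent noise model itself can be illustrated with a minimal sketch: local patch variance regressed on local patch mean recovers an affine variance law var(I) = c0 + c1·I. The fBm texture model and the ML estimator of the paper are beyond this sketch; the coefficients and flat-patch data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
c0, c1 = 4.0, 0.5                             # "true" noise-law coefficients

means, varis = [], []
for level in np.linspace(10, 200, 40):        # 40 synthetic flat patches
    sigma = np.sqrt(c0 + c1 * level)          # signal-dependent noise std
    patch = level + rng.normal(0, sigma, (64, 64))
    means.append(patch.mean())
    varis.append(patch.var())

# Linear regression of local variance on local mean
X = np.column_stack([np.ones(40), means])
c0_hat, c1_hat = np.linalg.lstsq(X, np.array(varis), rcond=None)[0]
# (c0_hat, c1_hat) recover the assumed noise law
```

Real images require selecting sufficiently homogeneous patches first, which is precisely where the informative-map idea of the paper comes in.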
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Farr, Benjamin; Ochsner, Evan; Cho, Hee-Suk; Raymond, V.; Kim, Chunglee; Lee, Chang-Hwan
2014-05-01
Precessing black hole-neutron star (BH-NS) binaries produce a rich gravitational wave signal, encoding the binary's nature and inspiral kinematics. Using the lalinference_mcmc Markov chain Monte Carlo parameter estimation code, we use two fiducial examples to illustrate how the geometry and kinematics are encoded into the modulated gravitational wave signal, using coordinates well adapted to precession. Extending previous work, we demonstrate that the performance of detailed parameter estimation studies can often be estimated by "effective" studies: comparisons of a prototype signal with its nearest neighbors, adopting a fixed sky location and idealized two-detector network. Using a concrete example, we show that higher harmonics provide nonzero but small local improvement when estimating the parameters of precessing BH-NS binaries. We also show that higher harmonics can improve parameter estimation accuracy for precessing binaries by breaking leading-order discrete symmetries and thus ruling out approximately degenerate source orientations. Our work illustrates quantities gravitational wave measurements can provide, such as the orientation of a precessing short gamma ray burst progenitor relative to the line of sight. More broadly, "effective" estimates may provide a simple way to estimate trends in the performance of parameter estimation for generic precessing BH-NS binaries in next-generation detectors. For example, our results suggest that the orbital chirp rate, precession rate, and precession geometry are roughly independent observables, defining natural variables to organize correlations in the high-dimensional BH-NS binary parameter space.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data, and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is easily corrupted by a poor initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level data are a valuable source of information for inferring soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data helps to improve the parameter estimation.
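The EnKF parameter-estimation idea can be sketched via state augmentation on a toy scalar decay model rather than the Richards equation: the augmented state is (x, a), and assimilating observations of x alone gradually corrects the ensemble's estimate of the parameter a. The model, noise levels, and prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, a_true = 0.05, 0.8

# Generate noisy observations of x[k+1] = x[k] - dt*a*x[k]
x_true, obs = 5.0, []
for _ in range(100):
    x_true = x_true - dt * a_true * x_true
    obs.append(x_true + rng.normal(0, 0.02))

N = 200                                       # ensemble size
ens = np.stack([np.full(N, 5.0),              # state x (known initially)
                rng.uniform(0.1, 2.0, N)])    # parameter a (uncertain prior)
R = 0.02 ** 2                                 # observation error variance

for y in obs:
    ens[0] = ens[0] - dt * ens[1] * ens[0]    # propagate each member
    ens[1] += rng.normal(0, 1e-3, N)          # small jitter avoids collapse
    C = np.cov(ens)                           # 2x2 ensemble covariance
    K = C[:, 0] / (C[0, 0] + R)               # Kalman gain for observing x
    d = y + rng.normal(0, 0.02, N) - ens[0]   # perturbed-observation innovations
    ens = ens + np.outer(K, d)                # update both x and a
a_hat = ens[1].mean()
# a_hat converges toward a_true as observations are assimilated
```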
NASA Astrophysics Data System (ADS)
Zhu, Binqi; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao
2016-04-01
A computational method for suppressing clutter and generating clear microwave images of targets is proposed in this paper; it combines synthetic aperture radar (SAR) principles with a recursive method and waveform design theory, and is suitable for SAR in special applications. The nonlinear recursive model is introduced into the SAR operating principle, and the cubature Kalman filter algorithm is used to estimate target and clutter responses at each azimuth position based on their previous states, both of which are assumed to be Gaussian distributions. NP criterion-based optimal waveforms are designed repeatedly as the sensor flies along its azimuth path and are used as the transmitted signals. A clutter suppression filter is then designed and applied to suppress the clutter response while preserving most of the target response. Thus, with less disturbance from the clutter response, the SAR image can be generated with traditional azimuth matched filters. Our simulations show that the clutter suppression filter significantly reduces the clutter response, and the algorithm greatly improves the SINR of the SAR image for different clutter suppression filter parameters. As such, this algorithm may be preferable for imaging special targets when prior information on the target is available.
Motion parameter estimation of multiple ground moving targets in multi-static passive radar systems
NASA Astrophysics Data System (ADS)
Subedi, Saurav; Zhang, Yimin D.; Amin, Moeness G.; Himed, Braham
2014-12-01
Multi-static passive radar (MPR) systems typically use narrowband signals and operate under weak signal conditions, making it difficult to reliably estimate the motion parameters of ground moving targets. On the other hand, the availability of multiple spatially separated illuminators of opportunity provides a means to achieve multi-static diversity and overall signal enhancement. In this paper, we consider the problem of estimating motion parameters, including velocity and acceleration, of multiple closely located ground moving targets in a typical MPR platform with focus on weak signal conditions, where traditional time-frequency analysis-based methods become unreliable or infeasible. The underlying problem is reformulated as a sparse signal reconstruction problem in a discretized parameter search space. While the different bistatic links have distinct Doppler signatures, they share the same set of motion parameters of the ground moving targets. Therefore, such motion parameters act as a common sparse support, enabling the exploitation of group sparsity-based methods for robust motion parameter estimation. This provides a means of combining signal energy from all available illuminators of opportunity and, thereby, obtaining a reliable estimate even when each individual signal is weak. Because the maximum likelihood (ML) estimation of motion parameters involves a multi-dimensional search and its performance is sensitive to target position errors, we also propose a technique that decouples the target motion parameters, yielding a two-step process that sequentially estimates the acceleration and velocity vectors with a reduced dimensionality of the parameter search space. We compare the performance of the sequential method against the ML estimation with the consideration of imperfect knowledge of the initial target positions. The Cramér-Rao bound (CRB) of the underlying parameter estimation problem is derived for a general multiple-target scenario in an MPR system.
EEG and MEG source localization using recursively applied (RAP) MUSIC
Mosher, J.C.; Leahy, R.M.
1996-12-31
The multiple signal classification (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data; the algorithm then scans a single-dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach, which we refer to as RAP (Recursively APplied) MUSIC. This procedure automatically extracts the locations of the sources through a recursive use of subspace projections, using the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of "diverse polarization," are easily extracted using the associated principal vectors.
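The recursion described here can be sketched in a few lines: estimate the signal subspace from the data, scan candidate source topographies for the peak principal correlation, project the found topography out, and repeat. The lead fields, grid size, and noise level below are synthetic stand-ins, not a real EEG/MEG forward model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_grid = 16, 40

# Hypothetical normalized lead fields, one column per candidate source location
G = rng.standard_normal((n_sensors, n_grid))
G /= np.linalg.norm(G, axis=0)

# Two asynchronous sources at grid points 5 and 23
true_idx = [5, 23]
T = 300
S = rng.standard_normal((2, T))
data = G[:, true_idx] @ S + 0.05 * rng.standard_normal((n_sensors, T))

# Signal subspace: dominant left singular vectors of the data matrix
U, _, _ = np.linalg.svd(data, full_matrices=False)
Us = U[:, :2]

def principal_corr(g, A):
    # Largest canonical correlation between span{g} and span{A}
    Qa, _ = np.linalg.qr(A)
    return np.linalg.norm(Qa.T @ g) / np.linalg.norm(g)

# RAP-MUSIC: locate a peak, project its topography out, repeat
found = []
for _ in range(2):
    if found:
        B = G[:, found]
        P = np.eye(n_sensors) - B @ np.linalg.pinv(B)   # out-projector
    else:
        P = np.eye(n_sensors)
    corr = [principal_corr(P @ G[:, k], P @ Us)
            if np.linalg.norm(P @ G[:, k]) > 1e-6 else 0.0
            for k in range(n_grid)]
    found.append(int(np.argmax(corr)))
print(sorted(found))
```

The out-projection is what removes the need for the user to hunt for multiple local peaks by hand: each pass has a single global maximum at the next undiscovered source.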
About accuracy of the discrimination parameter estimation for the dual high-energy method
NASA Astrophysics Data System (ADS)
Osipov, S. P.; Chakhlov, S. V.; Osipov, O. S.; Shtein, A. M.; Strugovtsev, D. V.
2015-04-01
A set of mathematical formulas is given for estimating the accuracy of the discrimination parameters for two implementations of the dual high-energy method: by effective atomic number and by level lines. The hardware parameters that influence the accuracy of the discrimination parameters are identified, and recommendations for structuring the high-energy X-ray radiation pulses are formulated. To demonstrate the applicability of the proposed procedure, the statistical errors of the discrimination parameters were calculated for the cargo inspection system of Tomsk Polytechnic University based on the portable betatron MIB-9. A comparison of the experimental and theoretical estimates of the discrimination parameter errors confirmed the practical applicability of the algorithm for estimating discrimination parameter errors in the dual high-energy method.
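The kind of statistical-error estimate discussed here can be sketched with the textbook dual-energy discrimination parameter: the ratio of log-attenuations at the two energies, with its variance propagated from Poisson photon-counting statistics. These are generic illustrative formulas under assumed count levels, not the authors' expressions or the MIB-9 system parameters.

```python
import math

def discrimination_parameter(I0_low, I_low, I0_high, I_high):
    # Ratio of log-attenuations at the two energies; the ratio depends on
    # the effective atomic number of the material but not on its thickness
    A_low = math.log(I0_low / I_low)
    A_high = math.log(I0_high / I_high)
    return A_high / A_low

def discrimination_error(I0_low, I_low, I0_high, I_high):
    # Poisson counting statistics give var[ln(I0/I)] ~ 1/I + 1/I0;
    # propagate through the ratio Q = A_high / A_low
    A_low = math.log(I0_low / I_low)
    A_high = math.log(I0_high / I_high)
    Q = A_high / A_low
    var = ((1.0 / I_high + 1.0 / I0_high)
           + Q**2 * (1.0 / I_low + 1.0 / I0_low)) / A_low**2
    return math.sqrt(var)

# Assumed photon counts for an open beam (1e6) and behind the object
Q = discrimination_parameter(1e6, 2e5, 1e6, 3e5)
sigma = discrimination_error(1e6, 2e5, 1e6, 3e5)
print(Q, sigma)
```

The point of such a formula is the one the abstract makes: the achievable discrimination accuracy is set by the registered photon statistics, i.e. by the hardware and the structure of the radiation pulses.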
Test models for improving filtering with model errors through stochastic parameter estimation
Gershgorin, B.; Harlim, J.; Majda, A. J.
2010-01-01
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance, including the parameters, in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied, and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
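Both this abstract and the state-augmentation idea in the head abstract rest on the same mechanism: append the unknown parameters to the state vector and let the filter update them from the innovations. A minimal joint extended Kalman filter for a scalar forced linear model with an unknown multiplicative parameter sketches this; it uses the standard EKF linearization, not SPEKF's exact moment-propagation formulas, and all noise levels are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = 0.9                      # unknown model parameter to be estimated
T, obs_noise = 500, 0.5

# Simulate the "true" signal x[k] = a*x[k-1] + forcing + noise, observed in noise
x = np.zeros(T)
for k in range(1, T):
    x[k] = a_true * x[k-1] + 1.0 + 0.1 * rng.standard_normal()
y = x + obs_noise * rng.standard_normal(T)

# Extended Kalman filter on the augmented state z = [x, a]
z = np.array([0.0, 0.5])          # initial state and (wrong) parameter guess
P = np.diag([1.0, 1.0])
Q = np.diag([0.01, 1e-6])         # tiny parameter drift keeps the filter adaptive
R = obs_noise**2
H = np.array([[1.0, 0.0]])        # only x is observed

for k in range(1, T):
    # Predict through the augmented dynamics f(z) = [a*x + 1, a]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])   # Jacobian of f at the current z
    z = np.array([z[1] * z[0] + 1.0, z[1]])
    P = F @ P @ F.T + Q
    # Update with the observation y[k]
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y[k] - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z[1])   # estimated parameter, near a_true = 0.9
```

The cross term z[0] in the Jacobian is what makes the parameter observable: innovations in the observed state feed back into the parameter estimate, which is the mechanism both abstracts exploit (SPEKF replaces the linearization with exact mean and covariance formulas).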
NASA Technical Reports Server (NTRS)
Chin, M. M.; Goad, C. C.; Martin, T. V.
1972-01-01
A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.