Recursive parameter estimation of hydrologic models
NASA Astrophysics Data System (ADS)
Rajaram, Harihar; Georgakakos, Konstantine P.
1989-02-01
Proposed is a nonlinear filtering approach to recursive parameter estimation of conceptual watershed response models in state-space form. The conceptual model state is augmented by the vector of free parameters which are to be estimated from input-output data, and the extended Kalman filter is used to recursively estimate and predict the augmented state. The augmented model noise covariance is parameterized as the sum of two components: one due to errors in the augmented model input and another due to errors in the specification of augmented model constants that were estimated from other than input-output data (e.g., topographic and rating curve constants). These components depend on the sensitivity of the augmented model to input and uncertain constants. Such a novel parameterization allows for nonstationary model noise statistics that are consistent with the dynamics of watershed response as they are described by the conceptual watershed response model. Prior information regarding uncertainty in input and uncertain constants in the form of degree-of-belief estimates of hydrologists can be used directly within the proposed formulation. Even though model structure errors are not explicitly parameterized in the present formulation, such errors can be identified through the examination of the one-step ahead predicted normalized residuals and the parameter traces during convergence. The formulation is exemplified by the estimation of the parameters of a conceptual hydrologic model with data from the 2.1-km² watershed of Woods Lake located in the Adirondack Mountains of New York.
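The state-augmentation idea in this abstract can be sketched with a toy model. The scalar linear reservoir, noise levels, and gains below are illustrative assumptions, not the paper's conceptual watershed model: the unknown parameter is appended to the state and an extended Kalman filter estimates both jointly from input-output data.

```python
import numpy as np

# Hypothetical linear reservoir x[k+1] = a*x[k] + u[k] with the
# recession parameter `a` unknown. Augment the state as z = [x, a]
# and run an EKF on noisy observations y = x + v.
rng = np.random.default_rng(0)
a_true, n = 0.8, 200
u = rng.uniform(0.0, 1.0, n)              # input (e.g. rainfall), invented
x, ys = 0.0, []
for k in range(n):
    x = a_true * x + u[k]
    ys.append(x + rng.normal(0.0, 0.1))   # noisy outflow observation

z = np.array([0.0, 0.5])                  # augmented state [x, a], a starts wrong
P = np.diag([1.0, 1.0])
Q = np.diag([1e-2, 1e-4])                 # model noise (state, parameter)
R = 0.1 ** 2
H = np.array([[1.0, 0.0]])

for k in range(n):
    # predict: f(z) = [a*x + u, a]; Jacobian with respect to [x, a]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + u[k], z[1]])
    P = F @ P @ F.T + Q
    # measurement update
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (ys[k] - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(round(z[1], 2))                     # estimated parameter, near a_true = 0.8
```

With persistent input excitation the parameter component of the augmented state converges; the parameter's small process noise (here 1e-4) plays the role of the paper's nonstationary model-noise term.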
Recursive stochastic subspace identification for structural parameter estimation
NASA Astrophysics Data System (ADS)
Chang, C. C.; Li, Z.
2009-03-01
Identification of structural parameters under ambient conditions is an important research topic for structural health monitoring and damage identification. This problem is especially challenging in practice as these structural parameters could vary with time under severe excitation. Among the techniques developed for this problem, stochastic subspace identification (SSI) is a popular time-domain method. The SSI can perform parametric identification for systems with multiple outputs, which cannot be easily done using other time-domain methods. The SSI uses the orthogonal-triangular decomposition (RQ) and the singular value decomposition (SVD) to process measured data, which makes the algorithm efficient and reliable. The SSI, however, processes data in one batch and hence cannot be used in an on-line fashion. In this paper, a recursive SSI method is proposed for on-line tracking of time-varying modal parameters for a structure under ambient excitation. The Givens rotation technique, which can annihilate designated matrix elements, is used to update the RQ decomposition. Instead of updating the SVD, the projection approximation subspace tracking technique, which uses an unconstrained optimization technique to track the signal subspace, is employed. The proposed technique is demonstrated on the Phase I ASCE benchmark structure. Results show that the technique can identify and track the time-varying modal properties of the building under ambient conditions.
Recursive estimation of 3D motion and surface structure from local affine flow parameters.
Calway, Andrew
2005-04-01
A recursive structure from motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normals in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. This also estimates the camera focal length and, so, the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.
Auto-SOM: recursive parameter estimation for guidance of self-organizing feature maps.
Haese, K; Goodhill, G J
2001-03-01
An important technique for exploratory data analysis is to form a mapping from the high-dimensional data space to a low-dimensional representation space such that neighborhoods are preserved. A popular method for achieving this is Kohonen's self-organizing map (SOM) algorithm. However, in its original form, this requires the user to choose the values of several parameters heuristically to achieve good performance. Here we present the Auto-SOM, an algorithm that estimates the learning parameters during the training of SOMs automatically. The application of Auto-SOM provides the facility to avoid neighborhood violations up to a user-defined degree in either mapping direction. Auto-SOM consists of a Kalman filter implementation of the SOM coupled with a recursive parameter estimation method. The Kalman filter trains the neurons' weights with estimated learning coefficients so as to minimize the variance of the estimation error. The recursive parameter estimation method estimates the width of the neighborhood function by minimizing the prediction error variance of the Kalman filter. In addition, the "topographic function" is incorporated to measure neighborhood violations and prevent the map's converging to configurations with neighborhood violations. It is demonstrated that neighborhoods can be preserved in both mapping directions as desired for dimension-reducing applications. The development of neighborhood-preserving maps and their convergence behavior is demonstrated by three examples accounting for the basic applications of self-organizing feature maps.
The Usage of Recursive Parameter Estimation in Automated Reference Point Determination
NASA Astrophysics Data System (ADS)
Lossin, Torsten; Lösler, Michael; Neidhardt, Alexander; Lehmann, Rüdiger; Mähler, Swetlana
2014-12-01
The Geodetic Observatory Wettzell (GOW) is one of the core stations within the International Earth Rotation and Reference Systems Service (IERS). The research facility is operated by the Federal Agency for Cartography and Geodesy (Bundesamt für Kartographie und Geodäsie, BKG) and the Research Institute for Satellite Geodesy (Forschungseinrichtung Satellitengeodäsie, FESG) of the Technische Universität München (Technical University Munich, TUM). The observatory hosts several geodetic space techniques, including permanent receivers for the Global Navigation Satellite Systems (GNSS), optical telescopes for Satellite Laser Ranging (SLR), and radio telescopes for Very Long Baseline Interferometry (VLBI). To combine these techniques, the geodetic reference points of each technique, and therefore the relative geometries (local ties), must be known to a high level of accuracy. To enhance reliability, the Global Geodetic Observing System (GGOS) calls for continuous measurements and automated determination of the reference points. In 2013 the monitoring system HEIMDALL was installed at the GOW to derive the reference point of one of the TWIN radio telescopes in an automated way. Thirty single epochs were carried out from March to July 2013. The results of these epochs were combined by recursive parameter estimation. The advantage of this approach is the consideration of all former results and uncertainties. These combined results enable a reliable assessment to prove the stability of the reference point of the new radio telescope.
Bayesian recursive image estimation.
NASA Technical Reports Server (NTRS)
Nahi, N. E.; Assefi, T.
1972-01-01
Discussion of a statistical procedure for treatment of noise-affected images to recover unaffected images by recursive processing with noise background elimination. The feasibility of the application of a recursive linear Kalman filtering technique to image processing is demonstrated. The procedure is applicable to images which are characterized statistically by mean and correlation functions. A time invariant dynamic model is proposed to provide stationary statistics for the scanner output.
NASA Technical Reports Server (NTRS)
Choudhury, A. K.; Djalali, M.
1975-01-01
In the proposed recursive method, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation, but from certain other differential equations of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Initial computational experience showed that the method yields some computational savings and is less vulnerable to loss of positive definiteness of the covariance matrix.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
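The underlying recursive least squares recursion (without the paper's residual-autocorrelation correction, which is its actual contribution) can be sketched as follows; the regressors and noise level here are invented for illustration, not flight-test quantities.

```python
import numpy as np

# Minimal recursive least squares sketch: estimate coefficients of
# y = x @ theta + noise one sample at a time, carrying the inverse
# information matrix P and updating it without any batch inversion.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0, 0.5])   # hypothetical derivatives
theta = np.zeros(3)
P = np.eye(3) * 1e4                       # large P acts as a diffuse prior

for _ in range(500):
    x = rng.normal(size=3)                # regressors (e.g. alpha, q, delta_e)
    y = x @ theta_true + rng.normal(0.0, 0.05)
    Px = P @ x
    k = Px / (1.0 + x @ Px)               # RLS gain
    theta = theta + k * (y - x @ theta)   # innovation update
    P = P - np.outer(k, Px)               # covariance downdate

print(np.round(theta, 2))                 # approaches theta_true
```

The diagonal of P, scaled by the residual variance, gives the conventional (white-residual) uncertainty; the paper's point is that colored residuals make this optimistic unless an autocorrelation correction is applied.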
NASA Astrophysics Data System (ADS)
Duong, Van-Huan; Bastawrous, Hany Ayad; Lim, KaiChin; See, Khay Wai; Zhang, Peng; Dou, Shi Xue
2015-11-01
This paper deals with the trade-off between simplicity and accuracy of LiFePO4 battery state estimation in the electric vehicle (EV) battery management system (BMS). State of charge (SOC) and state of health (SOH) are normally obtained from estimating the open circuit voltage (OCV) and the internal resistance of the equivalent electrical circuit model of the battery, respectively. The difficulties of the parameter estimation arise from their complicated variations and different dynamics, which require sophisticated algorithms to simultaneously estimate multiple parameters. This, however, demands heavy computational resources. In this paper, we propose a novel technique which employs a simplified model and multiple adaptive forgetting factors recursive least-squares (MAFF-RLS) estimation to provide the capability to accurately capture the real-time variations and the different dynamics of the parameters whilst the simplicity in computation is still retained. The validity of the proposed method is verified through two standard driving cycles, namely the Urban Dynamometer Driving Schedule and the New European Driving Cycle. The proposed method yields experimental results that not only estimated the SOC with an absolute error of less than 2.8% but also characterized the battery model parameters accurately.
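A single-forgetting-factor RLS sketch of the tracking idea follows. The paper's MAFF-RLS assigns a separate adaptive forgetting factor per parameter; this single-factor version, with a hypothetical battery model, drift rates, and noise, only illustrates how discounting old data lets slowly varying parameters be tracked.

```python
import numpy as np

# Assumed toy model: terminal voltage v = ocv - r*i, with both the
# open-circuit voltage `ocv` and internal resistance `r` drifting
# slowly. Forgetting factor lam < 1 discounts old samples.
rng = np.random.default_rng(2)
lam = 0.98
theta = np.array([3.0, 0.05])             # estimate of [ocv, r], starts wrong
P = np.eye(2) * 1e3

for k in range(1000):
    ocv = 3.3 + 2e-4 * k                  # hypothetical slow drifts
    r = 0.02 + 1e-5 * k
    i = rng.uniform(0.5, 2.0)             # load current
    v = ocv - r * i + rng.normal(0.0, 1e-3)
    phi = np.array([1.0, -i])             # regressor: v = phi @ [ocv, r]
    Pphi = P @ phi
    g = Pphi / (lam + phi @ Pphi)
    theta = theta + g * (v - phi @ theta)
    P = (P - np.outer(g, Pphi)) / lam     # dividing by lam keeps P alive

print(round(theta[0], 2), round(theta[1], 3))  # near the final 3.5 and 0.03
```

The effective memory is roughly 1/(1-lam) samples, so lam trades tracking lag against noise sensitivity; MAFF-RLS tunes that trade per parameter.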
Recursive modular modelling methodology for lumped-parameter dynamic systems.
Orsino, Renato Maia Matarazzo
2017-08-01
This paper proposes a novel approach to the modelling of lumped-parameter dynamic systems, based on representing them by hierarchies of mathematical models of increasing complexity instead of a single (complex) model. Exploring the multilevel modularity that these systems typically exhibit, a general recursive modelling methodology is proposed, in order to conciliate the use of the already existing modelling techniques. The general algorithm is based on a fundamental theorem that states the conditions for computing projection operators recursively. Three procedures for these computations are discussed: orthonormalization, use of orthogonal complements and use of generalized inverses. The novel methodology is also applied for the development of a recursive algorithm based on the Udwadia-Kalaba equation, which proves to be identical to that of a Kalman filter for estimating the state of a static process, given a sequence of noiseless measurements representing the constraints that must be satisfied by the system.
Recursive Bayesian electromagnetic refractivity estimation from radar sea clutter
NASA Astrophysics Data System (ADS)
Vasudevan, Sathyanarayanan; Anderson, Richard H.; Kraut, Shawn; Gerstoft, Peter; Rogers, L. Ted; Krolik, Jeffrey L.
2007-04-01
Estimation of the range- and height-dependent index of refraction over the sea surface facilitates prediction of ducted microwave propagation loss. In this paper, refractivity estimation from radar clutter returns is performed using a Markov state space model for microwave propagation. Specifically, the parabolic approximation for numerical solution of the wave equation is used to formulate the refractivity from clutter (RFC) problem within a nonlinear recursive Bayesian state estimation framework. RFC under this nonlinear state space formulation is more efficient than global fitting of refractivity parameters when the total number of range-varying parameters exceeds the number of basis functions required to represent the height-dependent field at a given range. Moreover, the range-recursive nature of the estimator can be easily adapted to situations where the refractivity modeling changes at discrete ranges, such as at a shoreline. A fast range-recursive solution for obtaining range-varying refractivity is achieved by using sequential importance sampling extensions to state estimation techniques, namely, the forward and Viterbi algorithms. Simulation and real data results from radar clutter collected off Wallops Island, Virginia, are presented which demonstrate the ability of this method to produce propagation loss estimates that compare favorably with ground truth refractivity measurements.
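The range-recursive sequential importance sampling idea can be illustrated with a scalar toy state standing in for the parabolic-equation propagation model; every model constant below is an assumption for illustration, not a refractivity quantity.

```python
import numpy as np

# Toy forward (particle-filter) recursion over range steps: a scalar
# "refractivity parameter" random-walks in range and is observed
# through additive noise (a stand-in for clutter-derived data).
rng = np.random.default_rng(6)
T = 60
m_true = np.zeros(T)
for k in range(1, T):
    m_true[k] = m_true[k - 1] + rng.normal(0.0, 0.1)
obs = m_true + rng.normal(0.0, 0.3, T)

N = 500
parts = rng.normal(0.0, 1.0, N)           # particles
w = np.full(N, 1.0 / N)
est = []
for k in range(T):
    parts = parts + rng.normal(0.0, 0.1, N)            # propagate in range
    w = w * np.exp(-0.5 * ((obs[k] - parts) / 0.3) ** 2)  # reweight
    w /= w.sum()
    est.append(w @ parts)
    if 1.0 / (w ** 2).sum() < N / 2:                   # resample if degenerate
        idx = rng.choice(N, N, p=w)
        parts, w = parts[idx], np.full(N, 1.0 / N)

rmse = np.sqrt(np.mean((np.array(est) - m_true) ** 2))
print(rmse < 0.3)                         # filter beats raw observation noise
```

The recursion processes one range step at a time, which is what lets the modeling change at discrete ranges (e.g. at a shoreline) without refitting globally.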
Recursive estimation of prior probabilities using the mixture approach
NASA Technical Reports Server (NTRS)
Kazakos, D.
1974-01-01
The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that for very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, it is possible for the recursive algorithm to achieve the same performance as the maximum likelihood one. For M > 2, slightly inferior performance is the price for having a recursive algorithm. However, the loss is computable and tolerable.
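A minimal sketch of a recursive proportion estimator in the spirit of this abstract (the paper's exact algorithm and step-size optimization differ; the two-component Gaussian mixture here is illustrative): after each observation, the prior estimate moves toward the posterior class probabilities with a 1/n step size.

```python
import numpy as np

# Known component densities N(mu_k, 1); unknown mixing proportions.
rng = np.random.default_rng(3)
q_true = np.array([0.7, 0.3])             # true proportions (to recover)
means = np.array([0.0, 3.0])              # assumed known components

def densities(x):
    return np.exp(-0.5 * (x - means) ** 2) / np.sqrt(2 * np.pi)

q = np.array([0.5, 0.5])                  # initial guess
for n in range(1, 5001):
    comp = rng.choice(2, p=q_true)        # draw one observation
    x = rng.normal(means[comp], 1.0)
    post = q * densities(x)               # posterior class probabilities
    post /= post.sum()
    q = q + (post - q) / n                # recursive 1/n update

print(np.round(q, 2))                     # approaches [0.7, 0.3]
```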
Comparison of recursive estimation techniques for position tracking radioactive sources
Muske, K.; Howse, J.
2000-09-01
This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.
Recursive bias estimation for high dimensional smoothers
Hengartner, Nicolas W; Matzner-lober, Eric; Cornillon, Pierre - Andre
2008-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of it obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
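The iterative bias-correction scheme can be sketched with a kernel smoother standing in for a generic smoother: over-smooth once, then repeatedly smooth the residuals and add the result back (this is the L2-Boosting connection). Bandwidth, noise level, and iteration count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, n)

# Nadaraya-Watson smoother matrix with a deliberately large bandwidth:
# small variance, substantial bias.
h = 0.2
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)      # row-normalized kernel weights

fit = S @ y                               # over-smoothed initial estimator
for _ in range(20):
    fit = fit + S @ (y - fit)             # smooth residuals, correct the bias

truth = np.sin(2 * np.pi * x)
err0 = np.mean((S @ y - truth) ** 2)      # error of initial estimator
err = np.mean((fit - truth) ** 2)         # error after bias correction
print(err < err0)                         # correction lowers the error
```

Each iteration multiplies the remaining bias by (I - S), so bias decays geometrically while the variance grows only slowly, which is why a deliberately large bandwidth is safe here.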
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for the application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
Vision-based recursive estimation of rotorcraft obstacle locations
NASA Technical Reports Server (NTRS)
Leblanc, D. J.; Mcclamroch, N. H.
1992-01-01
The authors address vision-based passive ranging during nap-of-the-earth (NOE) rotorcraft flight. They consider the problem of estimating the relative location of identifiable features on nearby obstacles, assuming a sequence of noisy camera images and imperfect measurements of the camera's translation and rotation. An iterated extended Kalman filter is used to provide recursive range estimation. The correspondence problem is simplified by predicting and tracking each feature's image within the Kalman filter framework. Simulation results are presented which show convergent estimates and generally successful feature point tracking. Estimation performance degrades for features near the optical axis and for accelerating motions. Image tracking is also sensitive to angular rate.
A Precision Recursive Estimate for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B.
1980-01-01
A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite to satellite tracking types as well as satellite altimetry. It was tested on simulated data which contained significant modeling errors and the results clearly demonstrate the superiority of the program compared to batch estimation.
Recursive Estimation for the Tracking of Radioactive Sources
Howse, J.W.; Muske, K.R.; Ticknor, L.O.
1999-02-01
This paper describes a recursive estimation algorithm used for tracking the physical location of radioactive sources in real-time as they are moved around in a facility. The algorithm is a nonlinear least squares estimation that minimizes the change in the source location and the deviation between measurements and model predictions simultaneously. The measurements used to estimate position consist of four count rates reported by four different gamma ray detectors. There is an uncertainty in the source location due to the variance of the detected count rate. This work represents part of a suite of tools which will partially automate security and safety assessments, allow some assessments to be done remotely, and provide additional sensor modalities with which to make assessments.
Recursive estimation for the tracking of radioactive sources
Howse, J.W.; Ticknor, L.O.; Muske, K.R.
1998-12-31
This paper describes a recursive estimation algorithm used for tracking the physical location of radioactive sources in real-time as they are moved around in a facility. The algorithm is related to a nonlinear least squares estimation that minimizes the change in the source location and the deviation between measurements and model predictions simultaneously. The measurements used to estimate position consist of four count rates reported by four different gamma ray detectors. There is an uncertainty in the source location due to the large variance of the detected count rate. This work represents part of a suite of tools which will partially automate security and safety assessments, allow some assessments to be done remotely, and provide additional sensor modalities with which to make assessments.
Recursive least squares approach to calculate motion parameters for a moving camera
NASA Astrophysics Data System (ADS)
Chang, Samuel H.; Fuller, Joseph; Farsaie, Ali; Elkins, Les
2003-10-01
The increase in quality and the decrease in price of digital camera equipment have led to growing interest in reconstructing 3-dimensional objects from sequences of 2-dimensional images. The accuracy of the models obtained depends on two sets of parameter estimates. The first is the set of lens parameters - focal length, principal point, and distortion parameters. The second is the set of motion parameters that allows the comparison of a moving camera's desired location to a theoretical location. In this paper, we address the latter problem, i.e. the estimation of the set of 3-D motion parameters from data obtained with a moving camera. We propose a method that uses Recursive Least Squares for camera motion parameter estimation with observation noise. We accomplish this by calculation of hidden information through camera projection and minimization of the estimation error. We then show how a filter based on the motion parameter estimates may be designed to correct for the errors in the camera motion. The validity of the approach is illustrated by the presentation of experimental results obtained using the methods described in the paper.
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practical, interesting insights of widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.
NASA Astrophysics Data System (ADS)
Ni, Zhiyu; Mu, Ruinan; Xun, Guangbin; Wu, Zhigang
2016-01-01
The rotation of a spacecraft's flexible appendage may cause changes in modal parameters. For this time-varying system, the computational cost of the frequently used singular value decomposition (SVD) identification method is high. Some control problems, such as self-adaptive control, need the latest modal parameters to update the controller parameters in time. In this paper, the projection approximation subspace tracking (PAST) recursive algorithm is applied as an alternative method to identify the time-varying modal parameters. This method avoids the SVD by signal subspace projection and improves the computational efficiency. To verify the ability of this recursive algorithm in spacecraft modal parameter identification, a spacecraft model with a rapidly rotating appendage, the Soil Moisture Active/Passive (SMAP) satellite, is established, and the time-varying modal parameters of the satellite are identified recursively by designing the input and output signals. The results illustrate that this recursive algorithm can obtain the modal parameters at high signal-to-noise ratio (SNR) and that it has better computational efficiency than the SVD method. Moreover, to improve the identification precision of this recursive algorithm at low SNR, wavelet de-noising technology is used to decrease the effect of noise.
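A sketch of the PAST recursion (Yang's projection approximation subspace tracking) on a toy stationary signal follows; the dimensions, forgetting factor, and noise level are assumptions, and no spacecraft model is involved. The point is that the dominant subspace is tracked with rank-one updates instead of a full SVD.

```python
import numpy as np

rng = np.random.default_rng(5)
d, r, beta = 6, 2, 0.99                   # data dim, subspace dim, forgetting
A = np.linalg.qr(rng.normal(size=(d, r)))[0]   # true 2-D signal subspace

W = np.eye(d)[:, :r]                      # subspace estimate
P = np.eye(r)                             # inverse correlation of y = W'x
for _ in range(2000):
    x = A @ rng.normal(size=r) * 3.0 + rng.normal(size=d) * 0.1
    y = W.T @ x                           # project onto current estimate
    h = P @ y
    g = h / (beta + y @ h)                # RLS-style gain
    P = (P - np.outer(g, h)) / beta
    W = W + np.outer(x - W @ y, g)        # rank-one subspace update

# cosines of the principal angles between estimated and true subspaces
s = np.linalg.svd(np.linalg.qr(W)[0].T @ A, compute_uv=False)
print(np.all(s > 0.99))                   # True when the subspace is tracked
```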
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2009-12-01
The purpose of this study is to derive quantitative assessment indicators of the human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, with their neck, hip, and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-opened posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP and KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD, and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
Recursive starlight and bias estimation for high-contrast imaging with an extended Kalman filter
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.
2016-01-01
For imaging faint exoplanets and disks, a coronagraph-equipped observatory needs focal plane wavefront correction to recover high contrast. The most efficient correction methods iteratively estimate the stellar electric field and suppress it with active optics. The estimation requires several images from the science camera per iteration. To maximize the science yield, it is desirable both to have fast wavefront correction and to utilize all the correction images for science target detection. Exoplanets and disks are incoherent with their stars, so a nonlinear estimator is required to estimate both the incoherent intensity and the stellar electric field. Such techniques assume a high level of stability found only on space-based observatories and possibly ground-based telescopes with extreme adaptive optics. In this paper, we implement a nonlinear estimator, the iterated extended Kalman filter (IEKF), to enable fast wavefront correction and a recursive, nearly-optimal estimate of the incoherent light. In Princeton's High Contrast Imaging Laboratory, we demonstrate that the IEKF allows wavefront correction at least as fast as with a Kalman filter and provides the most accurate detection of a faint companion. The nonlinear IEKF formalism allows us to pursue other strategies such as parameter estimation to improve wavefront correction.
Recursive phase estimation with a spatial radar carrier
NASA Astrophysics Data System (ADS)
Garcia-Marquez, Jorge; Servin Guirado, Manuel; Paez, Gonzalo; Malacara-Hernandez, Daniel
1999-08-01
An interferogram can be demodulated to find the wavefront shape if a radial carrier is introduced. The phase determination is made in the space domain, but the low-pass filter characteristics must be properly chosen. One disadvantage of this method is the possible removal of some frequencies from the central lobe, resulting in a misinterpretation of the true phase. Nevertheless, it is possible to isolate the central order by using a recursive method when a radial carrier reference is used. An example of a phase recovered from a simulated interferogram is shown.
Recursive bias estimation for high dimensional regression smoothers
Hengartner, Nicolas W; Cornillon, Pierre-Andre; Matzner-Lober, Eric
2009-01-01
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to a biased smoother. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of that bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to the Sobolev space H(ν) with ν > d/2. We apply our method to simulated and real data and show that it compares favorably with existing procedures.
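The residual-smoothing iteration can be sketched with a simple Nadaraya-Watson smoother as a generic stand-in; the paper's thin plate spline smoother and stopping rules are not reproduced:

```python
import numpy as np

def kernel_smooth(x, y, bandwidth):
    # Nadaraya-Watson smoother with a Gaussian kernel; a large
    # bandwidth deliberately over-smooths the data
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

def bias_corrected_smooth(x, y, bandwidth, n_iter):
    # L2-Boosting style iteration: repeatedly smooth the residuals
    # and add the result back to correct the bias of the initial fit
    fit = kernel_smooth(x, y, bandwidth)
    for _ in range(n_iter):
        fit = fit + kernel_smooth(x, y - fit, bandwidth)
    return fit

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                                  # noise-free target for clarity
fit0 = bias_corrected_smooth(x, y, bandwidth=0.8, n_iter=0)
fit5 = bias_corrected_smooth(x, y, bandwidth=0.8, n_iter=5)
```

Because the smoother's eigenvalues lie in (0, 1], each pass shrinks the remaining bias, so the iterated fit tracks the target more closely than the initial over-smoothed one.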
Raymond L. Czaplewski
2010-01-01
Numerous government surveys of natural resources use Post-Stratification to improve statistical efficiency, where strata are defined by full-coverage, remotely sensed data and geopolitical boundaries. Recursive Restriction Estimation, which may be considered a special case of the static Kalman filter, is an attractive alternative. It decomposes a complex estimation...
The recursive maximum likelihood proportion estimator: User's guide and test results
NASA Technical Reports Server (NTRS)
Vanrooy, D. L.
1976-01-01
Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.
Attitude estimation of earth orbiting satellites by decomposed linear recursive filters
NASA Technical Reports Server (NTRS)
Kou, S. R.
1975-01-01
Attitude estimation of earth orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noise was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time and discrete-time systems are derived. Using these accurate estimates of spacecraft attitude, a state-variable feedback controller may be designed to satisfy stringent system performance requirements.
Recursive camera-motion estimation with the trifocal tensor.
Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen; Or, Siu Hang
2006-10-01
In this paper, an innovative extended Kalman filter (EKF) algorithm for pose tracking using the trifocal tensor is proposed. In the EKF, a constant-velocity motion model is used as the dynamic system, and the trifocal-tensor constraint is incorporated into the measurement model. The proposed method has the advantages of those structure-and-motion-based approaches in that the pose sequence can be computed with no prior information on the scene structure. It also has the strengths of those model-based algorithms in which no updating of the three-dimensional (3-D) structure is necessary in the computation. This results in a stable, accurate, and efficient algorithm. Experimental results show that the proposed approach outperformed other existing EKFs that tackle the same problem. An extension to the pose-tracking algorithm has been made to demonstrate the application of the trifocal constraint to fast recursive 3-D structure recovery.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
NASA Astrophysics Data System (ADS)
Yadmellat, Peyman; Nikravesh, S. Kamaleddin Yadavar
2011-01-01
In this paper, a recursive delayed output-feedback control strategy is considered for stabilizing unstable periodic orbits of unknown nonlinear chaotic systems. The unknown nonlinearity is directly estimated by a linear-in-parameter neural network, which is then used in an observer structure. An on-line modified back-propagation algorithm with e-modification is used to update the weights of the network. The global uniform ultimate boundedness of the overall closed-loop system response is analytically ensured using the Razumikhin lemma. To verify the effectiveness of the proposed observer-based controller, a set of simulations is performed on a Rossler system in comparison with several previous methods.
Parameter estimating state reconstruction
NASA Technical Reports Server (NTRS)
George, E. B.
1976-01-01
Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
Prior estimation of motion using recursive perceptron with sEMG: a case of wrist angle.
Kuroda, Yoshihiro; Tanaka, Takeshi; Imura, Masataka; Oshiro, Osamu
2012-01-01
Muscle activity is followed by myoelectric potentials. Prior estimation of motion from surface electromyography can be utilized to assist physically impaired people as well as surgeons. In this paper, we propose a real-time method for the prior estimation of motion from surface electromyography, applied here to the wrist angle. The method is based on recursive processing of a multi-layer perceptron, which is trained quickly. A single-layer perceptron calculates quasi-tensional muscle forces from surface electromyography. A three-layer perceptron calculates the wrist's change in angle. In order to estimate a variety of motions properly, the perceptron was designed to estimate motion over a short time period, e.g. 1 ms. Recursive processing enables the method to estimate motion over the target time period, e.g. 50 ms. The results of the experiments showed statistically significant precedence of the estimated angle over the measured one.
Parameter adaptive estimation of random processes
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Vanlandingham, H. F.
1975-01-01
This paper is concerned with the parameter adaptive least squares estimation of random processes. The main result is a general representation theorem for the conditional expectation of a random variable on a product probability space. Using this theorem along with the general likelihood ratio expression, the least squares estimate of the process is found in terms of the parameter conditioned estimates. The stochastic differential for the a posteriori probability and the stochastic differential equation for the a posteriori density are found by using simple stochastic calculus on the representations obtained. The results are specialized to the case when the parameter has a discrete distribution. The results can be used to construct an implementable recursive estimator for certain types of nonlinear filtering problems. This is illustrated by some simple examples.
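For the discrete-parameter case, the parameter-conditioned structure amounts to a bank of filters whose estimates are mixed by recursively updated posterior probabilities. A minimal scalar sketch under illustrative dynamics and noise values (not the paper's general representation):

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, q, r = 0.9, 0.1, 0.1           # true dynamics and noise variances
candidates = [0.5, 0.9]                # discrete prior support for 'a'

# Simulate x[k+1] = a*x[k] + w,  y[k] = x[k] + v
x, ys = 1.0, []
for _ in range(300):
    x = a_true * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# One scalar Kalman filter per candidate parameter value
m = np.zeros(len(candidates))          # parameter-conditioned means
P = np.ones(len(candidates))           # parameter-conditioned variances
logp = np.zeros(len(candidates))       # log posterior over candidates
for y in ys:
    for i, a in enumerate(candidates):
        m[i], P[i] = a * m[i], a * a * P[i] + q         # predict
        S = P[i] + r                                    # innovation variance
        e = y - m[i]                                    # innovation
        logp[i] += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = P[i] / S                                    # update
        m[i] += K * e
        P[i] *= (1.0 - K)
post = np.exp(logp - logp.max())
post /= post.sum()
x_hat = float(post @ m)                # parameter-marginalized estimate
```

The posterior weights concentrate on the candidate matching the true dynamics, and the combined estimate is the parameter-conditioned mixture described above.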
Recursive identification and tracking of parameters for linear and nonlinear multivariable systems
NASA Technical Reports Server (NTRS)
Sidar, M.
1975-01-01
The problem of identifying constant and variable parameters in multi-input, multi-output, linear and nonlinear systems is considered, using the maximum likelihood approach. An iterative algorithm, leading to recursive identification and tracking of the unknown parameters and the noise covariance matrix, is developed. Agile tracking and accurate, unbiased parameter identification are obtained. Necessary conditions for a globally, asymptotically stable identification process are provided; the conditions proved to be useful and efficient. Among the different cases studied, the stability derivatives of an aircraft were identified, and some of the results are shown as examples.
Lee, W.Y.; Park, C.; Kelly, G.E.
1996-11-01
A scheme for detecting faults in an air-handling unit using residual and parameter identification methods is presented. Faults can be detected by comparing the normal or expected operating condition data with the abnormal, measured data using residuals. Faults can also be detected by examining unmeasurable parameter changes in a model of a controlled system using a system parameter identification technique. In this study, autoregressive moving average with exogenous input (ARMAX) and autoregressive with exogenous input (ARX) models with both single-input/single-output (SISO) and multi-input/single-output (MISO) structures are examined. Model parameters are determined using the Kalman filter recursive identification method. This approach is tested using experimental data from a laboratory's variable-air-volume (VAV) air-handling unit operated with and without faults.
Recursive Estimation of the Stein Center of SPD Matrices & its Applications.
Salehian, Hesamoddin; Cheng, Guang; Vemuri, Baba C; Ho, Jeffrey
2013-12-01
Symmetric positive-definite (SPD) matrices are ubiquitous in Computer Vision, Machine Learning and Medical Image Analysis. Finding the center/average of a population of such matrices is a common theme in many algorithms such as clustering, segmentation, principal geodesic analysis, etc. The center of a population of such matrices can be defined using a variety of distance/divergence measures as the minimizer of the sum of squared distances/divergences from the unknown center to the members of the population. It is well known that the computation of the Karcher mean for the space of SPD matrices, which is a negatively-curved Riemannian manifold, is computationally expensive. Recently, the LogDet divergence-based center was shown to be a computationally attractive alternative. However, the LogDet-based mean of more than two matrices cannot be computed in closed form, which makes it computationally less attractive for large populations. In this paper we present a novel recursive estimator for the center based on the Stein distance (the square root of the LogDet divergence) that is significantly faster than the batch mode computation of this center. The key theoretical contribution is a closed-form solution for the weighted Stein center of two SPD matrices, which is used in the recursive computation of the Stein center for a population of SPD matrices. Additionally, we show experimental evidence of the convergence of our recursive Stein center estimator to the batch mode Stein center. We present applications of our recursive estimator to K-means clustering and image indexing depicting significant time gains over corresponding algorithms that use the batch mode computations. For the latter application, we develop novel hashing functions using the Stein distance and apply them to publicly available data sets; experimental results show favorable comparisons to competing methods.
Phenological Parameters Estimation Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.
2010-01-01
The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or an equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.
NASA Astrophysics Data System (ADS)
El-Hawary, Ferial
1989-09-01
This paper treats the problem of source dynamic motion evaluation in underwater applications using recursive weighted least squares estimation. The issue of compensating for underwater motion effects arises in a number of areas of current interest such as control and operations of autonomous remotely operated vehicles, underwater seismic exploration, and buoy wave data analysis. Earlier treatments of the problem relied on frequency response methods and Kalman filtering. The present paper discusses the compensation problem using an alternative discrete model of the process and proposes use of the recursive weighted least squares algorithm for its solution. The algorithm is simpler than Kalman filtering in terms of the required knowledge of noise statistics and provides an attractive alternative to it. Emphasis is given to practical implementation using parallel processing and systolic array methodologies.
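The recursive weighted least squares update can be sketched in its classical exponentially-weighted form; the forgetting factor and the line-fit example are illustrative, not the paper's motion model:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive weighted least squares step with forgetting factor
    'lam': no matrix inversion and no explicit noise statistics needed."""
    Pphi = P @ phi
    K = Pphi / (lam + phi @ Pphi)           # gain vector
    theta = theta + K * (y - phi @ theta)   # correct with prediction error
    P = (P - np.outer(K, Pphi)) / lam       # information update
    return theta, P

rng = np.random.default_rng(1)
theta, P = np.zeros(2), 1000.0 * np.eye(2)  # large P encodes a vague prior
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    phi = np.array([1.0, x])                # regressor for y = 2 + 3x
    y = 2.0 + 3.0 * x + rng.normal(0.0, 0.05)
    theta, P = rls_update(theta, P, phi, y)
```

Each step costs O(n^2) for n parameters, which is what makes the method attractive for the parallel/systolic implementations mentioned above.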
NASA Astrophysics Data System (ADS)
Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong
2016-07-01
This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity, described by statistical means and covering several classes of well-studied nonlinearities as special cases, is taken into consideration. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The purpose of the addressed state estimation problem is to design an estimator that minimizes an upper bound on the estimation error covariance at each sampling instant. Such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.
Parameter estimation of hydrologic models using data assimilation
NASA Astrophysics Data System (ADS)
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
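The recursive Bayesian idea underlying BaRE-style methods, reweighting candidate parameter values by the likelihood of each new observation as it arrives, can be sketched on a simple grid; the hydrologic models and the localized bound-updating scheme are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_grid = np.linspace(0.0, 1.0, 101)    # candidate parameter values
log_post = np.zeros_like(theta_grid)       # uniform prior (log scale)

theta_true, sigma = 0.63, 0.2
for _ in range(200):                       # assimilate data one at a time
    y = theta_true + rng.normal(0.0, sigma)
    # recursive Bayes: log posterior += log likelihood of the new datum
    log_post += -0.5 * ((y - theta_grid) / sigma) ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum()
theta_map = float(theta_grid[np.argmax(post)])
```

The posterior sharpens around the true value as observations accumulate; LoBaRE additionally narrows the sampling region around the high-posterior candidates instead of keeping a fixed grid.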
Recursive Parameter Identification for Estimating and Displaying Maneuvering Vessel Path
2003-12-01
display system (ECDIS) capabilities on naval vessels in an effort to eliminate paper charts and reduce bridge team manpower requirements. Due to unique... [Figure: Navigation Sensor System Interface (NAVSSI) Diagram] Although a major improvement over paper charting, NAVSSI also serves as a technology inroad for implementing... identify the dynamic model based on control inputs and observed response. The NAVSSI system is well suited for this function because it already serves
NASA Astrophysics Data System (ADS)
Li, Lei; Yang, Kecheng; Li, Wei; Wang, Wanyan; Guo, Wenping; Xia, Min
2016-07-01
Conventional regularization methods have been widely used for estimating particle size distribution (PSD) in single-angle dynamic light scattering, but they cannot be used directly in multiangle dynamic light scattering (MDLS) measurements for lack of accurate angular weighting coefficients, which greatly affects the PSD determination; moreover, none of the regularization methods perform well for both unimodal and multimodal distributions. In this paper, we propose a recursive regularization method, the Recursion Nonnegative Tikhonov-Phillips-Twomey (RNNT-PT) algorithm, for estimating the weighting coefficients and PSD from MDLS data. This is a self-adaptive algorithm which distinguishes characteristics of PSDs and chooses the optimal inversion method from the Nonnegative Tikhonov (NNT) and Nonnegative Phillips-Twomey (NNPT) regularization algorithms efficiently and automatically. In simulations, the proposed algorithm was able to estimate the PSDs more accurately than the classical regularization methods, performed stably against random noise, and was adaptable to both unimodal and multimodal distributions. Furthermore, we found that a six-angle analysis in the 30-130° range is an optimal angle set for both unimodal and multimodal PSDs.
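A nonnegative Tikhonov (NNT) inversion of the kind the algorithm selects from can be sketched with projected gradient descent; this is a generic solver sketch under a synthetic kernel, not the RNNT-PT weighting-coefficient recursion:

```python
import numpy as np

def nnt_solve(A, b, lam=1e-3, n_iter=20000):
    """Minimize ||A x - b||^2 + lam ||x||^2 subject to x >= 0
    by projected gradient descent with a fixed step 1/L."""
    L = np.linalg.norm(A.T @ A, 2) + lam       # bound on gradient Lipschitz const.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam * x     # gradient of the smooth cost
        x = np.maximum(0.0, x - grad / L)      # gradient step + projection
    return x

# Small synthetic inversion: recover a nonnegative distribution
rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(30, 10))       # stand-in scattering kernel
x_true = np.maximum(0.0, rng.normal(0.5, 0.5, size=10))
b = A @ x_true
x_hat = nnt_solve(A, b)
```

The nonnegativity projection is what distinguishes NNT from plain Tikhonov regularization; a Phillips-Twomey variant would replace the lam*x penalty term with a second-difference smoothness penalty.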
Precision cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Fendt, William Ashton, Jr.
2009-09-01
methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
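The basic scale-up calculation that these refinements build on can be sketched as follows; this is the textbook NSUM estimator, not the authors' new estimator or recursive trimming procedure, and all numbers are illustrative:

```python
import numpy as np

def nsum_estimate(known_counts, known_sizes, hidden_counts, total_pop):
    """Basic network scale-up estimate of a hidden population's size.

    known_counts : (n_respondents, n_known_groups) reports of how many
                   members of each known-size group each respondent knows
    known_sizes  : true sizes of those known groups
    hidden_counts: reports of how many hidden-group members each knows
    """
    known_counts = np.asarray(known_counts, dtype=float)
    # Estimate each respondent's personal network size (degree)
    degrees = total_pop * known_counts.sum(axis=1) / np.sum(known_sizes)
    # Scale up the hidden-group reports by the summed degrees
    return total_pop * np.sum(hidden_counts) / np.sum(degrees)

# Consistency check: if every respondent knows exactly 1% of each group,
# the estimator recovers the hidden size exactly
sizes = np.array([1000.0, 2000.0, 500.0])
known = np.tile(0.01 * sizes, (5, 1))       # 5 respondents
hidden = np.full(5, 0.01 * 300.0)           # hidden group of size 300
est = nsum_estimate(known, sizes, hidden, total_pop=100000)
```

The critique in the abstract concerns exactly this pooling step: a single poorly reported known group can dominate the degree estimates, which motivates the weighting and trimming refinements proposed here.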
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
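The recursive Fourier transform at the core of the method updates each frequency bin as new samples arrive, after which equation error is minimized in the frequency domain. A sketch for a first-order system dx/dt = a x + b u; the system, frequencies, and window are illustrative stand-ins for the aircraft model:

```python
import numpy as np

# Simulate dx/dt = a*x + b*u with a periodic input (Euler, small step)
a_true, b_true, dt = -2.0, 1.0, 0.001
t = np.arange(0.0, 10.0 + 2.0 * np.pi, dt)
u = np.sin(t) + np.sin(2.0 * t)
x = np.zeros_like(t)
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

# Recursive Fourier transform over one full period in steady state:
# each new sample adds x[k]*exp(-j*w*t[k])*dt to its frequency bin
omegas = np.array([1.0, 2.0])
start = np.searchsorted(t, 10.0)            # skip the transient
X = np.zeros(len(omegas), dtype=complex)
U = np.zeros(len(omegas), dtype=complex)
for k in range(start, len(t)):
    e = np.exp(-1j * omegas * t[k]) * dt
    X += x[k] * e
    U += u[k] * e

# Equation error in the frequency domain: jw*X = a*X + b*U at each w;
# stack real and imaginary parts and solve linear least squares
A = np.column_stack([X, U])
rhs = 1j * omegas * X
A_ri = np.vstack([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
(a_hat, b_hat), *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
```

Because only a handful of frequency bins are maintained and each costs one complex multiply-add per sample, the recursion is cheap enough for real-time use, which is the point made above.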
Recursive Bayesian filtering framework for lithium-ion cell state estimation
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Gambhire, Priya; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang
2016-02-01
Robust battery management system is critical for a safe and reliable electric vehicle operation. One of the most important functions of the battery management system is to accurately estimate the battery state using minimal on-board instrumentation. This paper presents a recursive Bayesian filtering framework for on-board battery state estimation by assimilating measurables like cell voltage, current and temperature with physics-based reduced order model (ROM) predictions. The paper proposes an improved Particle filtering algorithm for implementation of the framework, and compares its performance against the unscented Kalman filter. Functionality of the proposed framework is demonstrated for a commercial NCA/C cell state estimation at different operating conditions including constant current discharge at room and low temperatures, hybrid power pulse characterization (HPPC) and urban driving schedule (UDDS) protocols. In addition to accurate voltage prediction, the electrochemical nature of ROM enables drawing of physical insights into the cell behavior. Advantages of using electrode concentrations over conventional Coulomb counting for accessible capacity estimation are discussed. In addition to the mean state estimation, the framework also provides estimation of the associated confidence bounds that are used to establish predictive capability of the proposed framework.
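The recursive Bayesian assimilation step can be sketched as a bootstrap particle filter on a scalar toy model; the dynamics and noise values are illustrative, not the paper's electrochemical ROM or its improved proposal distribution:

```python
import numpy as np

rng = np.random.default_rng(4)
q, r, n_p = 0.1, 0.5, 1000                 # noise variances, particle count

# Simulate a scalar system x[k+1] = 0.9 x[k] + w,  y[k] = x[k] + v
x, xs, ys = 5.0, [], []
for _ in range(200):
    x = 0.9 * x + rng.normal(0.0, np.sqrt(q))
    xs.append(x)
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.normal(5.0, 1.0, n_p)
estimates = []
for y in ys:
    particles = 0.9 * particles + rng.normal(0.0, np.sqrt(q), n_p)
    w = np.exp(-0.5 * (y - particles) ** 2 / r)        # likelihood weights
    w /= w.sum()
    estimates.append(float(w @ particles))             # posterior mean
    particles = rng.choice(particles, size=n_p, p=w)   # resample

rmse_pf = np.sqrt(np.mean((np.array(estimates) - np.array(xs)) ** 2))
rmse_meas = np.sqrt(np.mean((np.array(ys) - np.array(xs)) ** 2))
```

The weighted particle cloud also yields the confidence bounds mentioned above: any quantile of the weighted particles is an estimate of the corresponding posterior quantile.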
NASA Technical Reports Server (NTRS)
Hocking, W. K.
1989-01-01
The objective of any radar experiment is to determine as much as possible about the entities which scatter the radiation. This review discusses many of the various parameters which can be deduced in a radar experiment, and also critically examines the procedures used to deduce them. Methods for determining the mean wind velocity, the RMS fluctuating velocities, turbulence parameters, and the shapes of the scatterers are considered. Complications with these determinations are discussed. It is seen throughout that a detailed understanding of the shape and cause of the scatterers is important in order to make better determinations of these various quantities. Finally, some other parameters, which are less easily acquired, are considered. For example, it is noted that momentum fluxes due to buoyancy waves and turbulence can be determined, and on occasions radars can be used to determine stratospheric diffusion coefficients and even temperature profiles in the atmosphere.
NASA Astrophysics Data System (ADS)
van de Vyver, H.; Roulin, E.
2009-04-01
This paper presents an application of scale-recursive estimation (SRE) used to assimilate rainfall rates within a storm, estimated from the data of two remote sensing devices. These are a ground-based weather radar and a spaceborne microwave cross-track scanner. The rain rate products corresponding to the latter were provided by the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management. In our approach, we operate directly on the data so that it is not necessary to consider a predefined multiscale model structure. We introduce a simple and computationally efficient procedure to model the variability of the rain rate process in scales. The measurement noise of the radar is estimated by comparing a large number of data sets with rain gauge data. The noise in the microwave measurements is roughly estimated by using upscaled radar data as a reference. Special emphasis is placed on the specification of the multiscale structure of precipitation under sparse or noisy data. The new methodology is compared with the latest SRE method for data fusion of multisensor precipitation estimates. Applications to the Belgian region show the relevance of the new methodology.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem (computing y(t) given known parameters) has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem (estimation of parameters given measured y(t)) is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
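Scaled sensitivity coefficients can be approximated with finite differences. A sketch for the classical log-linear microbial inactivation model log10 N(t) = log10 N0 - t/D; the parameter values are illustrative:

```python
import numpy as np

def model(t, logN0, D):
    # First-order (log-linear) microbial inactivation model
    return logN0 - t / D

def scaled_sensitivity(f, t, params, name, rel_step=1e-6):
    """Scaled sensitivity coefficient X = p * d(f)/d(p) by central
    differences; scaling puts all coefficients in the units of the
    model output so they can be compared across parameters."""
    p = params[name]
    hi = dict(params, **{name: p * (1 + rel_step)})
    lo = dict(params, **{name: p * (1 - rel_step)})
    return p * (f(t, **hi) - f(t, **lo)) / (2 * p * rel_step)

t = np.linspace(0.0, 10.0, 5)
params = {"logN0": 6.0, "D": 2.0}
X_D = scaled_sensitivity(model, t, params, "D")     # analytically t/D
X_N0 = scaled_sensitivity(model, t, params, "logN0")  # analytically logN0
```

Plotting such coefficients against time shows whether two parameters are identifiable: if their scaled sensitivities are proportional over the sampled times, the data cannot separate them.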
Quantum estimation of unknown parameters
NASA Astrophysics Data System (ADS)
Martínez-Vargas, Esteban; Pineda, Carlos; Leyvraz, François; Barberis-Blostein, Pablo
2017-01-01
We discuss the problem of finding the best measurement strategy for estimating the value of a quantum system parameter. In general the optimum quantum measurement, in the sense that it maximizes the quantum Fisher information and hence allows one to minimize the estimation error, can only be determined if the value of the parameter is already known. A modification of the quantum Van Trees inequality, which gives a lower bound on the error in the estimation of a random parameter, is proposed. The suggested inequality allows us to assert if a particular quantum measurement, together with an appropriate estimator, is optimal. An adaptive strategy to estimate the value of a parameter, based on our modified inequality, is proposed.
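The classical Fisher information that enters such bounds is straightforward to compute for a given measurement. A sketch for a computational-basis measurement of the one-parameter qubit family cos(theta/2)|0> + sin(theta/2)|1>, chosen because it is simple, not because it appears in the paper:

```python
import numpy as np

def fisher_information(theta, eps=1e-6):
    """Classical Fisher information F = sum_o (dp_o/dtheta)^2 / p_o for
    a computational-basis measurement with outcome probabilities
    p_0 = cos^2(theta/2), p_1 = sin^2(theta/2)."""
    def probs(th):
        return np.array([np.cos(th / 2) ** 2, np.sin(th / 2) ** 2])
    p = probs(theta)
    dp = (probs(theta + eps) - probs(theta - eps)) / (2 * eps)  # central diff.
    return float(np.sum(dp ** 2 / p))

# For this family the information equals 1 for every theta, so this
# particular measurement saturates the quantum bound at all parameter
# values and no adaptation is needed
F_values = [fisher_information(th) for th in (0.3, 1.0, 2.0)]
```

The Cramer-Rao bound then limits the variance of any unbiased estimator to 1/(n F) after n repetitions; the adaptive strategies discussed above matter precisely when, unlike here, F depends on the unknown parameter.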
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method, with median replacement, by estimating the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing, relative to single use of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and of the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found an SDNN value of 22 ms and an SD2 value of 36 ms to serve as clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index: Lyapunov exponents calculated after the proposed pre-processing change in a way that follows the notion of less complex behaviour in diseased states.
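The pre-processing idea, replacing (rather than removing) beats that deviate strongly from a robust central value and iterating until no outliers remain, can be sketched as follows. This is an illustrative reading, not the authors' exact combination filter; the 20% tolerance, the use of the series median as the replacement value, and the pass limit are assumptions:

```python
import statistics

def clean_rr(rr, tol=0.20, max_pass=10):
    """Recursively replace RR intervals deviating more than `tol` (20%)
    from the series median with the median itself (replacement rather
    than removal). Iterates until no outliers remain or max_pass is hit."""
    rr = list(rr)
    for _ in range(max_pass):
        med = statistics.median(rr)
        bad = [i for i, x in enumerate(rr) if abs(x - med) > tol * med]
        if not bad:
            break
        for i in bad:
            rr[i] = med
    return rr

def sdnn(rr):
    """Standard deviation of normal-to-normal intervals (same units as rr)."""
    return statistics.pstdev(rr)
```

With an ectopic 1600 ms interval in an otherwise ~800 ms series, the outlier is replaced by the median and SDNN drops to a physiologically plausible value.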
On the structural limitations of recursive digital filters for base flow estimation
NASA Astrophysics Data System (ADS)
Su, Chun-Hsu; Costelloe, Justin F.; Peterson, Tim J.; Western, Andrew W.
2016-06-01
Recursive digital filters (RDFs) are widely used for estimating base flow from streamflow hydrographs, and various forms of RDFs have been developed based on different physical models. Numerical experiments have been used to objectively evaluate their performance, but they have not been sufficiently comprehensive to assess a wide range of RDFs. This paper extends these studies to understand the limitations of a generalized RDF method as a pathway for future field calibration. Two formalisms are presented to generalize most existing RDFs, allowing systematic tuning of their complexity. The RDFs with variable complexity are evaluated collectively in a synthetic setting, using modeled daily base flow produced by Li et al. (2014) from a range of synthetic catchments simulated with HydroGeoSphere. Our evaluation reveals that there are optimal RDF complexities in reproducing base flow simulations but shows that there is an inherent physical inconsistency within the RDF construction. Even under the idealized setting where true base flow data are available to calibrate the RDFs, there is persistent disagreement between true and estimated base flow over catchments with small base flow components, low saturated hydraulic conductivity of the soil and larger surface runoff. The simplest explanation is that low base flow "signal" in the streamflow data is hard to distinguish, although more complex RDFs can improve upon the simpler Eckhardt filter at these catchments.
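As a concrete instance of the RDF family discussed above, the widely used Eckhardt filter can be written as a two-parameter recursion; the parameter values below are typical illustrative choices, not calibrated ones:

```python
def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.80):
    """Two-parameter recursive digital filter of Eckhardt for base flow.

    q: sequence of streamflow values; alpha: recession constant;
    bfi_max: maximum base flow index. Values here are illustrative."""
    b = [min(q[0], bfi_max * q[0])]
    for t in range(1, len(q)):
        bt = ((1.0 - bfi_max) * alpha * b[-1]
              + (1.0 - alpha) * bfi_max * q[t]) / (1.0 - alpha * bfi_max)
        b.append(min(bt, q[t]))  # base flow cannot exceed total streamflow
    return b
```

For constant streamflow the recursion settles at the fixed point b = bfi_max * q, which is one way to see how the filter's structural assumptions, rather than the data, set the long-run base flow fraction.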
NASA Astrophysics Data System (ADS)
Bocchiola, D.
2007-11-01
The paper shows an application of Scale Recursive Estimation (SRE) used to assimilate rainfall rates estimated during a storm event from three remote sensing devices: the TMI radiometer and the PR radar, carried on board the TRMM satellite, and the KNQA Memphis Weather Surveillance radar, belonging to the NEXRAD network, each providing rain rate estimates at a different spatial scale. The variability of the rain rate process across scales is modeled as a multiplicative random cascade, including spatial intermittence. The observational noise in the estimates is modeled as multiplicative error. System estimation, including process and observational noise, is carried out using Maximum Likelihood Estimation implemented by a scale-recursive Expectation Maximization (EM) algorithm. As a result, new rainfall rate estimates are obtained that feature decreased estimation error as compared to those coming from each device alone. The performance of the SRE-EM approach is compared with that of the latest methods proposed for data fusion of multisensor estimates. The proposed approach improves on the current methods adopted for SRE and provides an alternative for data fusion in the field of precipitation.
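The multiplicative random cascade used above to model rain-rate variability across scales can be sketched in one dimension: each cell splits into children whose values are the parent times i.i.d. unit-mean random weights. The lognormal weight generator below is an assumed choice, not the paper's fitted model:

```python
import math
import random

def multiplicative_cascade(levels, sigma=0.3, seed=0):
    """1-D discrete multiplicative random cascade: each cell splits into
    two children, multiplying the parent value by i.i.d. lognormal
    weights with unit mean (mu = -sigma^2/2 makes E[w] = 1)."""
    rng = random.Random(seed)
    field = [1.0]
    for _ in range(levels):
        nxt = []
        for v in field:
            for _ in range(2):
                w = math.exp(rng.gauss(-0.5 * sigma**2, sigma))
                nxt.append(v * w)
        field = nxt
    return field
```

After n levels the field has 2**n strictly positive cells whose variability grows with scale depth, the qualitative behaviour the cascade model is meant to capture.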
Parameter Estimator for Engineering Systems
2016-10-13
This software model generates vehicle parameter sets for downstream modeling applications using real-world data. It benefits users by reducing the time required for model development and validation. Currently configured for transportation modeling activities, the software provides initial estimates for multiple fundamental vehicle parameters, including mass, coefficient of drag, and tire rolling resistance, using real-world drive and duty cycle information as inputs.
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, W. M.
1977-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. FORTRAN subroutines have been developed to facilitate analyses of a variety of parameter estimation problems. Easy-to-use, multipurpose sets of algorithms are reported that are reasonably efficient and use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used.
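As context for what such packages compute, a linear least squares problem min ||Ax - b|| can be solved via the normal equations A^T A x = A^T b; production-quality packages like the one described typically use more numerically stable factorizations, so this generic sketch is only the textbook baseline:

```python
def lstsq_normal(A, b):
    """Solve min ||Ax - b|| by forming and solving the normal equations
    with Gaussian elimination and partial pivoting. Baseline sketch only;
    orthogonal factorizations are preferred for ill-conditioned A."""
    m, n = len(A), len(A[0])
    # form N = A^T A and c = A^T b
    N = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    c = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # forward elimination with partial pivoting
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(N[r][i]))
        N[i], N[p] = N[p], N[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, n):
            f = N[r][i] / N[i][i]
            for j in range(i, n):
                N[r][j] -= f * N[i][j]
            c[r] -= f * c[i]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - sum(N[i][j] * x[j] for j in range(i + 1, n))) / N[i][i]
    return x
```

For the overdetermined system A = [[1,1],[1,2],[1,3]], b = [6,0,0], the least squares line has intercept 8 and slope -3.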
Parameter Estimation Using VLA Data
NASA Astrophysics Data System (ADS)
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The very large array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window and in the second part a similar smoothing of the estimated parameters
User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B. P.
1982-01-01
PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.
Parameter estimation for transformer modeling
NASA Astrophysics Data System (ADS)
Cho, Sung Don
Large Power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. Steady-state excitation, and de-energization and re-energization transients
A landscape-based cluster analysis using recursive search instead of a threshold parameter.
Gladwin, Thomas E; Vink, Matthijs; Mars, Roger B
2016-01-01
Cluster-based analysis methods in neuroimaging provide control of whole-brain false positive rates without the need to conservatively correct for the number of voxels and the associated false negative results. The current method defines clusters based purely on shapes in the landscape of activation, instead of requiring the choice of a statistical threshold that may strongly affect results. Statistical significance is determined using permutation testing, combining both size and height of activation. A method is proposed for dealing with relatively small local peaks. Simulations confirm the method controls the false positive rate and correctly identifies regions of activation. The method is also illustrated using real data.
•A landscape-based method to define clusters in neuroimaging data avoids the need to pre-specify a threshold to define clusters.
•The implementation of the method works as expected, based on simulated and real data.
•The recursive method used for defining clusters, the method used for combining clusters, and the definition of the "value" of a cluster may be of interest for future variations.
NASA Technical Reports Server (NTRS)
Harman, Richard R.
2006-01-01
The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilizes the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed, known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed, and the signal spectrum based on the autocorrelation function of the image is computed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model achieves higher accuracy in SNR estimation.
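The Levinson-Durbin recursion at the core of the ACLDR model solves the Yule-Walker equations for autoregressive coefficients directly from an autocorrelation sequence in O(order^2) operations. A generic implementation (the SNR-specific steps of the paper are not reproduced):

```python
def levinson_durbin(r, order):
    """Solve the Yule-Walker equations from an autocorrelation sequence
    r[0..order] via the Levinson-Durbin recursion. Returns (a, err):
    coefficients a[k] of the predictor x[n] ~ sum_k a[k] * x[n-k], and
    the final prediction-error variance err."""
    a = [0.0] * (order + 1)
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] - sum(a[k] * r[m - k] for k in range(1, m))
        k_m = acc / err                      # reflection coefficient
        new_a = a[:]
        new_a[m] = k_m
        for k in range(1, m):
            new_a[k] = a[k] - k_m * a[m - k]
        a = new_a
        err *= (1.0 - k_m * k_m)             # error variance shrinks
    return a[1:], err
```

For an AR(1) process x[n] = 0.5 x[n-1] + e[n] with unit noise variance, the theoretical autocorrelation is r[k] = (4/3) * 0.5**k, and the recursion recovers the coefficient 0.5 and error variance 1.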
Adaptable Iterative and Recursive Kalman Filter Schemes
NASA Technical Reports Server (NTRS)
Zanetti, Renato
2014-01-01
Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The Iterated Kalman filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
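The recursive-update idea can be sketched for a scalar nonlinear measurement: one EKF update is replaced by N smaller updates, relinearizing each time, with the measurement noise inflated by N so the combined correction stays consistent. The published RUF differs in detail; this is an illustrative sketch:

```python
def recursive_update(x, P, z, R, h, H_of, N=5):
    """Split one scalar EKF measurement update into N partial updates,
    relinearizing the measurement function h at each step and inflating
    the measurement noise to N*R (sketch of the recursive-update idea)."""
    for _ in range(N):
        H = H_of(x)                      # fresh Jacobian at current estimate
        S = H * P * H + N * R            # innovation variance, inflated noise
        K = P * H / S                    # Kalman gain
        x = x + K * (z - h(x))
        P = (1.0 - K * H) * P
    return x, P
```

With the quadratic measurement h(x) = x^2 and a measurement z = 9, the relinearized partial updates pull an initial estimate of 2.5 close to the true root 3, where a single linearized update would be less accurate.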
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
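The extended-Kalman-filter step of this approach can be illustrated on a toy system: the state of dx/dt = -k*x is augmented with the unknown rate k, and both are estimated jointly from observations of x. This is a generic sketch of joint state/parameter EKF estimation (Euler-discretized, no process noise), not the authors' code:

```python
def ekf_joint(zs, dt, x0, k0, r, p0=1.0):
    """Joint state/parameter EKF sketch for dx/dt = -k*x with unknown k.
    State is augmented to [x, k]; observations z are noisy samples of x."""
    x, k = x0, k0
    P = [[p0, 0.0], [0.0, p0]]           # covariance of [x, k]
    for z in zs:
        # predict: (x, k) -> (x - k*x*dt, k), with Jacobian F
        F = [[1.0 - k * dt, -x * dt], [0.0, 1.0]]
        x = x - k * x * dt
        FP = [[F[0][0] * P[0][0] + F[0][1] * P[1][0],
               F[0][0] * P[0][1] + F[0][1] * P[1][1]],
              [P[1][0], P[1][1]]]
        P = [[FP[0][0] * F[0][0] + FP[0][1] * F[0][1], FP[0][1]],
             [FP[1][0] * F[0][0] + FP[1][1] * F[0][1], FP[1][1]]]
        # update with H = [1, 0] (only x is observed)
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x
        x += K[0] * innov
        k += K[1] * innov
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, k
```

Fed a trajectory generated with k = 2, the filter started at k = 1 moves its parameter estimate toward the true rate, which is the "first guess" role the EKF plays in the pipeline described above.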
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
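The differencing formula paired with the RQP method is not spelled out in the abstract; as a generic baseline, sensitivities of a modeling function with respect to its fixed parameters can be estimated by central differences, which schemes like the one above aim to beat in function-evaluation count:

```python
def central_diff_sensitivity(f, p, h=1e-5):
    """Estimate df/dp_i for each fixed parameter p_i by central
    differencing: (f(p + h e_i) - f(p - h e_i)) / (2h). Two function
    evaluations per parameter; illustrative baseline only."""
    grads = []
    for i in range(len(p)):
        up = list(p); up[i] += h
        dn = list(p); dn[i] -= h
        grads.append((f(up) - f(dn)) / (2.0 * h))
    return grads
```

Central differences are exact for quadratics, so for f(p) = p0^2 + 3*p1 at p = (2, 5) the estimates match the analytic sensitivities (4, 3).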
ERIC Educational Resources Information Center
Olson, Alton T.
1989-01-01
Discusses the application of the recursive method to permutations of n objects and to a problem of making c cents in change using pennies and nickels when order is important. Presents a LOGO program for the examples. (YP)
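The LOGO program itself is not reproduced here, but the recursion it illustrates is compact: when order matters, the number of ways to make c cents from pennies and nickels satisfies ways(c) = ways(c-1) + ways(c-5), since the last coin is either a penny or a nickel. A Python transcription (assumed, not the article's code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(cents):
    """Ordered ways (order important) to make `cents` from pennies (1)
    and nickels (5): ways(c) = ways(c-1) + ways(c-5)."""
    if cents < 0:
        return 0
    if cents == 0:
        return 1      # empty sequence makes zero cents
    return ways(cents - 1) + ways(cents - 5)
```

For 5 cents the two ordered ways are five pennies or one nickel; for 6 cents there are three (where the single nickel goes first, last, or there is no nickel).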
Watumull, Jeffrey; Hauser, Marc D.; Roberts, Ian G.; Hornstein, Norbert
2014-01-01
It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language—the faculty of language in the narrow sense (FLN)—is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of such output, whether infinite or finite, embedded or unembedded—existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled—and potentially unbounded—expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive.
Method for estimating solubility parameter
NASA Technical Reports Server (NTRS)
Lawson, D. D.; Ingham, J. D.
1973-01-01
Semiempirical correlations have been developed between solubility parameters and refractive indices for series of model hydrocarbon compounds and organic polymers. Measurement of intermolecular forces is useful for assessment of material compatibility, glass-transition temperature, and transport properties.
Parameter estimation by genetic algorithms
Reese, G.M.
1993-11-01
Test/Analysis correlation, or structural identification, is a process of reconciling differences in the structural dynamic models constructed analytically (using the finite element (FE) method) and experimentally (from modal test). This is a methodology for assessing the reliability of the computational model, and is very important in building models of high integrity, which may be used as predictive tools in design. Both the analytic and experimental models evaluate the same quantities: the natural frequencies (or eigenvalues, ω_i) and the mode shapes (or eigenvectors, φ_i). In this paper, selected frequencies are reconciled in the two models by modifying physical parameters in the FE model. A variety of parameters may be modified, such as the stiffness of a joint member or the thickness of a plate. Engineering judgement is required to identify important frequencies, and to characterize the uncertainty of the model design parameters.
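A minimal real-coded genetic algorithm of the kind used for such parameter reconciliation can be sketched as follows; the operator choices here (elitist truncation selection, blend crossover, Gaussian mutation) and all constants are illustrative assumptions, not the paper's implementation:

```python
import random

def ga_minimize(fitness, bounds, pop_size=30, gens=60, seed=1):
    """Minimal real-coded genetic algorithm: keep the better half of the
    population each generation (elitist truncation), breed children by
    averaging two elites (blend crossover) plus Gaussian mutation, and
    clip to the parameter bounds. Returns the best individual found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [min(max((x + y) / 2.0 + rng.gauss(0.0, 0.1), lo), hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

In test/analysis correlation the fitness would be a mismatch between measured and FE-predicted natural frequencies; here a simple quadratic bowl stands in for that objective.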
A parameter estimation subroutine package
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Nead, M. W.
1978-01-01
Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles
Nam, Kanghyun
2015-01-01
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246
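The recursive least squares step can be sketched in scalar form with a forgetting factor; the regressor and measurement below are generic placeholders, not the article's vehicle-model quantities:

```python
def rls_scalar(phis, ys, lam=0.99, theta0=0.0, p0=1000.0):
    """Scalar recursive least squares with forgetting factor `lam`,
    estimating theta in y = phi * theta + noise. phis are regressors,
    ys the corresponding measurements."""
    theta, P = theta0, p0
    for phi, y in zip(phis, ys):
        K = P * phi / (lam + phi * P * phi)   # gain
        theta = theta + K * (y - phi * theta)  # innovation correction
        P = (P - K * phi * P) / lam            # covariance with forgetting
    return theta
```

With noiseless data generated from theta = 3.5, the estimate converges to the true value after a handful of informative samples; the forgetting factor lets the estimate track a slowly drifting parameter such as cornering stiffness.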
Estimation of ground motion parameters
Boore, David M.; Joyner, W.B.; Oliver, A.A.; Page, R.A.
1978-01-01
Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. A subset of the data (from the San Fernando earthquake) is used to assess the effects of structural size and of geologic site conditions on peak motions recorded at the base of structures. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. The peak acceleration tends to be less and the peak velocity and displacement tend to be greater on the average at the base of large structures than at the base of small structures. In the distance range used in the regression analysis (15-100 km) the values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Some consideration is given to the prediction of ground motions at close distances where there are insufficient recorded data points. As might be expected from the lack of data, published relations for predicting peak horizontal acceleration give widely divergent estimates at close distances (three well known relations predict accelerations from 0.33 g to slightly over 1 g at a distance of 5 km from a magnitude 6.5 earthquake). After considering the physics of the faulting process, the few available data close to faults, and the modifying effects of surface topography, at the present time it would be difficult to accept estimates less than about 0.8 g, 110 cm/s, and 40 cm, respectively, for the mean values of peak acceleration, velocity, and displacement at rock sites
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2017-06-01
We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution for a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus their reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in a large, loosely coupled reactor core is sensed by multiple sensors measuring point fluxes at various locations inside the core. The flux values are coupled to each other through the diffusion equation, and this coupling provides redundancy in the information. It is shown that multiple independent data about the localized flux can be fused together to enhance the estimation accuracy to a great extent. We also propose a sensor anomaly handling feature in the multisensor PF to maintain the estimation process even when a sensor is faulty or generates anomalous data.
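The fusion step described above (independent point-flux sensors whose likelihoods multiply inside a sequential importance sampling update) can be sketched on a scalar toy state. This is a generic bootstrap-style PF, not the authors' multivariate reactor estimator; the random-walk process model, noise levels, and four-sensor setup are illustrative assumptions.

```python
import math
import random

def multisensor_pf_step(particles, weights, readings, sensor_sigma, process_sigma, rng):
    """One sequential-importance-sampling step: propagate each particle through
    a random-walk process model, then reweight by the joint likelihood of all
    sensors (independent sensors, so likelihoods multiply, fusing their data)."""
    new_p, new_w = [], []
    for p, w in zip(particles, weights):
        p2 = p + rng.gauss(0.0, process_sigma)            # assumed process model
        lik = 1.0
        for z in readings:                                # fuse all sensor readings
            lik *= math.exp(-0.5 * ((z - p2) / sensor_sigma) ** 2)
        new_p.append(p2)
        new_w.append(w * lik)
    total = sum(new_w) or 1.0
    return new_p, [w / total for w in new_w]              # normalized weights

rng = random.Random(1)
true_flux = 5.0
n = 2000
particles = [rng.uniform(0.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
for _ in range(20):
    # four redundant point sensors observing the same local flux
    readings = [true_flux + rng.gauss(0.0, 0.3) for _ in range(4)]
    particles, weights = multisensor_pf_step(particles, weights, readings, 0.3, 0.05, rng)
estimate = sum(p * w for p, w in zip(particles, weights))
print(round(estimate, 1))
```

A production filter would add resampling to fight weight degeneracy; the sketch omits it to keep the fusion step visible.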
Estimating random signal parameters from noisy images with nuisance parameters
Whitaker, Meredith Kathryn; Clarkson, Eric; Barrett, Harrison H.
2008-01-01
In a pure estimation task, an object of interest is known to be present, and we wish to determine numerical values for parameters that describe the object. This paper compares the theoretical framework, implementation method, and performance of two estimation procedures. We examined the performance of these estimators for tasks such as estimating signal location, signal volume, signal amplitude, or any combination of these parameters. The signal is embedded in a random background to simulate the effect of nuisance parameters. First, we explore the classical Wiener estimator, which operates linearly on the data and minimizes the ensemble mean-squared error. The results of our performance tests indicate that the Wiener estimator can estimate amplitude and shape once a signal has been located, but is fundamentally unable to locate a signal regardless of the quality of the image. Given these new results on the fundamental limitations of Wiener estimation, we extend our methods to include more complex data processing. We introduce and evaluate a scanning-linear estimator that performs impressively for location estimation. The scanning action of the estimator refers to seeking a solution that maximizes a linear metric, thereby requiring a global-extremum search. The linear metric to be optimized can be derived as a special case of maximum a posteriori (MAP) estimation when the likelihood is Gaussian and a slowly varying covariance approximation is made. PMID:18545527
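The scanning action, i.e. seeking the offset that maximizes a linear metric via a global-extremum search, can be illustrated in one dimension. The triangular template, noise level, and embedding offset below are invented for the example; the paper's estimator operates on images with random backgrounds.

```python
import random

def scanning_linear_location(data, template):
    """Scan the linear template across the data and return the offset that
    maximizes the linear metric: a global-extremum search, as in
    scanning-linear estimation."""
    best_off, best_val = 0, float("-inf")
    for off in range(len(data) - len(template) + 1):
        val = sum(t * data[off + i] for i, t in enumerate(template))
        if val > best_val:
            best_off, best_val = off, val
    return best_off

rng = random.Random(0)
template = [1.0, 2.0, 3.0, 2.0, 1.0]            # assumed signal shape
data = [rng.gauss(0.0, 0.1) for _ in range(50)]  # random background
for i, t in enumerate(template):
    data[20 + i] += t                            # embed the signal at offset 20
print(scanning_linear_location(data, template))
```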
ESTIM: A parameter estimation computer program: Final report
Hills, R.G.
1987-08-01
The computer code, ESTIM, enables subroutine versions of existing simulation codes to be used to estimate model parameters. Nonlinear least squares techniques are used to find the parameter values that result in a best fit between measurements made in the simulation domain and the simulation code's prediction of these measurements. ESTIM utilizes the non-linear least square code DQED (Hanson and Krogh (1982)) to handle the optimization aspects of the estimation problem. In addition to providing weighted least squares estimates, ESTIM provides a propagation of variance analysis. A subroutine version of COYOTE (Gartling (1982)) is provided. The use of ESTIM with COYOTE allows one to estimate the thermal property model parameters that result in the best agreement (in a least squares sense) between internal temperature measurements and COYOTE's predictions of these internal temperature measurements. We demonstrate the use of ESTIM through several example problems which utilize the subroutine version of COYOTE.
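ESTIM delegates the optimization to the DQED nonlinear least-squares code; the one-parameter Gauss-Newton loop below is only a minimal sketch of the kind of fit involved, with a hypothetical exponential cooling model standing in for a COYOTE thermal simulation.

```python
import math

def gauss_newton_1p(times, obs, model, k0, iters=20):
    """Minimal Gauss-Newton loop for a single parameter: residuals, a numeric
    Jacobian, and a normal-equation step."""
    k = k0
    for _ in range(iters):
        h = 1e-6
        r = [model(t, k) - y for t, y in zip(times, obs)]          # residuals
        j = [(model(t, k + h) - model(t, k)) / h for t in times]   # dr/dk, numeric
        jtj = sum(v * v for v in j)
        jtr = sum(v * e for v, e in zip(j, r))
        k -= jtr / jtj                                             # Gauss-Newton step
    return k

# hypothetical cooling model; internal "temperature measurements" are synthetic
model = lambda t, k: 100.0 * math.exp(-k * t)
times = [0.5 * i for i in range(10)]
obs = [model(t, 0.7) for t in times]
k_hat = gauss_newton_1p(times, obs, model, k0=0.2)
print(round(k_hat, 3))
```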
Estimation of ground motion parameters
Boore, David M.; Oliver, Adolph A.; Page, Robert A.; Joyner, William B.
1978-01-01
Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. Data from the San Fernando earthquake are examined to assess the effects of associated structures and of geologic site conditions on peak recorded motions. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity, and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. Values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Three recently published relationships for predicting peak horizontal acceleration are compared and discussed. Considerations are reviewed relevant to ground motion predictions at close distances where there are insufficient recorded data points.
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. Many theoretically motivated models demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
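The PSO idea is compact enough to sketch. This is a bare-bones swarm minimizing a toy two-parameter surface, not the authors' pipeline; the swarm size, coefficients, and stand-in "likelihood" are illustrative assumptions.

```python
import random

def pso_minimize(f, bounds, n=30, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Bare-bones Particle Swarm Optimization: each particle keeps its personal
    best, the swarm shares a global best, and velocities blend inertia with
    pulls toward both."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# toy stand-in for a negative log-likelihood surface, minimum at (0.3, 0.7)
f = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, val = pso_minimize(f, [(0.0, 1.0), (0.0, 1.0)])
print([round(x, 2) for x in best])
```

Unlike MCMC, the swarm only locates the optimum; it does not by itself yield posterior samples.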
Parameter Estimation using Numerical Merger Waveforms
NASA Technical Reports Server (NTRS)
Thorpe, J. I.; McWilliams, S.; Kelly, B.; Fahey, R.; Arnaud, K.; Baker, J.
2008-01-01
Results: A parameter estimation model was developed integrating complete waveforms and improved instrumental models. Initial results for equal-mass, non-spinning systems indicate moderate improvement in most parameters and significant improvement in some. Near-term improvements: (a) improved statistics; (b) T-channel; (c) larger parameter space coverage. Combination with other results: (a) higher harmonics; (b) spin precession; (c) instrumental effects.
Missing Data and IRT Item Parameter Estimation.
ERIC Educational Resources Information Center
DeMars, Christine
The situation of nonrandomly missing data has theoretically different implications for item parameter estimation depending on whether joint maximum likelihood or marginal maximum likelihood methods are used in the estimation. The objective of this paper is to illustrate what potentially can happen, under these estimation procedures, when there is…
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
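The computational burden the authors set out to reduce (repeatedly solving the PDE numerically under many candidate parameter values) is easy to see in a brute-force baseline. The sketch below recovers the diffusivity of a 1-D heat equation by grid search over candidates; the equation, grid, and true parameter are illustrative, and this is precisely the costly approach the cascading and Bayesian methods aim to avoid.

```python
import math

def heat_solve(kappa, n=20, steps=200, dt=1e-4):
    """Explicit finite-difference solution of u_t = kappa * u_xx on [0, 1]
    with u = 0 at both ends and a half-sine initial condition."""
    dx = 1.0 / n
    u = [math.sin(math.pi * i * dx) for i in range(n + 1)]
    for _ in range(steps):
        u = ([0.0]
             + [u[i] + kappa * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, n)]
             + [0.0])
    return u

obs = heat_solve(0.8)   # "measurements" generated with true kappa = 0.8
# brute-force estimation: re-solve the PDE for every candidate kappa
best = min((sum((a - b) ** 2 for a, b in zip(heat_solve(k), obs)), k)
           for k in [0.1 * j for j in range(1, 16)])
print(round(best[1], 1))
```

Even this toy search needs one full PDE solve per candidate; basis-expansion methods sidestep that by never requiring an exact numerical solution inside the loop.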
NASA Astrophysics Data System (ADS)
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-09-01
Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored. These include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). The second paper concludes the series by presenting a multi-stage online parameter identification technique based on a weighted recursive least quadratic squares parameter estimator to determine the parameters of the proposed battery model from the first paper during operation. A novel mutation based algorithm is developed to determine the nonlinear current dependency of the charge-transfer resistance. The influence of diffusion is determined by an on-line identification technique and verified on several batteries at different operation conditions. This method guarantees a short response time and, together with its fully recursive structure, assures a long-term stable monitoring of the battery parameters. The relative dynamic voltage prediction error of the algorithm is reduced to 2%. The changes of parameters are used to determine the states of the battery. The algorithm is real-time capable and can be implemented on embedded systems.
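The core of a weighted recursive least-squares identifier like the one described can be sketched compactly. The toy cell model below (terminal voltage = OCV - R·I, with invented values for OCV and R) is far simpler than the paper's battery model, but it shows the recursive gain and covariance update with a forgetting factor.

```python
import random

def rls_update(theta, P, x, y, lam=0.99):
    """One weighted recursive least-squares update with forgetting factor lam,
    for a 2-parameter linear model y = theta . x (P is the 2x2 covariance)."""
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    denom = lam + x[0] * Px[0] + x[1] * Px[1]
    k = [Px[0] / denom, Px[1] / denom]                 # gain vector
    err = y - (theta[0] * x[0] + theta[1] * x[1])      # prediction error
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    P = [[(P[0][0] - k[0] * Px[0]) / lam, (P[0][1] - k[0] * Px[1]) / lam],
         [(P[1][0] - k[1] * Px[0]) / lam, (P[1][1] - k[1] * Px[1]) / lam]]
    return theta, P

# toy cell: V = OCV - R * I, with hypothetical OCV = 3.7 V and R = 50 mOhm
rng = random.Random(3)
true_ocv, true_r = 3.7, 0.05
theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for _ in range(500):
    i_load = rng.uniform(0.0, 10.0)
    v = true_ocv - true_r * i_load + rng.gauss(0.0, 0.002)
    theta, P = rls_update(theta, P, [1.0, -i_load], v)
print(round(theta[0], 2), round(theta[1], 3))
```

The forgetting factor lam < 1 lets old data fade, which is what keeps the recursion able to track parameters that drift during operation.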
NASA Astrophysics Data System (ADS)
Zhang, Yu; Seo, Dong-Jun
2017-03-01
This paper presents novel formulations of mean field bias (MFB) and local bias (LB) correction schemes that incorporate a conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating the CB penalty in the cost function of exponential smoothers, we are able to derive augmented versions of recursive estimators of MFB and LB. Two extended versions of MFB algorithms are presented, one incorporating spatial variation of gauge locations only (MFB-L), and the second integrating both gauge locations and the CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas, and through a synthetic experiment over the Mid-Atlantic region. The outcome of the former experiment indicates that introducing the CB penalty to the MFB formulation leads to small but consistent improvements in bias and CB, while its impacts on hourly correlation and Root Mean Square Error (RMSE) are mixed. Incorporating the CB penalty in the LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km2), and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme. This conservativeness arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix, which leads to no, or smaller, incremental changes to the smoothed rainfall amounts.
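The baseline recursion the extended schemes build on, an exponential smoother whose ratio gives the mean field bias, can be sketched as follows. The smoothing constant and the constant gauge/radar pair are illustrative; the operational MPE algorithm and the CB-penalized variants involve considerably more structure.

```python
def mfb_update(num, den, gauge, radar, alpha=0.95):
    """One exponential-smoother step: the mean field bias estimate is the
    ratio of the smoothed gauge total to the smoothed radar total."""
    num = alpha * num + (1.0 - alpha) * gauge
    den = alpha * den + (1.0 - alpha) * radar
    return num, den, (num / den if den > 0.0 else 1.0)

num = den = 0.0
bias = 1.0
for hour in range(200):
    radar = 2.0      # hourly radar estimate, mm (illustrative)
    gauge = 2.6      # collocated gauge amount, mm, so the true bias is 1.3
    num, den, bias = mfb_update(num, den, gauge, radar)
print(round(bias, 2))   # -> 1.3
```

The radar field would then be multiplied by this bias; the CB penalty in the paper modifies the cost function behind this recursion, not its recursive character.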
An online recursive autocalibration of triaxial accelerometer.
Lin Ye; Su, Steven W; Dong Lei; Nguyen, Hung T
2016-08-01
In this paper, we propose a novel method for autocalibration of a triaxial Micro-Electro-Mechanical Systems (MEMS) accelerometer that does not require any sophisticated laboratory facilities. In particular, it is an online calibration method which can be conveniently implemented while significantly improving the accuracy of the MEMS accelerometer. The procedure exploits the fact that the output vector of the accelerometer must match the local gravity in the static condition. To achieve online calibration, the model as well as the cost function are linearized at the beginning, and an online recursive method is then utilized to identify the unknown parameters and remove the bias caused by linearization. This online recursive method is based on damped recursive least squares (DRLS) estimation, which significantly reduces the computational complexity compared to nonlinear optimization methods. In addition, the unknown parameters can be solved for in a short time and the estimates remain stable during calibration. Experimentally, the method was tested by comparing the output before and after calibration in different conditions. The output, after calibration by the proposed method, is more accurate than the raw output obtained with default factory parameters.
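The gravity-match constraint at the heart of the procedure can be illustrated with the classic two-orientation calibration of a single axis (axis up, then axis down). This offline closed-form version is not the paper's damped-RLS online method; the sensor's scale and bias values are invented.

```python
G = 9.81   # assumed local gravity, m/s^2

def two_orientation_axis_cal(m_up, m_down):
    """Classic two-orientation calibration of one accelerometer axis: static
    readings with the axis pointing up and then down bracket +/- g, which
    yields the scale factor and bias in closed form."""
    bias = (m_up + m_down) / 2.0
    scale = 2.0 * G / (m_up - m_down)
    return scale, bias

# synthetic static readings from a sensor with true scale 0.98 and bias 0.15
true_scale, true_bias = 0.98, 0.15
m_up = G / true_scale + true_bias     # axis pointing up: senses +g
m_down = -G / true_scale + true_bias  # axis pointing down: senses -g
scale, bias = two_orientation_axis_cal(m_up, m_down)
print(round(scale, 2), round(bias, 2))   # -> 0.98 0.15
```

The online DRLS method refines the same |output| = g constraint recursively across arbitrary static orientations, without needing controlled positions like these.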
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
NASA Astrophysics Data System (ADS)
Conan, Rodolphe
2014-07-01
The estimation of a corrugated wavefront after propagation through the atmosphere is usually solved optimally with a Minimum-Mean-Square-Error algorithm. The derivation of the optimal wavefront can be a very computing-intensive task, especially for large Adaptive Optics (AO) systems that operate in real time. For the largest AO systems, efficient optimal wavefront reconstructors have been proposed either using sparse matrix techniques or relying on the fractal properties of the atmospheric wavefront. We propose a new method that exploits the Toeplitz structure in the covariance matrix of the wavefront gradient. The algorithm is particularly well suited to Shack-Hartmann wavefront sensor based AO systems. Thanks to the Toeplitz structure of the covariance, the matrices are compressed up to a thousand-fold and the matrix-to-vector product is reduced to a simple one-dimensional convolution product. The optimal wavefront is estimated iteratively with the MINRES algorithm, which exhibits better convergence properties for ill-conditioned matrices than the commonly used Conjugate Gradient algorithm. The paper describes, in the first part, the Toeplitz structure of the covariance matrices and shows how to compute the matrix-to-vector product using only the compressed version of the matrices. In the second part, we introduce the MINRES iterative solver and show how it performs compared to the Conjugate Gradient algorithm for different AO systems.
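The compression idea can be shown directly: a Toeplitz matrix is fully determined by its first column and first row, so a matrix-vector product needs only those two vectors. The direct O(n²) loop below sketches the storage saving only; the paper additionally casts the product as a one-dimensional convolution for speed. The sample matrix is invented.

```python
def toeplitz_matvec(col, row, x):
    """Matrix-vector product using only the compressed Toeplitz representation:
    the first column `col` and first row `row` (with row[0] == col[0]) define
    T[i][j] = col[i-j] for i >= j, and row[j-i] otherwise."""
    n = len(x)
    y = []
    for i in range(n):
        s = 0.0
        for j in range(n):
            s += (col[i - j] if i >= j else row[j - i]) * x[j]
        y.append(s)
    return y

col = [2.0, 1.0, 0.5, 0.25]      # first column of T
row = [2.0, -1.0, -0.5, -0.25]   # first row of T
y = toeplitz_matvec(col, row, [1.0, 0.0, 0.0, 1.0])
print(y)   # -> [1.75, 0.5, -0.5, 2.25]
```

Storage drops from n² entries to 2n - 1, which is where the thousand-fold compression quoted in the abstract comes from for large n.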
Quantifying uncertainty in state and parameter estimation.
Parlitz, Ulrich; Schumann-Bischoff, Jan; Luther, Stefan
2014-05-01
Observability of state variables and parameters of a dynamical system from an observed time series is analyzed and quantified by means of the Jacobian matrix of the delay coordinates map. For each state variable and each parameter to be estimated, a measure of uncertainty is introduced depending on the current state and parameter values, which allows us to identify regions in state and parameter space where the specific unknown quantity can(not) be estimated from a given time series. The method is demonstrated using the Ikeda map and the Hindmarsh-Rose model.
Maximum Likelihood Estimation of Population Parameters
Fu, Y. X.; Li, W. H.
1993-01-01
One of the most important parameters in population genetics is θ = 4N(e)μ where N(e) is the effective population size and μ is the rate of mutation per gene per generation. We study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement of accuracy in estimating the parameter θ over existing methods and the other is the estimation of parameter λ which is the ratio of two θ's. The minimum variances of estimates of the parameter θ are derived under two idealized situations. These minimum variances serve as the lower bounds of the variances of all possible estimates of θ in practice. We then show that Watterson's estimate of θ based on the number of segregating sites is asymptotically an optimal estimate of θ. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when θ is large. The maximum likelihood estimate of λ = θ(1)/θ(2) is obtained and the properties of the estimate are discussed. PMID:8375660
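Watterson's estimate, which the paper shows to be asymptotically optimal, is simple enough to state in code: θ_W = S / a_n, where S is the number of segregating sites and a_n is a harmonic sum over the sample size n. The sample values below are illustrative.

```python
def watterson_theta(num_segregating_sites, sample_size):
    """Watterson's estimate: theta_W = S / a_n, where a_n is the harmonic
    number sum_{i=1}^{n-1} 1/i for a sample of n sequences."""
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating_sites / a_n

# illustrative numbers: 25 segregating sites observed in 10 sequences
print(round(watterson_theta(25, 10), 2))   # -> 8.84
```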
Parameter estimation for distributed parameter models of complex, flexible structures
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr.
1991-01-01
Distributed parameter modeling of structural dynamics has been limited to simple spacecraft configurations because of the difficulty of handling several distributed parameter systems linked at their boundaries. Although other computer software exists that can generate such models of complex, flexible spacecraft, none of it is suitable for parameter estimation. Because of this limitation, the computer software PDEMOD is being developed for the express purposes of modeling, control system analysis, parameter estimation, and structure optimization. PDEMOD is capable of modeling complex, flexible spacecraft which consist of a three-dimensional network of flexible beams and rigid bodies. Each beam has bending (Bernoulli-Euler or Timoshenko) in two directions, torsion, and elongation degrees of freedom. The rigid bodies can be attached to the beam ends at any angle or body location. PDEMOD is also capable of performing parameter estimation based on matching experimental modal frequencies and static deflection test data. The underlying formulation and the results of using this approach for test data of the Mini-MAST truss will be discussed. The resulting accuracy of the parameter estimates when using such limited data can impact significantly the instrumentation requirements for on-orbit tests.
MODFLOW-Style parameters in underdetermined parameter estimation.
D'Oria, Marco; Fienen, Michael N
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.
Reionization history and CMB parameter estimation
Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y. E-mail: gnedin@fnal.edu
2013-05-01
We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.
Parameter estimation methods for chaotic intercellular networks.
Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey
2013-01-01
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
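Of the schemes compared, the ABC idea is the easiest to sketch: draw parameters from the prior, simulate, and keep draws whose simulated output is close to the observation. The plain rejection sampler below is a simpler cousin of the paper's ABC-MCMC and ABC-SMC schemes (where the distance would be a synchronization error); the toy model, prior, and tolerance are invented.

```python
import random

def abc_rejection(simulate, observed, prior_sample, distance, eps, n_accept, rng):
    """Plain ABC rejection sampling: draw a parameter from the prior, simulate,
    and accept the draw when the simulated output lies within eps of the
    observation; the accepted draws approximate the posterior."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# toy system: the "data" is a noisy sample mean controlled by one parameter
def simulate(theta, rng):
    return sum(theta + rng.gauss(0.0, 0.5) for _ in range(20)) / 20.0

rng = random.Random(7)
observed = simulate(2.0, rng)     # observation generated with true parameter 2.0
draws = abc_rejection(simulate, observed,
                      prior_sample=lambda r: r.uniform(0.0, 5.0),
                      distance=lambda a, b: abs(a - b),
                      eps=0.1, n_accept=200, rng=rng)
print(round(sum(draws) / len(draws), 1))
```

Averaging the accepted draws mirrors the paper's estimate-by-averaging step; MCMC and SMC variants exist because plain rejection wastes most simulations when the prior is broad.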
GEODYN- ORBITAL AND GEODETIC PARAMETER ESTIMATION
NASA Technical Reports Server (NTRS)
Putney, B.
1994-01-01
The Orbital and Geodetic Parameter Estimation program, GEODYN, possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fits the entire set of estimation parameters. The estimation problem can be divided into two parts: the orbit prediction problem, and the parameter estimation problem. GEODYN solves these two problems by employing Cowell's method for integrating the orbit and a Bayesian least squares statistical estimation procedure for parameter estimation. GEODYN has found a wide range of applications including determination of definitive orbits, tracking instrumentation calibration, satellite operational predictions, and geodetic parameter estimation, such as the estimations for global networks of tracking stations. The orbit prediction problem may be briefly described as calculating for some later epoch the new conditions of state for the satellite, given a set of initial conditions of state for some epoch, and the disturbing forces affecting the motion of the satellite. The user is required to supply only the initial conditions of state and GEODYN will provide the forcing function and integrate the equations of motion of the satellite. Additionally, GEODYN performs time and coordinate transformations to insure the continuity of operations. Cowell's method of numerical integration is used to solve the satellite equations of motion and the variational partials for force model parameters which are to be adjusted. This method uses predictor-corrector formulas for the equations of motion and corrector formulas only for the variational partials. The parameter estimation problem is divided into three separate parts: 1) instrument measurement modeling and partial derivative computation, 2) data error correction, and 3) statistical estimation of the parameters. Since all of the measurements modeled by
Component reliability parameters estimation considering weather factors
NASA Astrophysics Data System (ADS)
Wang, Yuan
2017-08-01
To study the influence of complex weather on power system reliability parameters, this paper presents a component reliability parameter estimation method based on the one-directional S-rough law and the F-decomposition law. The S-rough law relates the failure rate to uncertain weather conditions, and a reliability parameter estimation model is established on this basis. The F-decomposition law analysis is then added to the power system reliability evaluation to form interference metrics, and the influence of component reliability parameters under different weather combinations is analyzed. In a numerical example, the results show that the proposed model can predict the transmission line failure rate and reveal how component reliability parameters are influenced. Because it is based on objective data (climate records over many years), the proposed method eliminates the influence of subjective factors; the evaluation results are more objective and provide important information for power system reliability evaluation.
Estimating nuisance parameters in inverse problems
NASA Astrophysics Data System (ADS)
Aravkin, Aleksandr Y.; van Leeuwen, Tristan
2012-11-01
Many inverse problems include nuisance parameters which, while not of direct interest, are required to recover primary parameters. The structure of these problems allows efficient optimization strategies; a well-known example is variable projection, where nonlinear least-squares problems which are linear in some parameters can be very efficiently optimized. In this paper, we extend the idea of projecting out a subset of the variables to a broad class of maximum likelihood and maximum a posteriori (MAP) problems with nuisance parameters, such as variance or degrees of freedom (d.o.f.). As a result, we are able to incorporate nuisance parameter estimation into large-scale constrained and unconstrained inverse problem formulations. We apply the approach to a variety of problems, including estimation of unknown variance parameters in the Gaussian model, d.o.f. parameter estimation in the context of robust inverse problems, and automatic calibration. Using numerical examples, we demonstrate improvement in recovery of primary parameters for several large-scale inverse problems. The proposed approach is compatible with a wide variety of algorithms and formulations, and its implementation requires only minor modifications to existing algorithms.
Maximum likelihood estimation of population parameters
Fu, Y.X.; Li, W.H.
1993-08-01
One of the most important parameters in population genetics is θ = 4N_eμ, where N_e is the effective population size and μ is the rate of mutation per gene per generation. The authors study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement in accuracy of estimating the parameter θ over existing methods, and the other is the estimation of the parameter λ, which is the ratio of two θ's. The minimum variances serve as lower bounds on the variances of all possible estimates of θ in practice. The authors then show that Watterson's estimate of θ, based on the number of segregating sites, is asymptotically an optimal estimate of θ. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when θ is large. The maximum likelihood estimate of λ = θ_1/θ_2 is obtained and the properties of the estimate are discussed.
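Watterson's estimate mentioned above has a simple closed form, θ_W = S / a_n, with S the number of segregating sites and a_n the (n-1)-th harmonic number. A minimal sketch (this is the standard Watterson formula, not the authors' maximum likelihood method):

```python
def watterson_theta(num_segregating_sites, num_sequences):
    """Watterson's estimator of theta = 4*Ne*mu from the number of
    segregating sites S in a sample of n sequences:
    theta_W = S / a_n, where a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, num_sequences))
    return num_segregating_sites / a_n

# Example: 10 segregating sites in 5 sequences; a_4 = 1 + 1/2 + 1/3 + 1/4 = 25/12.
theta_w = watterson_theta(10, 5)
```

For n = 5 the harmonic sum is 25/12, so θ_W = 10 / (25/12) = 4.8.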
Estimating Respiratory Mechanical Parameters during Mechanical Ventilation
Barbini, Paolo
1982-01-01
We propose an algorithm for the estimation of the parameters of the mechanical respiratory system. The algorithm is based on nonlinear regression analysis with a two-compartment respiratory system model. The model used allows us to take account of the nonhomogeneous properties of the lungs, which may cause uneven distribution of ventilation and thus affect gas exchange in the lungs. The estimation of the parameters of such a model permits optimization of the type of ventilation to be used in patients undergoing respiratory treatment. This can be done bearing in mind the effects of mechanical ventilation on venous return as well as the quality of gas exchange. We have evaluated the performance of the proposed estimation algorithm on the basis of the agreement between the data and the model response, the stability of the parameter estimates, and the standard deviations of the parameters. The parameter estimation algorithm described does not require examination of impedance spectra and is completely independent of the type of ventilator employed.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes that result from the integrated approach to interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
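Under the Poisson occurrence model discussed above, the point estimates of the two hazard functions take a simple form. A sketch assuming an unbounded Gutenberg-Richter magnitude distribution (the paper also treats nonparametric magnitude models, and its interval-estimation machinery is not reproduced here; parameter values are illustrative):

```python
import math

def exceedance_probability(rate, b_value, m_min, m, t):
    """Probability of at least one event with magnitude >= m within time t,
    for Poisson occurrence with activity rate `rate` (events per year above
    m_min) and an unbounded Gutenberg-Richter magnitude distribution."""
    beta = b_value * math.log(10.0)
    # G-R complementary CDF: P(M >= m) = exp(-beta * (m - m_min))
    rate_m = rate * math.exp(-beta * (m - m_min))
    return 1.0 - math.exp(-rate_m * t)

def mean_return_period(rate, b_value, m_min, m):
    """Mean return period of events with magnitude >= m (years)."""
    beta = b_value * math.log(10.0)
    return 1.0 / (rate * math.exp(-beta * (m - m_min)))

# Illustration: 10 events/yr above magnitude 2.0, b = 1.0, target magnitude 4.0.
p_one_year = exceedance_probability(10.0, 1.0, 2.0, 4.0, 1.0)
t_return = mean_return_period(10.0, 1.0, 2.0, 4.0)
```

With b = 1.0, raising the magnitude by 2 units reduces the rate by a factor of 100, giving a 10-year mean return period and a one-year exceedance probability of 1 - exp(-0.1).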
Frequency tracking and parameter estimation for robust quantum state estimation
Ralph, Jason F.; Jacobs, Kurt; Hill, Charles D.
2011-11-15
In this paper we consider the problem of tracking the state of a quantum system via a continuous weak measurement. If the system Hamiltonian is known precisely, this merely requires integrating the appropriate stochastic master equation. However, even a small error in the assumed Hamiltonian can render this approach useless. The natural answer to this problem is to include the parameters of the Hamiltonian as part of the estimation problem, and the full Bayesian solution to this task provides a state estimate that is robust against uncertainties. However, this approach requires considerable computational overhead. Here we consider a single qubit in which the Hamiltonian contains a single unknown parameter. We show that classical frequency estimation techniques greatly reduce the computational overhead associated with Bayesian estimation and provide accurate estimates for the qubit frequency.
LISA Parameter Estimation using Numerical Merger Waveforms
NASA Technical Reports Server (NTRS)
Thorpe, J. I.; McWilliams, S.; Baker, J.
2008-01-01
Coalescing supermassive black holes are expected to provide the strongest sources for gravitational radiation detected by LISA. Recent advances in numerical relativity provide a detailed description of the waveforms of such signals. We present a preliminary study of LISA's sensitivity to waveform parameters using a hybrid numerical/analytic waveform describing the coalescence of two equal-mass, nonspinning black holes. The Synthetic LISA software package is used to simulate the instrument response and the Fisher information matrix method is used to estimate errors in the waveform parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with a total redshifted mass of 10^6 solar masses at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two when the merger was included.
On parameter estimation in population models.
Ross, J V; Taimre, T; Pollett, P K
2006-12-01
We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population.
Comparison of Dam Breach Parameter Estimators
2008-01-01
D. Michael Gee, Senior Hydraulic Engineer, Corps of Engineers Hydrologic Engineering Center, 609 2nd St., Davis, CA 95616; email: michael.gee@usace.army.mil. ABSTRACT: Analytical techniques for the estimation of dam breach ... from a large storm in 1975 (CEATI). The dam was constructed of a clay core containing shale. The upstream and downstream fill was homogeneous earth ...
ZASPE: Zonal Atmospheric Stellar Parameters Estimator
NASA Astrophysics Data System (ADS)
Brahm, Rafael; Jordan, Andres; Hartman, Joel; Bakos, Gaspar
2016-07-01
ZASPE (Zonal Atmospheric Stellar Parameters Estimator) computes the atmospheric stellar parameters (Teff, log(g), [Fe/H] and vsin(i)) from echelle spectra via least squares minimization against a pre-computed library of synthetic spectra. The minimization is performed only in the spectral zones most sensitive to changes in the atmospheric parameters. The uncertainties and covariances computed by ZASPE assume that the principal source of error is the systematic mismatch between the observed spectrum and the synthetic one that produces the best fit. ZASPE requires a grid of synthetic spectra and can use any pre-computed library with minor modifications.
Attitude Estimation Using Modified Rodrigues Parameters
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Markley, F. Landis
1996-01-01
In this paper, a Kalman filter formulation for attitude estimation is derived using the Modified Rodrigues Parameters. The extended Kalman filter uses a gyro-based model for attitude propagation. Two solutions are developed for the sensitivity matrix in the Kalman filter. One is based upon an additive error approach, and the other is based upon a multiplicative error approach. It is shown that the two solutions are in fact equivalent. The Kalman filter is then used to estimate the attitude of a simulated spacecraft. Results indicate that the new algorithm produces accurate attitude estimates by determining actual gyro biases.
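The Modified Rodrigues Parameters used above are a three-component attitude representation derived from the quaternion. A minimal conversion sketch (the filter itself, with its gyro model and sensitivity matrices, is not reproduced; the quaternion convention below, scalar part last, is an assumption):

```python
import numpy as np

def quat_to_mrp(q):
    """Modified Rodrigues Parameters from a unit quaternion
    [q1, q2, q3, q4] with the scalar part last: p = q_vec / (1 + q4)."""
    q = np.asarray(q, dtype=float)
    return q[:3] / (1.0 + q[3])

def mrp_to_quat(p):
    """Inverse map, valid away from the 360-degree singularity."""
    p = np.asarray(p, dtype=float)
    n2 = p @ p
    return np.concatenate([2.0 * p / (1.0 + n2), [(1.0 - n2) / (1.0 + n2)]])

# A 90-degree rotation about z maps to p = [0, 0, tan(22.5 deg)].
q90z = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
p90z = quat_to_mrp(q90z)
```

The MRP vector has magnitude tan(Φ/4) along the rotation axis, so it stays small for small attitude errors, which is what makes it convenient inside an extended Kalman filter.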
Parameter Estimation Methods for Chaotic Intercellular Networks
Mariño, Inés P.; Ullner, Ekkehard; Zaikin, Alexey
2013-01-01
We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as the accelerated random search method, and the other two techniques are based on approximate Bayesian computation (ABC). The latter is a general methodology for nonparametric inference that can be applied to practically any system of interest. The first ABC-based method is a Markov chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have also analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes. PMID:24282513
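The core ABC idea underlying both schemes above is easiest to see in its rejection-sampling form: draw parameters from the prior, simulate, and keep draws whose simulated summary lies close to the observed one. A toy sketch inferring a Gaussian mean (the paper's MCMC and SMC schemes with a synchronization-error distance are more elaborate; all names and the toy model here are illustrative):

```python
import random

def abc_rejection(observed_summary, prior_sampler, simulate, distance, eps, n_samples):
    """Minimal ABC rejection sampler: keep parameter draws whose simulated
    summary statistic lies within eps of the observed summary."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sampler()
        if distance(simulate(theta), observed_summary) <= eps:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a Gaussian from its sample mean.
random.seed(0)
true_mean = 2.0
obs = sum(random.gauss(true_mean, 1.0) for _ in range(200)) / 200
post = abc_rejection(
    observed_summary=obs,
    prior_sampler=lambda: random.uniform(-5, 5),
    simulate=lambda m: sum(random.gauss(m, 1.0) for _ in range(200)) / 200,
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    n_samples=100,
)
estimate = sum(post) / len(post)  # posterior mean, close to true_mean
```

Averaging the accepted draws, as the paper does over its realizations and final population, gives the point estimate.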
Effects of model deficiencies on parameter estimation
NASA Technical Reports Server (NTRS)
Hasselman, T. K.
1988-01-01
Reliable structural dynamic models will be required as a basis for deriving the reduced-order plant models used in control systems for large space structures. Ground vibration testing and model verification will play an important role in the development of these models; however, fundamental differences between the space environment and earth environment, as well as variations in structural properties due to as-built conditions, will make on-orbit identification essential. The efficiency, and perhaps even the success, of on-orbit identification will depend on having a valid model of the structure. It is envisioned that the identification process will primarily involve parametric methods. Given a correct model, a variety of estimation algorithms may be used to estimate parameter values. This paper explores the effects of modeling errors and model deficiencies on parameter estimation by reviewing previous case histories. The effects depend at least to some extent on the estimation algorithm being used. Bayesian estimation was used in the case histories presented here. It is therefore conceivable that the behavior of an estimation algorithm might be useful in detecting and possibly even diagnosing deficiencies. In practice, the task is complicated by the presence of systematic errors in experimental procedures and data processing and in the use of the estimation procedures themselves.
New approaches to estimation of magnetotelluric parameters
Egbert, G.D.
1991-01-01
Fully efficient robust data processing procedures were developed and tested for single station and remote reference magnetotelluric (MT) data. Substantial progress was made on development, testing and comparison of optimal procedures for single station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combined the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1-0.0 Hz). The methods, and the results of comparisons of various single station estimators, are described in detail. Progress was made on developing methods for estimating static distortion parameters, and for testing hypotheses about the underlying dimensionality of the geological section.
Mariño, Inés P; Míguez, Joaquín
2005-11-01
We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods.
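A simpler cousin of the scheme above conveys the flavor of estimating a chaotic-system parameter from a time series. The sketch below recovers the Lorenz parameter σ by least squares from a fully observed trajectory; it illustrates cost-function-based parameter recovery, not the authors' scalar-observation synchronization recursion (the integration scheme and parameter values are illustrative):

```python
import numpy as np

def lorenz_step(state, sigma, rho, beta, dt):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

# Generate a trajectory with a "true" sigma to be recovered.
true_sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 1e-3
traj = [np.array([1.0, 1.0, 1.0])]
for _ in range(20000):
    traj.append(lorenz_step(traj[-1], true_sigma, rho, beta, dt))
traj = np.array(traj)

# Least-squares fit of sigma from finite-difference derivatives of x:
# dx/dt = sigma * (y - x)  =>  sigma = <dxdt, y - x> / <y - x, y - x>.
dxdt = (traj[1:, 0] - traj[:-1, 0]) / dt
resid = traj[:-1, 1] - traj[:-1, 0]
sigma_hat = np.dot(dxdt, resid) / np.dot(resid, resid)
```

The paper's method replaces this full-state regression with a recursive minimization driven only by the observed scalar, so that synchronization of the secondary system and convergence of the parameter happen together.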
Reliability of parameter estimation in respirometric models.
Checchi, Nicola; Marsili-Libelli, Stefano
2005-09-01
When modelling a biochemical system, the fact that model parameters cannot be estimated exactly stimulates the definition of tests for checking unreliable estimates and for designing better experiments. The method applied in this paper is a further development from Marsili-Libelli et al. [2003. Confidence regions of estimated parameters for ecological systems. Ecol. Model. 165, 127-146.] and is based on the confidence regions computed with the Fisher or the Hessian matrix. It detects the influence of the curvature, representing the distortion of the model response due to its nonlinear structure. If the test is passed then the estimation can be considered reliable, in the sense that the optimisation search has reached a point on the error surface where the effect of nonlinearities is negligible. The test is used here for an assessment of respirometric model calibration, i.e. checking the experimental design and estimation reliability, with an application to real-life data in the ASM context. Only dissolved oxygen measurements have been considered, because this is a very popular experimental set-up in wastewater modelling. The estimation of a two-step nitrification model using batch respirometric data is considered, showing that the initial amount of ammonium-N and the number of data play a crucial role in obtaining reliable estimates. From this basic application other results are derived, such as the estimation of the combined yield factor and of the second step parameters, based on a modified kinetics and a specific nitrite experiment. Finally, guidelines for designing reliable experiments are provided.
Estimating physiological skin parameters from hyperspectral signatures
NASA Astrophysics Data System (ADS)
Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe
2013-05-01
We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Aquifer parameter estimation from surface resistivity data.
Niwas, Sri; de Lima, Olivar A L
2003-01-01
This paper is devoted to the additional use, other than ground water exploration, of surface geoelectrical sounding data for aquifer hydraulic parameter estimation. In a mesoscopic framework, approximated analytical equations are developed separately for saline and for fresh water saturations. A few existing useful aquifer models, both for clean and shaley sandstones, are discussed in terms of their electrical and hydraulic effects, along with the linkage between the two. These equations are derived for insight and physical understanding of the phenomenon. On a macroscopic scale, a general aquifer model is proposed and analytical relations are derived for meaningful estimation, with a higher level of confidence, of hydraulic parameters from electrical parameters. The physical reasons for two different equations at the macroscopic level are explicitly explained to avoid confusion. Numerical examples from the existing literature are reproduced to buttress our viewpoint.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients, can be easily incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer significantly improves.
Computational approaches for RNA energy parameter estimation
Andronescu, Mirela; Condon, Anne; Hoos, Holger H.; Mathews, David H.; Murphy, Kevin P.
2010-01-01
Methods for efficient and accurate prediction of RNA structure are increasingly valuable, given the current rapid advances in understanding the diverse functions of RNA molecules in the cell. To enhance the accuracy of secondary structure predictions, we developed and refined optimization techniques for the estimation of energy parameters. We build on two previous approaches to RNA free-energy parameter estimation: (1) the Constraint Generation (CG) method, which iteratively generates constraints that enforce known structures to have energies lower than other structures for the same molecule; and (2) the Boltzmann Likelihood (BL) method, which infers a set of RNA free-energy parameters that maximize the conditional likelihood of a set of reference RNA structures. Here, we extend these approaches in two main ways: We propose (1) a max-margin extension of CG, and (2) a novel linear Gaussian Bayesian network that models feature relationships, which effectively makes use of sparse data by sharing statistical strength between parameters. We obtain significant improvements in the accuracy of RNA minimum free-energy pseudoknot-free secondary structure prediction when measured on a comprehensive set of 2518 RNA molecules with reference structures. Our parameters can be used in conjunction with software that predicts RNA secondary structures, RNA hybridization, or ensembles of structures. Our data, software, results, and parameter sets in various formats are freely available at http://www.cs.ubc.ca/labs/beta/Projects/RNA-Params. PMID:20940338
Online in-situ estimation of network parameters under intermittent excitation conditions
NASA Astrophysics Data System (ADS)
Taylor, Jason Ashley
2008-10-01
Online in-situ estimation of network parameters is a potential tool to evaluate electrical network and conductor health. The integration of physics-based models with stochastic models can provide important diagnostic and prognostic information. Correct diagnoses and prognoses using model-based techniques therefore depend on accurate estimation of the physical parameters. As artificial excitation of the modeled dynamics is not always possible for in-situ applications, the information necessary to make accurate estimations can be intermittent over time. Continuous online estimation and tracking of physics-based parameters using recursive least-squares with directional forgetting is proposed to account for the intermittency in the excitation. This method makes optimal use of the available information while still allowing the solution to follow time-varying parameter changes. Computationally efficient statistical inference measures are also provided to gauge the confidence of each parameter estimate. Additionally, identification requirements of the methods and multiple network and conductor models are determined. Finally, the method is shown to be effective in estimating and tracking parameter changes in both DC and AC networks as well as in both time and frequency domain models.
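The classical recursive least-squares recursion with an exponential forgetting factor conveys the flavor of the approach (the directional-forgetting variant described above modifies only the covariance update to forget information just in the excited directions; the two-parameter network model and values below are illustrative):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with exponential forgetting
    factor lam. theta is the (n x 1) parameter estimate, P the (n x n)
    covariance, phi the regressor, y the scalar measurement."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + (phi.T @ P @ phi).item())  # gain vector
    e = y - (phi.T @ theta).item()                  # prediction error
    theta = theta + k * e
    P = (P - k @ phi.T @ P) / lam                   # forget old information
    return theta, P

# Illustrative identification of a two-parameter linear network model
# y = phi . [2.0, 0.5] + noise, from 500 noisy samples.
rng = np.random.default_rng(1)
true_param = np.array([[2.0], [0.5]])
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.normal(size=2)
    y = (phi @ true_param).item() + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
```

With persistent excitation the estimate converges quickly; directional forgetting addresses the intermittent-excitation case, where dividing the whole of P by lam would let the covariance blow up in unexcited directions.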
State-parameter estimation of real-time river water quality
Hardy, T.B.
1988-01-01
Absolute and relative algorithm performance was evaluated for three joint state-parameter estimation algorithms. The algorithms, designated MISP, AMISP, and REVS, represent state and parameter uncertainty differently within their structures. All three recursive algorithms employ two parallel Kalman filters for the joint estimation of system states and model parameters. They are applied to a model of the DO-BOD interactions in the River Cam, England, to synthetic water quality data sets generated by a Monte Carlo measurement error model, and to data from a Monte Carlo river water quality simulation model. Sensitivity runs on changes in error statistics are also considered. A constrained Monte Carlo simulation scheme was developed to reduce the ensemble of Monte Carlo generated data sets to a smaller, a priori defined, acceptable model response space for use in algorithm comparisons. Absolute and relative performance for state estimation was evaluated in terms of stability, deterministic and one-step-ahead model response errors, and mean square errors. Parameter estimation was evaluated in terms of parameter convergence rates and estimation errors. An event detection system based on exploitation of algorithm sensitivities is proposed. Event detection is accomplished by monitoring instability in state and parameter estimates for the three algorithms running in a parallel processing configuration.
Target parameter and error estimation using magnetometry
NASA Astrophysics Data System (ADS)
Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.
The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramer-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramer-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramer-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters: the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.
Cosmological parameter estimation: impact of CMB aberration
Catena, Riccardo; Notari, Alessio
2013-04-01
The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm's via a mixing matrix at any l. The common lore when performing a CMB-based cosmological parameter estimation is to consider that Doppler affects only the l = 1 multipole, and to neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed with an MCMC in a Bayesian framework the maps with and without aberration and Doppler effects in order to assess the ability of reconstructing the parameters of the fiducial model. We find that, depending on the specific realization of the simulated data, the parameters can be biased up to one standard deviation for WMAP and almost two standard deviations for Planck. Therefore we conclude that in general it is not a solid assumption to neglect aberration in a CMB-based cosmological parameter estimation.
Estimating RASATI scores using acoustical parameters
NASA Astrophysics Data System (ADS)
Agüero, P. D.; Tulli, J. C.; Moscardi, G.; Gonzalez, E. L.; Uriz, A. J.
2011-12-01
Acoustical analysis of speech using computers has developed considerably in recent years. The subjective evaluation of a clinician is complemented with an objective measure of relevant parameters of voice. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimate the subjective characteristics of the RASATI scale given objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that such an approach gives correct evaluations with ±1 error in 80% of the cases.
Multiple emitter location and signal parameter estimation
NASA Astrophysics Data System (ADS)
Schmidt, R. O.
1986-03-01
Multiple signal classification (MUSIC) techniques involved in determining the parameters of multiple wavefronts arriving at an antenna array are discussed. A MUSIC algorithm is described, which provides asymptotically unbiased estimates of (1) the number of signals, (2) directions of arrival (or emitter locations), (3) strengths and cross correlations among the incident waveforms, and (4) the strength of noise/interference. The example of the use of the algorithm as a multiple frequency estimator operating on time series is examined. Comparisons of this method with methods based on maximum likelihood and maximum entropy, as well as conventional beamforming, are presented.
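The core of the MUSIC idea described above can be sketched in a few lines: estimate the covariance of the array snapshots, split signal and noise subspaces by eigendecomposition, and scan a pseudospectrum whose peaks mark the directions of arrival. The array geometry, source angles, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an 8-element half-wavelength uniform linear array
# observing two uncorrelated narrowband sources at -20 and +30 degrees.
M, snapshots = 8, 200
true_deg = np.array([-20.0, 30.0])

def steering(theta_rad):
    # Array response of a half-wavelength-spaced linear array.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta_rad))

A = np.column_stack([steering(t) for t in np.deg2rad(true_deg)])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N

# MUSIC: eigendecompose the sample covariance; eigenvectors paired with
# the smallest eigenvalues span the noise subspace.
R = X @ X.conj().T / snapshots
_, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
En = vecs[:, : M - 2]              # noise subspace for 2 sources

# Pseudospectrum: large where steering vectors are ~orthogonal to En.
grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])
peaks = [i for i in range(1, len(grid) - 1)
         if pseudo[i] > pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
doa_deg = sorted(np.rad2deg(grid[i]) for i in sorted(peaks, key=lambda i: pseudo[i])[-2:])
```

On this toy data the two largest pseudospectrum peaks land within a fraction of a degree of the true directions; the same machinery applies to the frequency-estimation use of MUSIC mentioned in the abstract.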
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests
Renal parameter estimates in unrestrained dogs
NASA Technical Reports Server (NTRS)
Rader, R. D.; Stevens, C. M.
1974-01-01
A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and by the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied on unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, to avoidance reaction, and to unrestrained caging.
Rapid Compact Binary Coalescence Parameter Estimation
NASA Astrophysics Data System (ADS)
Pankow, Chris; Brady, Patrick; O'Shaughnessy, Richard; Ochsner, Evan; Qi, Hong
2016-03-01
The first observation run with second generation gravitational-wave observatories will conclude at the beginning of 2016. Given their unprecedented and growing sensitivity, the benefit of prompt and accurate estimation of the orientation and physical parameters of binary coalescences is obvious in its coupling to electromagnetic astrophysics and observations. Popular Bayesian schemes to measure properties of compact object binaries use Markovian sampling to compute the posterior. While very successful, in some cases, convergence is delayed until well after the electromagnetic fluence has subsided thus diminishing the potential science return. With this in mind, we have developed a scheme which is also Bayesian and simply parallelizable across all available computing resources, drastically decreasing convergence time to a few tens of minutes. In this talk, I will emphasize the complementary use of results from low latency gravitational-wave searches to improve computational efficiency and demonstrate the capabilities of our parameter estimation framework with a simulated set of binary compact object coalescences.
CosmoSIS: Modular cosmological parameter estimation
Zuntz, J.; Paterno, M.; Jennings, E.; Rudd, D.; Manzotti, A.; Dodelson, S.; Bridle, S.; Sehrish, S.; Kowalkowski, J.
2015-06-09
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Bayesian parameter estimation for effective field theories
NASA Astrophysics Data System (ADS)
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
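A toy version of the naturalness-prior idea can make it concrete: with Gaussian noise and a Gaussian prior of width abar on each low-energy constant, the posterior is available in closed form and the prior acts as a regularizer against overfitting. The expansion, noise level, and prior width below are assumptions for illustration, not the paper's model problems.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy EFT-like fit: y = a0 + a1*x + a2*x^2 with a "naturalness" prior
# a_i ~ N(0, abar^2). All coefficients and scales are illustrative.
x = np.linspace(0.0, 1.0, 20)
a_true = np.array([0.5, -1.2, 0.8])
sigma = 0.005                                 # assumed data noise level
y = np.polynomial.polynomial.polyval(x, a_true) + sigma * rng.standard_normal(x.size)

X = np.vander(x, 3, increasing=True)          # design matrix [1, x, x^2]
abar = 1.0                                    # assumed natural size of the LECs

# Conjugate Gaussian result: posterior is N(mean, cov) with
#   cov  = (X^T X / sigma^2 + I / abar^2)^(-1)
#   mean = cov @ X^T y / sigma^2
post_cov = np.linalg.inv(X.T @ X / sigma**2 + np.eye(3) / abar**2)
post_mean = post_cov @ X.T @ y / sigma**2
```

The prior term I/abar^2 shrinks the estimate toward natural-sized coefficients; as the data improve (smaller sigma or more points) it becomes negligible, which is the quantifiable sense in which the prior guides but does not bias the extraction.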
Quantum parameter estimation with optimal control
NASA Astrophysics Data System (ADS)
Liu, Jing; Yuan, Haidong
2017-07-01
A pivotal task in quantum metrology, and quantum parameter estimation in general, is to design schemes that achieve the highest precision with the given resources. Standard models of quantum metrology usually assume that the dynamics is fixed and that the highest precision is achieved by preparing the optimal probe states and performing optimal measurements. However, in many practical experimental settings, additional controls are usually available to alter the dynamics. Here we propose to use optimal control methods for further improvement of the precision limit of quantum parameter estimation. We show that, by exploiting the additional degree of freedom offered by the controls, a higher precision limit can be achieved. In particular, we show that the precision limit under the controlled schemes can go beyond the constraints imposed by the coherence time, in contrast with the standard scheme, where the precision limit is always bounded by the coherence time.
Estimating Grammar Parameters using Bounded Memory
2002-01-01
algorithm, called HOLA, for estimating the parameters of SCFGs that computes summary statistics for each string as it is observed and then discards...the string. The memory used by HOLA is bounded by the size of the grammar, not by the amount of training data. Empirical results show that HOLA ...of the grammar improves monotonically as more computation is allocated to learning. This paper introduces an algorithm called HOLA that satisfies
Estimated hydrogeological parameters by artificial neurons network
NASA Astrophysics Data System (ADS)
Lin, H.; Chen, C.; Tan, Y.; Ke, K.
2009-12-01
In recent years, many approaches have been developed that use an artificial neural network (ANN) model together with the Theis analytical solution to estimate the effective hydrological parameters of homogeneous and isotropic porous media, such as the approach of Lin and Chen [2006] (the ANN approach hereafter) and the PC-ANN approach [Samani et al., 2008]. These methods assume a full superposition of the type curve and the observed drawdown, and use the first time-drawdown datum as a match point to approximate the effective parameters. However, using the first, or early, time-drawdown data is not always appropriate for estimating the hydrological parameters, especially for heterogeneous and anisotropic aquifers. This paper therefore corrects the concept of the superimposed plot by modifying the ANN and PC-ANN approaches, in combination with the Papadopoulos analytical solution, to estimate the transmissivities and storage coefficient of anisotropic, heterogeneous aquifers. The ANN model is trained with 4000 training sets of the well function, and tested with 1000 and 300 sets of synthetic time-drawdown data generated from homogeneous and heterogeneous parameters, respectively. In-situ observations, the time-drawdown data at station Shi-Chou of the Chihuahua River alluvial fan, Taiwan, are further adopted to test the applicability and reliability of the proposed methods, and to compare them with the straight-line and type-curve methods. Results suggest that both modified methods perform better than the original ones. Using late-time drawdown to optimize the effective parameters is shown to be better than using early-time drawdown. Additionally, results indicate that the modified ANN approach is more precise than the modified PC-ANN approach, while the modified PC-ANN approach is approximately three times more efficient.
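The forward model these approaches invert is the Theis solution, s = Q/(4πT)·W(u) with u = r²S/(4Tt), where W is the well function. A minimal sketch, with a brute-force grid search standing in for the ANN inversion and with the pumping rate, well radius, and parameter ranges as illustrative assumptions:

```python
import numpy as np

EULER = 0.5772156649015329

def well_function(u, terms=30):
    # Convergent series: W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)
    w = -EULER - np.log(u)
    term = np.ones_like(u)
    for n in range(1, terms + 1):
        term = term * (-u) / n      # term_n = (-u)^n / n!
        w = w - term / n
    return w

def drawdown(t, T, S, Q=0.01, r=50.0):
    # Theis drawdown at radius r [m] and time t [s]; Q in m^3/s.
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * well_function(u)

# Synthetic "observed" late-time drawdowns from assumed true parameters.
t_obs = np.linspace(3600.0, 86400.0, 12)
s_obs = drawdown(t_obs, T=1e-3, S=2e-4)

# Grid search over (T, S) in place of the trained ANN inversion.
Ts = np.logspace(-4, -2, 41)
Ss = np.logspace(-5, -3, 41)
best = min((np.sum((drawdown(t_obs, T, S) - s_obs) ** 2), T, S)
           for T in Ts for S in Ss)
```

The recovered (T, S) lands on the grid point nearest the truth, which is the same inverse problem the ANN is trained to solve directly from time-drawdown curves.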
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice the G-optimal design cannot really be found with currently available computing equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
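The D-optimality criterion mentioned above is easy to evaluate for a toy trend model: a D-optimal design maximizes det(XᵀX), the determinant of the information matrix. The quadratic model and the two candidate designs below are illustrative assumptions, not the paper's kriging setup.

```python
import numpy as np

# D-criterion for a quadratic trend y = b0 + b1*x + b2*x^2 on [-1, 1]:
# det of the information matrix X^T X for a given set of design points.
def d_criterion(points):
    X = np.vander(np.asarray(points, dtype=float), 3, increasing=True)
    return np.linalg.det(X.T @ X)

clustered = [-0.2, -0.1, 0.0, 0.1, 0.2, 0.3]     # points bunched together
spread = [-1.0, -1.0, 0.0, 0.0, 1.0, 1.0]        # classic {-1, 0, 1} support

# Spreading replicates over the extremes and the center maximizes the
# determinant for the quadratic model, i.e. it is far more D-efficient.
```

For the spread design XᵀX = [[6,0,4],[0,4,0],[4,0,4]], so det = 32, while the clustered design's determinant is orders of magnitude smaller; this is the sense in which D-optimal designs for trend estimation push points apart rather than fill space.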
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
Parameter Estimation of a Spiking Silicon Neuron
Russell, Alexander; Mazurek, Kevin; Mihalaş, Stefan; Niebur, Ernst; Etienne-Cummings, Ralph
2012-01-01
Spiking neuron models are used in a multitude of tasks ranging from understanding neural behavior at its most basic level to neuroprosthetics. Parameter estimation of a single neuron model, such that the model’s output matches that of a biological neuron is an extremely important task. Hand tuning of parameters to obtain such behaviors is a difficult and time consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models) as fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron’s parameters. First, we show how a Maximum Likelihood method can be applied to a leaky integrate and fire silicon neuron with spike induced currents to fit the neuron’s output to desired spike times. We then show how a distance based method which approximates the negative log likelihood of the lognormal distribution can also be used to tune the neuron’s parameters. We conclude that the distance based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed. PMID:23852978
Online Dynamic Parameter Estimation of Synchronous Machines
NASA Astrophysics Data System (ADS)
West, Michael R.
Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine must be removed from service. Characterizing a machine offline can result in economic losses due to downtime, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and compares the IEEE 115 standard for offline characterization with an iterative least-squares approximation approach implemented on a 20 h.p. synchronous machine. This least-squares method of online parameter estimation shows encouraging results for steady-state parameters in comparison with those obtained through the IEEE 115 standard.
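The "input and response data combined with a numerical algorithm" step can be sketched with recursive least squares (RLS), a standard online form of least-squares estimation. The two-parameter regression model, noise level, and forgetting factor below are illustrative assumptions, not the thesis' machine model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Recursive least squares: identify theta in y_k = phi_k^T theta + noise
# one sample at a time, without re-solving the whole batch problem.
theta_true = np.array([1.5, -0.7])     # "machine" parameters to recover
lam = 0.99                             # forgetting factor (tracks slow drift)
theta = np.zeros(2)                    # running estimate
P = 1e3 * np.eye(2)                    # large initial covariance = weak prior

for _ in range(500):
    phi = rng.standard_normal(2)                 # regressor from measured data
    y = phi @ theta_true + 0.05 * rng.standard_normal()
    k = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + k * (y - phi @ theta)        # correct by prediction error
    P = (P - np.outer(k, phi) @ P) / lam         # covariance update
```

Each update costs a few vector operations, which is what makes this family of methods usable while the machine stays in service; the forgetting factor lets the estimate track parameters that drift during operation.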
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during experimental data acquisition.
Parameter estimate of signal transduction pathways
Arisi, Ivan; Cattaneo, Antonino; Rosato, Vittorio
2006-01-01
Background The "inverse" problem is related to the determination of unknown causes on the basis of observation of their effects. This is the opposite of the corresponding "direct" problem, which relates to the prediction of the effects generated by a complete description of some agencies. The solution of an inverse problem entails the construction of a mathematical model and starts from a number of experimental data. In this respect, inverse problems are often ill-conditioned, as the amount of experimental data available is often insufficient to unambiguously solve the mathematical model. Several approaches to solving inverse problems are possible, both computational and experimental, some of which are mentioned in this article. In this work, we describe in detail the attempt to solve an inverse problem which arose in the study of an intracellular signaling pathway. Results Using a genetic algorithm to find the sub-optimal solution to the optimization problem, we have estimated a set of unknown parameters describing a kinetic model of a signaling pathway in the neuronal cell. The model is composed of mass-action ordinary differential equations, where the kinetic parameters describe protein-protein interactions, protein synthesis and degradation. The algorithm has been implemented on a parallel platform. Several potential solutions of the problem have been computed, each solution being a set of model parameters. A sub-set of parameters has been selected on the basis of their small coefficient of variation across the ensemble of solutions. Conclusion Despite the lack of sufficiently reliable and homogeneous experimental data, the genetic algorithm approach has allowed us to estimate the approximate values of a number of model parameters in a kinetic model of a signaling pathway; these parameters have been assessed to be relevant for the reproduction of the available experimental data. PMID:17118160
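A genetic-algorithm fit of kinetic parameters can be illustrated on a much smaller problem than the paper's pathway model: estimating the two rates of a sequential first-order reaction A → B → ∅ from noisy samples of B(t). The rates, noise level, and GA settings below are assumptions for the sketch, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Closed-form solution of dA/dt = -k1*A, dB/dt = k1*A - k2*B with A(0)=1,
# B(0)=0, standing in for numerically integrated mass-action ODEs.
k_true = np.array([1.0, 0.3])
t = np.linspace(0.1, 10.0, 25)

def model_B(k, t):
    k1, k2 = k
    return k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

data = model_B(k_true, t) + 0.005 * rng.standard_normal(t.size)

def fitness(k):
    return -np.sum((model_B(k, t) - data) ** 2)   # higher is better

# Simple GA: elitist selection, crossover by averaging, Gaussian mutation.
pop = rng.uniform(0.05, 2.0, size=(60, 2))
for _ in range(80):
    scores = np.array([fitness(k) for k in pop])
    elite = pop[np.argsort(scores)[-20:]]                   # keep best 20
    parents = elite[rng.integers(0, 20, size=(60, 2))]      # random parent pairs
    children = parents.mean(axis=1)                         # crossover
    children += 0.05 * rng.standard_normal(children.shape)  # mutation
    pop = np.clip(children, 1e-3, None)                     # keep rates positive

k_est = pop[np.argmax([fitness(k) for k in pop])]
```

As in the paper, nothing about the search requires gradients of the ODE solution, which is why GA-style methods suit kinetic models where derivatives are awkward; repeated runs from different seeds give the ensemble of solutions over which the coefficient of variation of each parameter can be assessed.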
Optimal linear estimation of binary star parameters
NASA Astrophysics Data System (ADS)
Burke, Daniel; Devaney, Nicholas; Gladysz, Szymon; Barrett, Harrison H.; Whitaker, Meredith K.; Caucci, Luca
2008-07-01
We propose a new post-processing technique for the detection of faint companions and the estimation of their parameters from adaptive optics (AO) observations. We apply the optimal linear detector, which is the Hotelling observer, to perform detection, astrometry and photometry on real and simulated data. The real data were obtained from the AO system on the 3 m Lick telescope. The Hotelling detector, which is a prewhitening matched filter, calculates the Hotelling test statistic, which is then compared to a threshold. If the test statistic is greater than the threshold, the algorithm decides that a companion is present. This decision is the main task performed by the Hotelling observer. After a detection is made, the location and intensity of the companion that maximize this test statistic are taken as the estimated values. We compare the Hotelling approach with current detection algorithms widely used in astronomy. We discuss the use of the estimation receiver operating characteristic (EROC) curve in quantifying the performance of the algorithm with no prior estimate of the companion's location or intensity. The robustness of this technique to errors in point spread function (PSF) estimation is also investigated.
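The prewhitening matched filter at the heart of the Hotelling observer reduces, for a known signal s and noise covariance K, to thresholding t(x) = sᵀK⁻¹x. A minimal one-dimensional sketch, with an assumed Gaussian PSF and correlated-noise model rather than real AO data:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D cut through an image: n pixels, a Gaussian "companion" signal s,
# and spatially correlated Gaussian noise with covariance K (assumed).
n = 64
s = np.exp(-0.5 * ((np.arange(n) - 40) / 2.0) ** 2)       # companion PSF
K = 0.04 * np.eye(n) + 0.02 * np.fromfunction(
    lambda i, j: np.exp(-np.abs(i - j) / 5.0), (n, n))    # correlated noise

# Hotelling template: prewhiten the matched filter, w = K^{-1} s.
w = np.linalg.solve(K, s)

def hotelling_stat(x):
    # Compare against a threshold to decide companion present / absent.
    return w @ x

# One noise realization, with and without a faint companion added.
L = np.linalg.cholesky(K)
noise_only = L @ rng.standard_normal(n)
with_companion = noise_only + 0.8 * s
```

Because K is positive definite, adding the companion raises the statistic by 0.8·sᵀK⁻¹s > 0, so the statistic separates the two hypotheses; scanning the assumed companion position and intensity for the maximum statistic is the astrometry/photometry step the abstract describes.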
Parameter estimation in tree graph metabolic networks
Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D.; Groenenboom, Marian; Molenaar, Jaap J.
2016-01-01
We study the glycosylation processes that convert initially toxic substrates into nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes catalyze the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim to reduce this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimating time-varying kinetic rates, with three favorable properties: first, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Second, it is relatively fast compared to the usually applied methods that estimate the model derivatives together with the network parameters. Third, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings. PMID:27688960
Toward unbiased estimations of the statefinder parameters
NASA Astrophysics Data System (ADS)
Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando
2017-09-01
With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and biased, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and make standard cosmography unsuitable for future, wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
Uncertainty relation based on unbiased parameter estimations
NASA Astrophysics Data System (ADS)
Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing
2017-02-01
Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information on the second measurement of an incompatible observable. In terms of Fisher information, we propose a kind of information comparison measure and explore various trade-offs between information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements instead of a constraint on extracting complete information about incompatible observables.
Parameter estimation for lithium ion batteries
NASA Astrophysics Data System (ADS)
Santhanagopalan, Shriram
With demand for lithium-based batteries increasing at a rate of about 7% per year, the effort put into improving the performance of these batteries, from both experimental and theoretical perspectives, is growing. A number of mathematical models exist, ranging from simple empirical models to complicated physics-based models, to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best physics-based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also treated as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of
Parameter Estimation for Viscoplastic Material Modeling
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.
1997-01-01
A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.
Compressing measurements in quantum dynamic parameter estimation
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Cooper, Alexandre; Cappellaro, Paola
2013-12-01
We present methods that can provide an exponential savings in the resources required to perform dynamic parameter estimation using quantum systems. The key idea is to merge classical compressive sensing techniques with quantum control methods to significantly reduce the number of signal coefficients that are required for reconstruction of time-varying parameters with high fidelity. We show that incoherent measurement bases and, more generally, suitable random measurement matrices can be created by performing simple control sequences on the quantum system. Random measurement matrices satisfying the restricted isometry property can be used efficiently to reconstruct signals that are sparse in any basis. Because many physical processes are approximately sparse in some basis, these methods can benefit a variety of applications such as quantum sensing and magnetometry with nitrogen-vacancy centers.
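The compressive-sensing step this abstract relies on can be illustrated with a minimal sketch: a random Gaussian measurement matrix (a stand-in for the control-generated measurement matrices described above) and orthogonal matching pursuit recovering a sparse coefficient vector from far fewer measurements than its length. The sizes, seed, and recovery routine are illustrative assumptions, not the authors' quantum control implementation.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4                           # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                 # m << n compressed measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))          # near zero in the noiseless case
```

With m well above the k log(n/k) threshold, recovery of the sparse signal is essentially exact despite using only a quarter of the Nyquist-rate samples.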
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We derive closed-form equations that describe how the covariance varies with different model parameters, and we estimate the model parameters by matching the closed-form covariance to the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters; the modified model uses three noise parameters to better capture the stochastic behavior of needle insertion, reducing the covariance error from 26.1% to 6.55%. PMID:21643451
Parameter estimation techniques for LTP system identification
NASA Astrophysics Data System (ADS)
Nofrarias Serra, Miquel
LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the milliHertz band. The mission is challenging not only in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, in contrast to on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods in a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy of each, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model, and we report on the latest results of these mock data exercises.
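The kind of mock-data exercise described above can be sketched in miniature: inject known stimuli through a linear model with hypothetical "true" parameters, add noise, and cross-check that a linear least squares estimator recovers them. The two-parameter model and noise level here are assumptions for illustration, not the LTP system model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock-data exercise: recover two gain parameters of a linear system
# y = g1*u1 + g2*u2 + noise from injected stimulus signals.
g_true = np.array([0.8, -1.3])       # hypothetical "true" parameters
n = 500
U = rng.standard_normal((n, 2))      # two injected stimulus channels
y = U @ g_true + 0.05 * rng.standard_normal(n)

# Linear least squares: g_hat = argmin ||U g - y||^2
g_hat, *_ = np.linalg.lstsq(U, y, rcond=None)
print(g_hat)                         # close to g_true
```

In an actual exercise one would repeat this with a non-linear estimator and an MCMC sampler on the same mock data and compare the recovered values and their uncertainties.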
Parameter Estimation of Spacecraft Fuel Slosh Model
NASA Technical Reports Server (NTRS)
Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles
2004-01-01
Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Purely analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs, and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allows for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
Viscoelastic parameter estimation based on spectral analysis.
Eskandari, H; Salcudean, S E; Rohling, R
2008-07-01
This paper introduces a new technique for the robust estimation of relaxation-time distribution in tissue. The main novelty is in the use of the phase of transfer functions calculated from a time series of strain measurements at multiple locations. Computer simulations with simulated measurement noise demonstrate the feasibility of the approach. An experimental apparatus and software were developed to confirm the simulations. The setup can be used either as a rheometer to characterize the overall mechanical properties of a material or as a vibro-elastography imaging device using an ultrasound system. The algorithms were tested on tissue-mimicking phantoms specifically developed to exhibit contrast in elasticity and relaxation time. The phantoms were constructed using a combination of gelatin and a polyvinyl alcohol sponge to produce the desired viscoelastic properties. The tissue parameters were estimated, and the elasticity and relaxation time of the materials were used as complementary features to distinguish different materials. The estimation results are consistent with the rheometry, verifying that the relaxation time can be used as a complementary feature to elasticity to delineate the mechanical properties of the phantom.
Poincaré dodecahedral space parameter estimates
NASA Astrophysics Data System (ADS)
Roukema, B. F.; Buliński, Z.; Gaudin, N. E.
2008-12-01
Context: Several studies have proposed that the preferred model of the comoving spatial 3-hypersurface of the Universe may be a Poincaré dodecahedral space (PDS) rather than a simply connected, infinite, flat space. Aims: Here, we aim to improve the surface of last scattering (SLS) optimal cross-correlation method and apply this to observational data and simulations. Methods: For a given “generalised” PDS orientation, we analytically derive the formulae required to exclude points on the sky that cannot be members of close SLS-SLS cross-pairs. These enable more efficient pair selection without sacrificing the uniformity of the underlying selection process. For a sufficiently small matched circle size α and a fixed number of randomly placed points selected for a cross-correlation estimate, the calculation time is decreased and the number of pairs per separation bin is increased. Using this faster method, and including the smallest separation bin when testing correlations, (i) we recalculate Monte Carlo Markov Chains (MCMC) on the five-year Wilkinson Microwave Anisotropy Probe (WMAP) data; and (ii) we seek PDS solutions in a small number of Gaussian random fluctuation (GRF) simulations in order to further explore the statistical significance of the PDS hypothesis. Results: For 5° < α < 60°, a calculation speed-up of 3-10 is obtained. (i) The best estimates of the PDS parameters for the five-year WMAP data are similar to those for the three-year data; (ii) comparison of the optimal solutions found by the MCMC chains in the observational map to those found in the simulated maps yields a slightly stronger rejection of the simply connected model using α rather than the twist angle φ. The best estimate of α implies that, given a large-scale auto-correlation as weak as that observed, the PDS-like cross-correlation signal in the WMAP data is expected with a probability of less than about 10%. The expected distribution of φ from the GRF simulations is not
NASA Technical Reports Server (NTRS)
Sunahara, Y.; Kojima, F.
1987-01-01
The purpose of this paper is to establish a method for identifying unknown parameters involved in the boundary state of a class of diffusion systems under noisy observations. A mathematical model of the system dynamics is given by a two-dimensional diffusion equation. Noisy observations are made by sensors allocated on the system boundary. Starting with the mathematical model mentioned above, an online parameter estimation algorithm is proposed within the framework of the maximum likelihood estimation. Existence of the optimal solution and related necessary conditions are discussed. By solving a local variation of the cost functional with respect to the perturbation of parameters, the estimation mechanism is proposed in a form of recursive computations. Finally, the feasibility of the estimator proposed here is demonstrated through results of digital simulation experiments.
Noncoherent sampling technique for communications parameter estimations
NASA Technical Reports Server (NTRS)
Su, Y. T.; Choi, H. J.
1985-01-01
This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and ensuing software manipulations are analyzed and related numerical results are presented also.
Recursive least-squares learning algorithms for neural networks
Lewis, P.S.; Hwang, Jenq-Neng (Dept. of Electrical Engineering)
1990-01-01
This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is on the order of O(N²), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
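The core linear RLS recursion that the paper extends to linearized perceptron training can be sketched as follows. The dimensions, initial covariance, and noise level are illustrative assumptions; the paper applies this recursion to errors linearized about the network parameters rather than to a directly linear model.

```python
import numpy as np

def rls_update(theta, P, x, d, lam=1.0):
    """One recursive least squares step for a linear model d ≈ x @ theta."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - x @ theta                # a priori prediction error
    theta = theta + k * e            # parameter update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation matrix update
    return theta, P

rng = np.random.default_rng(2)
w_true = np.array([0.5, -2.0, 1.0])  # hypothetical "true" weights
theta = np.zeros(3)
P = 1e3 * np.eye(3)                  # large initial covariance = weak prior
for _ in range(200):
    x = rng.standard_normal(3)
    d = x @ w_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, x, d)
print(theta)                         # converges toward w_true
```

The P matrix plays the role of the N × N inverse Hessian estimate mentioned in the abstract; restricting it to block-diagonal form gives the cheaper partitioned variants.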
Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters
1975-12-31
AD-A021 208. Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters. Ramesh Chandra, Union College. Report AES-7514, 1976.
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters, and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first and second inputs and determines a motor management strategy for the electric motor based thereon.
Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.
ERIC Educational Resources Information Center
Lord, Frederic M.
There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…
Parameter estimation with Sandage-Loeb test
Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin E-mail: jfzhang@mail.neu.edu.cn
2014-12-01
The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range 2 ≲ z ≲ 5 by detecting the redshift drift in the spectra of the Lyman-α forest of distant quasars. We discuss the impact of future SL test data on parameter estimation for the ΛCDM, wCDM, and w_0w_aCDM models. To avoid potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by current observations as the fiducial model to produce 30 mock SL test data points. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking existing parameter degeneracies. We show that the strong degeneracy between Ω_m and H_0 in all three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of the SL test could improve the constraints on Ω_m and H_0 by more than 60% for all three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of the SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w_0 and w_a by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that a 30-yr observation of the SL test would help improve the measurement precision of Ω_m, H_0, and w_a by more than 70%, 20%, and 60%, respectively, for the w_0w_aCDM model.
Estimating Tsunami Runup with Fault Plane Parameters
NASA Astrophysics Data System (ADS)
Sepulveda, I.; Liu, P. L. F.
2016-12-01
The forecasting of tsunami runup has often been done by solving numerical models. Their execution times, however, make them unsuitable for warning purposes. We offer an alternative method that provides an analytical relationship between the runup height, the fault plane parameters, and the characteristics of the coastal bathymetry. The method uses the model of Okada (1985) to estimate the coseismic deformation and the corresponding sea surface displacement (η(x,0)). Once the tsunami waves are generated, the Carrier & Greenspan (1958) solution (C&G) is adopted to yield analytical expressions for the shoreline elevation and velocity. Two types of problems are investigated. In the first, the bathymetry is modeled as a constant slope connected to a constant-depth region, where a seismic event occurs. This is a boundary value problem (BVP). In the second, the bathymetry is further simplified as a constant slope, on which a seismic event occurs. This is an initial value problem (IVP). Both problems are depicted in Figure 1. We derive runup solutions in terms of the fault parameters. The earthquake is associated with vertical coseismic seafloor displacements by using Okada's elastic model. In addition to the simplifications considered in Okada's model, we further assume (1) a strike parallel to the shoreline, (2) a very long rupture area, and (3) a fast earthquake, so that the surface elevation mimics the seafloor displacements. The tsunami origin is then modeled in terms of the fault depth (d), fault width (W), fault slip (s), and dip angle (δ). We describe the solution for the BVP. Madsen & Schaeffer (2010) utilized C&G to derive solutions for the shoreline elevation of sinusoidal waves imposed at the offshore boundary. A linear superposition of this solution represents any arbitrary incident wave. Furthermore, we can prescribe the boundary condition at the toe of the sloping beach by adopting the linear shallow water equations in the constant-depth area. By means of a dimensional
Estimation of high altitude Martian dust parameters
NASA Astrophysics Data System (ADS)
Pabari, Jayesh; Bhalodi, Pinali
2016-07-01
Dust devils are known to occur near the Martian surface, mostly during the middle of the Southern hemisphere summer, and they play a vital role in deciding the background dust opacity in the atmosphere. A second source of high altitude Martian dust could be secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, which can easily escape the moons. It is expected that the escaping dust forms dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a kind of feedback mechanism existing between the rings or tori and the sources, the dust rings or tori can be sustained over a period of time. Recently, very high altitude dust at about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it is a mystery how dust has reached such high altitudes. Estimation of dust parameters beforehand is necessary to design an instrument for the detection of high altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate responsible primarily for the formation of a dust ring or torus, the lifetime of dust particles around Mars, and the dust number density, as well as the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars for solving the mystery. Further work is underway.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
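The contrast the authors draw can be seen in a toy analogue, assuming a simple binomial incidence model rather than an actual decompression sickness model: maximum likelihood yields a single point estimate, while the Bayesian posterior yields a credible interval for the parameter.

```python
import numpy as np

# Toy analogue: estimate an event probability p (e.g. an incidence rate)
# from k occurrences in n trials -- standing in for a model parameter.
k, n = 3, 50

# Maximum likelihood: a fixed point estimate, interpreted via repeated trials.
p_mle = k / n

# Bayesian: a uniform Beta(1,1) prior gives a Beta(1+k, 1+n-k) posterior over p.
a, b = 1 + k, 1 + n - k
grid = np.linspace(0.0, 1.0, 100_001)
pdf = grid**(a - 1) * (1.0 - grid)**(b - 1)   # unnormalized posterior density
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
lo = grid[np.searchsorted(cdf, 0.025)]        # 95% credible interval bounds
hi = grid[np.searchsorted(cdf, 0.975)]
print(p_mle, (lo, hi))
```

The credible interval directly answers "with what probability does p lie in this range", which is the kind of statement the abstract argues is more natural for multi-peaked decompression sickness likelihoods.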
ERIC Educational Resources Information Center
Buhr, Dianne C.; Algina, James
The focus of this study is on the estimation procedures implemented in BILOG, a computer program. One purpose is to compare the item parameter estimates produced by various procedures available in BILOG. Four different models are used: the one, two, and three parameter model and a three parameter model with common guessing parameters. The results…
Parameter Estimation for the Four Parameter Beta Distribution.
1983-12-01
Recursion, Language, and Starlings
ERIC Educational Resources Information Center
Corballis, Michael C.
2007-01-01
It has been claimed that recursion is one of the properties that distinguishes human language from any other form of animal communication. Contrary to this claim, a recent study purports to demonstrate center-embedded recursion in starlings. I show that the performance of the birds in this study can be explained by a counting strategy, without any…
Maximum likelihood estimates of polar motion parameters
NASA Technical Reports Server (NTRS)
Wilson, Clark R.; Vicente, R. O.
1990-01-01
Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Updated Item Parameter Estimates Using Sparse CAT Data.
ERIC Educational Resources Information Center
Smith, Robert L.; Rizavi, Saba; Paez, Roxanna; Rotou, Ourania
A study was conducted to investigate whether augmenting the calibration of items using computerized adaptive test (CAT) data matrices produced estimates that were unbiased and improved the stability of existing item parameter estimates. Item parameter estimates from four pools of items constructed for operational use were used in the study to…
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of a lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can provide high accuracy and suitability for parameter identification without using the open circuit voltage.
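The ARX identification step can be sketched on a hypothetical first-order model. For brevity this sketch solves one batch least squares problem rather than the paper's bias-corrected recursive update, and the coefficients, noise level, and PRBS-like excitation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical first-order ARX model: v[k] = a*v[k-1] + b*i[k-1] + e[k]
a_true, b_true = 0.95, 0.02              # invented battery-like dynamics
N = 1000
i_in = rng.choice([-1.0, 1.0], size=N)   # PRBS-like excitation current
v = np.zeros(N)
for t in range(1, N):
    v[t] = a_true * v[t-1] + b_true * i_in[t-1] + 1e-4 * rng.standard_normal()

# Stack the regressors and solve in one batch; the paper instead updates
# these estimates recursively with a noise bias-correction term.
Phi = np.column_stack([v[:-1], i_in[:-1]])
theta, *_ = np.linalg.lstsq(Phi, v[1:], rcond=None)
print(theta)                             # close to [a_true, b_true]
```

The recursive variant would apply one RLS step per new voltage sample, which is what makes real-time parameter tracking under temperature and SoC drift possible.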
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
A regression analysis on tabular aerodynamic data provided a representative aerodynamic model for coefficient estimation. It also reduced the storage requirements for the "normal" model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed. The routines were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.
Noniterative estimation of a nonlinear parameter
NASA Technical Reports Server (NTRS)
Bergstroem, A.
1973-01-01
An algorithm is described which solves for the parameters X = (x1, x2, ..., xm) and p in an approximation problem AX ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
A Comparative Study of Distribution System Parameter Estimation Methods
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
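The state-vector augmentation idea can be sketched in scalar form, assuming a hypothetical line parameter and measurement model rather than the paper's distribution system model: the unknown parameter is treated as a (nearly constant) random-walk state and estimated by a Kalman filter from a stream of measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Augmented-state idea in miniature: treat an unknown parameter r as a
# random-walk state and let a scalar Kalman filter estimate it from
# measurements v[k] = r * i[k] + noise (all values are illustrative).
r_true = 0.42                  # hypothetical "true" parameter value
r_hat, P = 0.0, 1.0            # initial estimate and its variance
q, sigma2 = 1e-8, 0.01**2      # process and measurement noise variances
for _ in range(500):
    i = rng.uniform(0.5, 1.5)              # known excitation (measurement Jacobian H = i)
    v = r_true * i + 0.01 * rng.standard_normal()
    P += q                                 # predict: parameter is nearly constant
    K = P * i / (i * P * i + sigma2)       # Kalman gain
    r_hat += K * (v - r_hat * i)           # update with the measurement residual
    P *= (1.0 - K * i)                     # posterior variance
print(r_hat)                               # converges toward r_true
```

In the full method, the parameter state is appended to the network state vector so that successive measurement snapshots refine both simultaneously; the small process noise q is what lets the estimate track slow parameter drift.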
Fast estimation of space-robots inertia parameters: A modular mathematical formulation
NASA Astrophysics Data System (ADS)
Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher
2016-10-01
This work proposes a new technique that considerably improves the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal" or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations associated with the IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited to "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center of mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.
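A model that is linear in the unknown inertia-like terms is exactly what recursive least squares (RLS) needs. A minimal sketch, assuming a toy one-axis force model F = m·a + c·v that is linear in the unknown pair (m, c) — a stand-in for, not a reproduction of, the paper's modular matrix formulation:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step: regressor phi, scalar measurement y,
    forgetting factor lam (1.0 = ordinary RLS)."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain
    theta = theta + k * (y - phi @ theta)    # innovation correction
    P = (P - np.outer(k, phi) @ P) / lam     # covariance update
    return theta, P

rng = np.random.default_rng(2)
m_true, c_true = 12.0, 3.5                   # assumed "mass" and "damping"
theta = np.zeros(2)                          # initial guess for (m, c)
P = 1e6 * np.eye(2)                          # large P = uninformative prior
for _ in range(100):
    a, v = rng.normal(), rng.normal()        # excitation: acceleration, velocity
    F = m_true * a + c_true * v              # noise-free force measurement
    theta, P = rls_update(theta, P, np.array([a, v]), F)
```

With persistent excitation and noise-free data, the recursion converges to the true parameter pair after a handful of samples.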
Muscle parameters estimation based on biplanar radiography.
Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W
2016-11-01
The evaluation of muscle and joint forces in vivo is still a challenge. Musculoskeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculoskeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of the bones and skin envelope was reconstructed from biplanar radiographs. Then, 2200 corresponding control points were identified between a reference model and the subject-specific model. Finally, the shapes of 21 lower limb muscles were estimated using a non-linear transformation between the control points in order to fit the muscle shapes of the reference model to the subject-specific model. Twelve musculoskeletal models were reconstructed and compared to their references. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, the method provided an accurate estimation of the muscle lines of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arms were also well estimated, with SD lower than 15% for most muscles, which was significantly better than the scaled generic model. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography.
Bias in parameter estimation of form errors
NASA Astrophysics Data System (ADS)
Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min
2014-09-01
The surface form qualities of precision components are critical to their functionalities. In precision instruments algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces will be over-weighted, making the fitted results biased and unstable. In this paper the orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and orthogonal assessment are analytically calculated and it is represented as functions of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data is weighted by the 3D area of the Voronoi cell around the projection point on the fitted surface. Finally numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
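The slope-dependent bias between vertical (z-direction) and orthogonal (normal-direction) assessment can be illustrated on a line profile: a form deviation of size e measured along the normal appears as e·sqrt(1 + k²) when measured vertically, where k is the local slope. A small sketch under that simplification (a 2D line rather than the paper's curved surfaces):

```python
import numpy as np

k, e = 2.0, 0.01                 # slope of the ideal profile y = k*x; normal-direction deviation
s = np.hypot(1.0, k)             # sqrt(1 + k^2)
n = np.array([-k, 1.0]) / s      # unit normal of the line

# Take points on the ideal line and displace them by +/- e along the normal.
x0 = np.linspace(0.0, 1.0, 11)
pts = np.stack([x0, k * x0], axis=1)
pts += np.outer(np.resize([e, -e], x0.size), n)

vertical = pts[:, 1] - k * pts[:, 0]   # residual assessed in z only
orthogonal = vertical / s              # residual along the normal (for a line)
ratio = np.max(np.abs(vertical)) / np.max(np.abs(orthogonal))
# ratio equals sqrt(1 + k^2): vertical assessment over-weights steep regions
```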
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimations schemes will be contrasted using the NASA Mini-Mast as the focus structure.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The fifth monthly progress report includes corrections and additions to the previously submitted reports. The addition of the SRB propellant thickness as a state variable is included with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.
Recursive Deadbeat Controller Design
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1997-01-01
This paper presents a recursive algorithm for a deadbeat predictive controller design. The method combines together the concepts of system identification and deadbeat controller designs. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The formulation thus derived satisfies simultaneously system identification and deadbeat controller design requirements. As soon as the coefficient matrices are identified satisfying the output prediction equation, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively just as any typical recursive system identification techniques.
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1997-01-01
An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.
Online parameter estimation for surgical needle steering model.
Yan, Kai Guo; Podder, Tarun; Xiao, Di; Yu, Yan; Liu, Tien-I; Ling, Keck Voon; Ng, Wan Sing
2006-01-01
Estimation of the system parameters, given noisy input/output data, is a major field in control and signal processing. Many different estimation methods have been proposed in recent years. Among various methods, Extended Kalman Filtering (EKF) is very useful for estimating the parameters of a nonlinear and time-varying system. Moreover, it can remove the effects of noises to achieve significantly improved results. Our task here is to estimate the coefficients in a spring-beam-damper needle steering model. This kind of spring-damper model has been adopted by many researchers in studying the tissue deformation. One difficulty in using such model is to estimate the spring and damper coefficients. Here, we proposed an online parameter estimator using EKF to solve this problem. The detailed design is presented in this paper. Computer simulations and physical experiments have revealed that the simulator can estimate the parameters accurately with fast convergent speed and improve the model efficacy.
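The EKF parameter-estimation idea can be sketched on a scalar stand-in: append the unknown coefficient to the state vector, so the now bilinear dynamics are handled by relinearizing at each step. A minimal sketch assuming a first-order system x_{k+1} = a·x_k + w with unknown a — a deliberately simplified surrogate, not the paper's spring-beam-damper needle model:

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, q, r, N = 0.8, 0.25, 0.01, 4000

# Simulate the scalar system and noisy measurements z = x + v.
x, zs = 1.0, []
for _ in range(N):
    x = a_true * x + rng.normal(scale=np.sqrt(q))
    zs.append(x + rng.normal(scale=np.sqrt(r)))

# EKF on the augmented state s = [x, a]; a modelled as a slow random walk.
m = np.array([0.0, 0.0])                   # initial guesses for x and a
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])
H = np.array([1.0, 0.0])
for z in zs:
    F = np.array([[m[1], m[0]],            # Jacobian of f(x, a) = a*x
                  [0.0,  1.0]])
    m = np.array([m[1] * m[0], m[1]])      # nonlinear predict
    P = F @ P @ F.T + Q
    S = H @ P @ H + r                      # scalar innovation variance
    K = P @ H / S
    m = m + K * (z - H @ m)
    P = P - np.outer(K, H) @ P

a_hat = m[1]
```

The small random-walk noise on the parameter keeps its covariance from collapsing, which is what lets the same filter track a coefficient that drifts over time.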
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Applying Recursive EM to Scene Segmentation
NASA Astrophysics Data System (ADS)
Bachmann, Alexander
In this paper a novel approach for the interdependent tasks of multiple object tracking and scene segmentation is presented. The method partitions a stereo image sequence of a dynamic 3-dimensional (3D) scene into its most prominent moving groups with similar 3D motion. The unknown set of motion parameters is recursively estimated using an iterated extended Kalman filter (IEKF) which is derived from the expectation-maximization (EM) algorithm. The EM formulation is used to incorporate a probabilistic data association measure into the tracking process. In a subsequent segregation step, each image point is assigned to the object hypothesis with maximum a posteriori (MAP) probability. Within the association process, which is implemented as a labeling problem, a Markov random field (MRF) is used to express our expectations on the spatial continuity of objects.
Gravitational parameter estimation in a waveguide
NASA Astrophysics Data System (ADS)
Doukas, Jason; Westwood, Luke; Faccio, Daniele; Di Falco, Andrea; Fuentes, Ivette
2014-07-01
We investigate the intrinsic uncertainty in the accuracy to which a static spacetime can be measured from scattering experiments. In particular, we focus on the Schwarzschild black hole and a spatially kinked metric that has some mathematical resemblance to an expanding universe. Under selected conditions we find that the scattering problem can be framed in terms of a lossy bosonic channel, which allows us to identify shot-noise scaling as the ultimate scaling limit to the estimation of the spacetimes. Fock state probes with particle counting measurements attain this ultimate scaling limit and the scaling constants for each spacetime are computed and compared to the practical strategies of coherent state probes with heterodyne and homodyne measurements. A promising avenue to analyze the quantum limit of the analogue spacetimes in optical waveguides is suggested.
FUZZY SUPERNOVA TEMPLATES. II. PARAMETER ESTIMATION
Rodney, Steven A.; Tonry, John L. E-mail: jt@ifa.hawaii.ed
2010-05-20
Wide-field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper we introduced the Supernova Ontology with Fuzzy Templates (SOFT) method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the Sloan Digital Sky Survey (SDSS) and Supernova Legacy Survey (SNLS) as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing an rms scatter in the residuals of rms_z = 0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored ΛCDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the rms scatter in Hubble diagram residuals is 0.18 mag for the SDSS data and 0.28 mag for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mag for the combined SDSS and SNLS data set. Using Monte Carlo simulations, we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Parameter Estimates in Differential Equation Models for Chemical Kinetics
ERIC Educational Resources Information Center
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
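The two extremes the article interpolates between can be sketched numerically: the design-weighted sum of stratum means (unbiased under heterogeneity) versus precision weighting (efficient under homogeneity), with a random-effects shrinkage estimator in between. The numbers are illustrative, and the tuning constant below is fixed by hand rather than chosen by the article's design-based MSE criterion:

```python
import numpy as np

# Per-stratum sample means, sampling variances of those means, and known
# population weights (illustrative values).
ybar = np.array([3.0, 5.0, 4.0])
var = np.array([0.20, 0.05, 0.10])
w = np.array([0.5, 0.3, 0.2])

design = w @ ybar                               # weighted sum of stratum means
prec = (ybar / var).sum() / (1 / var).sum()     # precision-weighted (homogeneity)

# Shrink each stratum mean toward the precision-weighted pooled value;
# tau2 plays the role of the between-stratum (random-effects) variance.
tau2 = 0.5
shrunk = (ybar / var + prec / tau2) / (1 / var + 1 / tau2)
adaptive = w @ shrunk                           # adaptive compromise estimator
```

As tau2 grows the adaptive estimator approaches the design-weighted sum; as it shrinks to zero it approaches the precision-weighted pooled mean.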
Improving the precision of dynamic forest parameter estimates using Landsat
Evan B. Brooks; John W. Coulston; Randolph H. Wynne; Valerie A. Thomas
2016-01-01
The use of satellite-derived classification maps to improve post-stratified forest parameter estimates is well established. When reducing the variance of post-stratification estimates for forest change parameters such as forest growth, it is logical to use a change-related strata map. At the stand level, a time series of Landsat images is…
Attitudinal Data: Dimensionality and Start Values for Estimating Item Parameters.
ERIC Educational Resources Information Center
Nandakumar, Ratna; Hotchkiss, Larry; Roberts, James S.
The purpose of this study was to assess the dimensionality of attitudinal data arising from unfolding models for discrete data and to compute rough estimates of item and individual parameters for use as starting values in other estimation procedures. One- and two-dimensional simulated test data were analyzed in this study. Results of limited…
Equating Parameter Estimates from the Generalized Graded Unfolding Model.
ERIC Educational Resources Information Center
Roberts, James S.
Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…
SFM signal parameter estimation based on an enhanced DSFMT algorithm
NASA Astrophysics Data System (ADS)
Chen, Lei; Li, Xingguang; Chen, Dianren
2017-01-01
This paper proposes an SFM signal parameter estimation method based on the enhanced DSFMT (EDSFMT) algorithm and provides the derivation of the transformation formulas. Analysis and simulations were performed, demonstrating the method's capability for arbitrary multi-component SFM signal parameter estimation.
Parameter and state estimator for state space models.
Ding, Ruifeng; Zhuang, Linfan
2014-01-01
This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. Furthermore, the system states are computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example is provided to show that the proposed algorithm is effective.
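For a first-order system with direct state output the elimination step reduces to an ARX regression: substituting the state out leaves y_k = a·y_{k-1} + b·u_{k-1}, whose coefficients follow from least squares, after which the states can be recomputed from the estimated parameters and inputs. A minimal noise-free sketch of that idea (first-order only, not the paper's general canonical form):

```python
import numpy as np

a_true, b_true = 0.7, 1.5
rng = np.random.default_rng(4)
u = rng.normal(size=50)

# Simulate x_{k+1} = a x_k + b u_k with output y_k = x_k, so the output
# sequence stands in for the eliminated state.
x, y = 0.0, []
for uk in u:
    y.append(x)
    x = a_true * x + b_true * uk
y = np.array(y)

# Regression y_k = a y_{k-1} + b u_{k-1} using input-output data only.
Phi = np.stack([y[:-1], u[:-1]], axis=1)
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta

# States recomputed from the estimated parameters and the inputs.
x_hat = np.zeros(len(u))
for k in range(len(u) - 1):
    x_hat[k + 1] = a_hat * x_hat[k] + b_hat * u[k]
```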
alphaPDE: A new multivariate technique for parameter estimation
Knuteson, B.; Miettinen, H.; Holmstrom, L.
2002-06-01
We present alphaPDE, a new multivariate analysis technique for parameter estimation. The method is based on a direct construction of joint probability densities of known variables and the parameters to be estimated. We show how posterior densities and best-value estimates are then obtained for the parameters of interest by a straightforward manipulation of these densities. The method is essentially non-parametric and allows for an intuitive graphical interpretation. We illustrate the method by outlining how it can be used to estimate the mass of the top quark, and we explain how the method is applied to an ensemble of events containing background.
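The direct-construction idea can be sketched in one dimension with a Gaussian kernel density over joint (variable, parameter) samples: the posterior for the parameter at an observed value x* is a slice of the joint density, renormalized over the parameter grid. An illustrative toy, not the alphaPDE implementation; the kernel widths and the sample model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
# Training ensemble: parameter theta drawn uniformly, observable x = theta + noise.
theta = rng.uniform(0.0, 1.0, size=5000)
x = theta + rng.normal(scale=0.05, size=5000)

def posterior(x_star, grid, hx=0.05, ht=0.02):
    """Slice the kernel-density joint p(x, theta) at x = x_star and
    renormalize over the theta grid."""
    wx = np.exp(-0.5 * ((x - x_star) / hx) ** 2)      # kernel weights in x
    dens = np.array([(wx * np.exp(-0.5 * ((theta - t) / ht) ** 2)).sum()
                     for t in grid])
    return dens / (dens.sum() * (grid[1] - grid[0]))

grid = np.linspace(0.0, 1.0, 101)
post = posterior(0.5, grid)
theta_map = grid[np.argmax(post)]                     # best-value estimate
```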
Distinctive signatures of recursion
Martins, Maurício Dias
2012-01-01
Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed. PMID:22688640
Estimating Groundwater Flow Parameters Using Response Surface Methodology
1994-04-01
Thesis by Leo C. Adams, presented to the Faculty of the Graduate School of Engineering of the Air Force Institute of Technology; advisor: Col Paul F. Auclair, Ph.D. (report AD-A280 630).
Multidimensional Item Response Theory Parameter Estimation with Nonsimple Structure Items
ERIC Educational Resources Information Center
Finch, Holmes
2011-01-01
Estimation of multidimensional item response theory (MIRT) model parameters can be carried out using the normal ogive with unweighted least squares estimation with the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach does yield accurate and efficient estimates of item…
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
Fast Bayesian parameter estimation for stochastic logistic growth models.
Heydari, Jonathan; Lawless, Conor; Lydall, David A; Wilkinson, Darren J
2014-08-01
The transition density of a stochastic, logistic population growth model with multiplicative intrinsic noise is analytically intractable. Inferring model parameter values by fitting such stochastic differential equation (SDE) models to data therefore requires relatively slow numerical simulation. Where such simulation is prohibitively slow, an alternative is to use model approximations which do have an analytically tractable transition density, enabling fast inference. We introduce two such approximations, with either multiplicative or additive intrinsic noise, each derived from the linear noise approximation (LNA) of a logistic growth SDE. After Bayesian inference we find that our fast LNA models, using Kalman filter recursion for computation of marginal likelihoods, give similar posterior distributions to slow, arbitrarily exact models. We also demonstrate that simulations from our LNA models better describe the characteristics of the stochastic logistic growth models than a related approach. Finally, we demonstrate that our LNA model with additive intrinsic noise and measurement error best describes an example set of longitudinal observations of microbial population size taken from a typical, genome-wide screening experiment. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
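"Kalman filter recursion for computation of marginal likelihoods" refers to the prediction-error decomposition: the log marginal likelihood accumulates the log density of each one-step-ahead innovation. A sketch on a local-level (random-walk-plus-noise) model, chosen so the result can be cross-checked against the closed-form joint Gaussian; this is the generic recursion, not the paper's LNA models:

```python
import numpy as np

def kf_loglik(ys, q, r):
    """Log marginal likelihood of the local-level model x_k = x_{k-1} + w_k,
    y_k = x_k + v_k with x_0 = 0, via the prediction-error decomposition."""
    m, P, ll = 0.0, 0.0, 0.0
    for y in ys:
        P += q                                  # predict
        S = P + r                               # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y - m) ** 2 / S)
        K = P / S                               # update
        m += K * (y - m)
        P *= 1 - K
    return ll

# Cross-check against the joint density: cov(y_i, y_j) = q*min(i,j) + r*delta_ij.
ys = np.array([0.3, -0.1, 0.45])
q, r = 0.4, 0.1
n = len(ys)
idx = np.arange(1, n + 1)
Sigma = q * np.minimum.outer(idx, idx) + r * np.eye(n)
direct = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(Sigma)[1]
                 + ys @ np.linalg.solve(Sigma, ys))
```

The recursion costs O(n) per likelihood evaluation versus O(n³) for the direct joint-Gaussian formula, which is what makes repeated evaluation inside Bayesian inference fast.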
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the contact-rate proportion of the daily transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
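The regularized least squares step itself has a closed form (Tikhonov/ridge regularization): θ = (AᵀA + λI)⁻¹Aᵀb, trading data fit against a penalty on the parameter norm. A generic sketch of that solver, not the paper's FEM/Euler discretization of the SIR system:

```python
import numpy as np

def regularized_lsq(A, b, lam):
    """Tikhonov-regularized least squares: minimizes ||A t - b||^2 + lam ||t||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(6)
A = rng.normal(size=(30, 2))
t_true = np.array([0.8, -0.3])
b = A @ t_true + rng.normal(scale=0.01, size=30)

t0 = regularized_lsq(A, b, 0.0)      # lam = 0 recovers ordinary least squares
t1 = regularized_lsq(A, b, 10.0)     # heavier penalty shrinks the estimate
```

The penalty weight λ controls the bias-variance trade-off: larger values stabilize ill-conditioned inverse problems at the cost of shrinking the estimated parameter toward zero.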
Research on the estimation method for Earth rotation parameters
NASA Astrophysics Data System (ADS)
Yao, Yibin
2008-12-01
In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different approaches to ERP estimation are involved: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The IGS daily SINEX files produced by GPS tracking stations can be used to estimate ERP, and the parameter transformation method can simplify the process. The results indicate that a systematic error will exist in ERP estimated using only GPS observations. Why this distinct systematic error exists in the ERP, whether it affects the estimation of other parameters, and what the magnitude of its influence is, need further study.
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
Abstract We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
Catchment tomography - An approach for spatial parameter estimation
NASA Astrophysics Data System (ADS)
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-09-01
The use of distributed, physically based hydrological models is often hampered by the lack of information on key parameters and their spatial distribution and temporal dynamics. Typically, the estimation of parameter values is impeded by the lack of sufficient observations, leading to mathematically underdetermined estimation problems and thus non-uniqueness. Catchment tomography (CT) presents a method to estimate spatially distributed model parameters by resolving the integrated signal of stream runoff in response to precipitation. Basically, CT exploits the information content generated by a distributed precipitation signal in both time and space. In a moving transmitter-receiver concept, high-resolution, radar-based precipitation data are applied with a distributed surface runoff model. Synthetic stream water level observations, serving as receivers, are assimilated with an Ensemble Kalman Filter. With a joint state-parameter update, the spatially distributed Manning's roughness coefficient, n, is estimated using the coupled Terrestrial Systems Modelling Platform and the Parallel Data Assimilation Framework (TerrSysMP-PDAF). The sequential data assimilation in combination with the distributed precipitation continuously integrates new information into the model, thus increasingly constraining the parameter space. With this large amount of data included in the parameter estimation, CT reduces the problem of underdetermined model parameters. The initially biased Manning's coefficients, spatially distributed in two and four fixed parameter zones, are estimated with errors of less than 3% and 17%, respectively, with only 64 model realizations. It is shown that the distributed precipitation is of major importance for this approach.
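A minimal sketch of the joint state-parameter ensemble Kalman filter update underlying this idea, on a hypothetical one-state, one-parameter toy model. The forward model h = 0.1/n, the dimensions, and the noise levels below are invented for illustration and are not the TerrSysMP-PDAF setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy joint state-parameter EnKF cycle: column 0 of the ensemble is the
# water level "state", column 1 a Manning-like roughness parameter n.
n_ens = 64
true_n = 0.05
h_obs = 0.1 / true_n                     # noise-free synthetic observation

ens = np.column_stack([
    rng.normal(1.5, 0.3, n_ens),         # initial state ensemble
    rng.normal(0.08, 0.02, n_ens),       # biased initial parameter ensemble
])

def enkf_update(ens, y_obs, obs_var):
    """One stochastic-EnKF analysis step observing the first component."""
    H = np.array([[1.0, 0.0]])                       # observation operator
    P = np.cov(ens.T)                                # 2x2 ensemble covariance
    K = (P @ H.T) / (H @ P @ H.T + obs_var)          # joint Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), len(ens))
    return ens + np.outer(y_pert - ens[:, 0], K.ravel())

for _ in range(20):
    # Forecast: run the (toy) forward model with each member's parameter.
    ens[:, 0] = 0.1 / ens[:, 1] + rng.normal(0.0, 0.05, n_ens)
    # Analysis: the cross-covariance between h and n corrects the parameter.
    ens = enkf_update(ens, h_obs, obs_var=0.01)
```

Repeated assimilation cycles pull the biased parameter ensemble toward the value that reproduces the observed level, which is the mechanism CT scales up to spatially distributed roughness fields.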
Parameter estimation in complex flows with chemical reactions
NASA Astrophysics Data System (ADS)
Robinson, Daniel J.
The estimation of unknown parameters in engineering and scientific models remains of great importance for validating them against available experimental data. These parameters cannot be known beforehand but must be determined from experimentally measured quantities such as chemical species concentrations, pressures, or temperatures. In chemically reacting flows in particular, the estimation of kinetic rate parameters from experimentally determined values is in great demand and not well understood. New parameter optimization algorithms based on a Gauss-Newton formulation have been developed for the estimation of reaction rate parameters in several different complex flow applications. A zero-dimensional parameter estimation methodology was used in conjunction with a parameter sensitivity study and then applied to three-dimensional flow models. This new parameter estimation technique was applied to three-dimensional models of chemical vapor deposition of silicon carbide and gallium arsenide semiconductor materials. The parameter estimates for silicon carbide at several different operating points were in close agreement with experiment. The parameter estimation for gallium arsenide proved to be very accurate, falling within four percent of the experimental data. New parameter estimation algorithms were likewise created for a three-dimensional multiphase model of methanol spray combustion. The estimated kinetic rate parameters delivered results in close agreement with experimental profiles of combustion product species. In addition, a new parameter estimation method for the determination of spray droplet sizes and velocities is presented. The results for methanol combustion chemical species profiles are in good agreement with experiment for several different droplet sizes. Lastly, the parameter estimation method was extended to a bio-kinetic application, namely the mitochondria found in cardiac and respiratory cells of animals and humans. The results for the
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
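The simulate-and-fit loop described here is easy to sketch. The block below stands in for the report's setup with a plain 2-D instantaneous-release diffusion solution (shear terms omitted) and a dense 1-D search in place of the least-squares batch processor; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def concentration(x, y, t, D, M=1.0):
    """2-D instantaneous-release diffusion solution; the shear term of the
    report's model is omitted here for brevity."""
    return M / (4.0 * np.pi * D * t) * np.exp(-(x ** 2 + y ** 2) / (4.0 * D * t))

# Simulated remote-sensed data: model output plus Gaussian sensor noise
# (a 15 x 15 sensor array sampled at one time, all values illustrative).
D_true = 0.8
xs, ys = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
obs = concentration(xs, ys, 1.0, D_true) + rng.normal(0.0, 0.002, xs.shape)

# Batch least squares over all readings at once: pick the diffusivity
# minimizing the summed squared misfit.
cands = np.linspace(0.1, 2.0, 400)
sse = [float(np.sum((concentration(xs, ys, 1.0, D) - obs) ** 2)) for D in cands]
D_hat = float(cands[int(np.argmin(sse))])
```

Repeating such experiments while varying the sensor-array size, resolution, and noise level is exactly how the accuracy trade-offs mentioned in the abstract can be mapped out.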
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.
Astrophysical Parameter Estimation for Gaia using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Tiede, C.; Smith, K.; Bailer-Jones, C. A. L.
2008-08-01
Gaia is the next astrometric mission from ESA and will measure objects up to a magnitude of about G=20. Depending on the kind of object (which will be determined automatically because Gaia does not hold an input catalogue), the specific astrophysical parameters will be estimated. The General Stellar Parametrizer (GSP-phot) estimates the astrophysical parameters based on low-dispersion spectra and parallax information for single stars. We show the results of machine learning algorithms trained on simulated data and further developments of the core algorithms which improve the accuracy of the estimated astrophysical parameters.
A Joint Analytic Method for Estimating Aquitard Hydraulic Parameters.
Zhuang, Chao; Zhou, Zhifang; Illman, Walter A
2017-01-10
The vertical hydraulic conductivity (Kv), elastic skeletal specific storage (Sske), and inelastic skeletal specific storage (Sskv) of aquitards are three of the most critical parameters in land subsidence investigations. Two new analytic methods are proposed to estimate the three parameters. The first analytic method is based on a new concept of delay time ratio for estimating Kv and Sske of an aquitard subject to long-term stable, cyclic hydraulic head changes at its boundaries. The second analytic method estimates the Sskv of an aquitard subject to linearly declining hydraulic heads at its boundaries. Both methods are based on analytical solutions for flow within the aquitard, and they are jointly employed to obtain the three parameter estimates. This joint analytic method is applied to estimate the Kv, Sske, and Sskv of a 34.54-m thick aquitard for which the deformation progress has been recorded by an extensometer located in Shanghai, China. The estimated results are then calibrated by PEST (Doherty 2005), a parameter estimation code coupled with a one-dimensional aquitard-drainage model. The Kv and Sske estimated by the joint analytic method are quite close to those estimated via inverse modeling and performed much better in simulating elastic deformation than the estimates obtained from the stress-strain diagram method of Ye and Xue (2005). The newly proposed joint analytic method is an effective tool that provides reasonable initial values for calibrating land subsidence models.
Parameter Estimation for Sensorless Controlled Induction Motors using Nonlinear Filters
NASA Astrophysics Data System (ADS)
Hozuki, Takashi; Kawabata, Yoshitaka; Sugimoto, Sueo
In this paper, we consider the estimation of the state variables and unknown parameters of induction motors (IMs) using nonlinear filters. Simultaneous estimation is the most general method for sensorless controlled IMs, and advances in computer processors now allow nonlinear filters to be applied in many settings. We therefore describe a method for applying nonlinear filters to the induction motor model and examine its estimation performance through simulations. Simulation results show that nonlinear filters achieve more accurate estimation than the adaptive observer, as well as excellent noise immunity.
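The simultaneous (augmented-state) estimation idea can be sketched with an extended Kalman filter on a scalar stand-in model. The real induction motor model is multivariate and nonlinear, so the system, noise levels, and dimensions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Joint state-and-parameter EKF on a toy system x[k+1] = a*x[k] + u,
# where the gain a plays the role of an unknown motor parameter.
a_true, u = 0.9, 1.0
Q = np.diag([1e-4, 1e-6])            # process noise for [x, a]
R = 0.05 ** 2                        # measurement noise variance

x_true = 0.0
z = np.array([0.0, 0.5])             # augmented estimate [x, a], a biased low
P = np.diag([1.0, 1.0])

for _ in range(200):
    # Simulate the true system and a noisy measurement of x.
    x_true = a_true * x_true + u
    y = x_true + rng.normal(0.0, 0.05)

    # Prediction: propagate the augmented state; F is its Jacobian.
    x_prev, a_prev = z
    F = np.array([[a_prev, x_prev], [0.0, 1.0]])
    z = np.array([a_prev * x_prev + u, a_prev])
    P = F @ P @ F.T + Q

    # Correction: only the state component x is measured.
    H = np.array([1.0, 0.0])
    S = float(H @ P @ H) + R
    K = P @ H / S
    z = z + K * (y - z[0])
    P = P - np.outer(K, H @ P)
```

The cross-covariance built up between x and a by the Jacobian's off-diagonal term is what lets measurements of the state alone correct the parameter estimate.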
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
A variational approach to parameter estimation in ordinary differential equations
2012-01-01
Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields. PMID:22892133
Parameter Estimation for the Thurstone Case III Model.
ERIC Educational Resources Information Center
Mackay, David B.; Chaiy, Seoil
1982-01-01
The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)
Estimation of sonic layer depth from surface parameters
NASA Astrophysics Data System (ADS)
Jain, Sarika; Ali, M. M.; Sen, P. N.
2007-09-01
Sonic layer depth (SLD), an important parameter in underwater acoustics, is the near-surface depth of the first maximum of sound speed in the ocean. The lack of direct observations of vertical profiles, whether from velocimeters or from temperature and salinity measurements, from which sound speed and SLD can be calculated, hampers the investigation of SLD. In this study, we demonstrate SLD estimation using an artificial neural network (ANN) from surface measurements that can later be replaced with satellite observations. Surface and subsurface measurements from a central Arabian Sea mooring are used for this purpose. The estimated SLD had a root mean square error (correlation coefficient) of 11.83 m (0.84). Approximately 76% (91%) of estimations lie within +/-10 m (+/-20 m). SLD has also been estimated from surface parameters using the multiple regression technique (MRT). The ANN proved superior to MRT in estimating SLD from surface parameters.
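A toy version of the regression task can be written with a small feed-forward network trained by gradient descent. The predictors, the synthetic nonlinear SLD relation, and the network size below are all invented for illustration, not the study's mooring data or ANN architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical normalized surface predictors (e.g. SST and wind speed)
# and a synthetic, nonlinear sonic layer depth in metres.
n = 300
X = rng.uniform(0.0, 1.0, size=(n, 2))
sld = 20.0 + 30.0 * X[:, 0] ** 2 + 10.0 * X[:, 1] + rng.normal(0.0, 1.0, n)
y = (sld / 100.0).reshape(-1, 1)              # scale targets for training

# A minimal one-hidden-layer network trained by batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(20000):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    pred = H @ W2 + b2                        # linear output layer
    err = pred - y                            # dL/dpred (up to a factor 2)
    gW2 = H.T @ err / n; gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
    gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

sld_hat = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel() * 100.0
rmse = float(np.sqrt(np.mean((sld_hat - sld) ** 2)))
```

The quadratic dependence on the first predictor is the kind of nonlinearity that a plain multiple regression cannot capture, which mirrors the ANN-versus-MRT comparison in the abstract.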
Kalman filter data assimilation: Targeting observations and parameter estimation
Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
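A toy version of the targeting rule is easy to write down: observe the state component with the largest ensemble variance and apply a stochastic EnKF update there. This serial sketch is not the LETKF, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble forecast of a 10-variable state in which one component is far
# more uncertain than the rest (illustrative numbers).
n_ens, n_state = 50, 10
spread = np.full(n_state, 0.2)
spread[3] = 2.0                          # the poorly constrained component
truth = rng.normal(0.0, 1.0, n_state)
ens = truth + spread * rng.normal(0.0, 1.0, (n_ens, n_state))

def enkf_observe(ens, idx, y_obs, r=0.01):
    """Serial stochastic-EnKF update observing state component idx."""
    P = np.cov(ens.T)
    K = P[:, idx] / (P[idx, idx] + r)    # gain for a single scalar obs
    y_pert = y_obs + rng.normal(0.0, np.sqrt(r), len(ens))
    return ens + np.outer(y_pert - ens[:, idx], K)

# Targeting rule: observe where the ensemble variance is largest.
target = int(np.argmax(ens.var(axis=0)))
analysis = enkf_observe(ens, target, truth[target])

# A randomly located observation would usually hit a component that is
# already well constrained, wasting most of the measurement's value.
rmse_prior = float(np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2)))
rmse_post = float(np.sqrt(np.mean((analysis.mean(axis=0) - truth) ** 2)))
```

The single targeted observation collapses the variance of the worst-constrained component, which is the intuition behind the paper's largest-ensemble-variance criterion.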
On a variational approach to some parameter estimation problems
NASA Technical Reports Server (NTRS)
Banks, H. T.
1985-01-01
Examples in which a variational setting can provide a convenient framework for convergence and stability arguments in parameter estimation problems are considered, including 1-D seismic, large flexible structures, bioturbation, and nonlinear population dispersal. Convergence and stability arguments, developed via a variational approach to least squares formulations of parameter estimation problems for partial differential equations, are one aspect of the problem considered.
Numerical Testing of Parameterization Schemes for Solving Parameter Estimation Problems
2008-12-01
NUMERICAL TESTING OF PARAMETERIZATION SCHEMES FOR SOLVING PARAMETER ESTIMATION PROBLEMS. L. Velázquez, M. Argáez and C. Quintero. ... high performance computing (HPC). 1. INTRODUCTION. In this paper we present the numerical performance of three parameterization approaches, SVD, wavelets, and the combination of wavelet-SVD, for solving automated parameter estimation problems based on the SPSA described in previous reports of this
Identification of Neurofuzzy models using GTLS parameter estimation.
Jakubek, Stefan; Hametner, Christoph
2009-10-01
In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application.
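The errors-in-variables effect that motivates (G)TLS can be shown in a few lines. The sketch below uses plain total least squares via the SVD; GTLS additionally whitens the data by the channel noise covariance, which reduces to this case when both channels carry equal noise. All data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

# Errors-in-variables data: both the input and output channels are noisy,
# the setting in which ordinary least squares is biased but (G)TLS is not.
n = 2000
x_true = rng.uniform(-1.0, 1.0, n)
slope_true = 2.0
x = x_true + rng.normal(0.0, 0.3, n)       # noisy input channel
y = slope_true * x_true + rng.normal(0.0, 0.3, n)

# Ordinary least squares: attenuated towards zero by the input noise.
slope_ols = float(x @ y / (x @ x))

# Total least squares via the SVD: the right singular vector for the
# smallest singular value of the centered data matrix is the normal
# vector of the best orthogonal-distance fit line.
A = np.column_stack([x, y])
_, _, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
v = Vt[-1]
slope_tls = float(-v[0] / v[1])
```

With equal noise on both channels the TLS slope is a consistent estimate of 2.0, while the OLS slope is biased low by roughly the factor var(x_true)/(var(x_true) + noise variance).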
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic model, the Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all parameters of the Lorenz equations are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has also been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
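A minimal evolutionary sketch of the idea, estimating only the parameter ρ of the Lorenz equations from a short simulated trajectory. The paper's EM method, and its simultaneous estimation of several parameters, are more elaborate than this (1+λ)-style strategy:

```python
import numpy as np

rng = np.random.default_rng(7)

def lorenz_traj(rho, sigma=10.0, beta=8.0 / 3.0, dt=0.01, steps=100):
    """Integrate the Lorenz equations with RK4 from a fixed start point."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty((steps, 3))
    for i in range(steps):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

data = lorenz_traj(rho=28.0)       # "observed" trajectory, rho unknown to us

def cost(rho):
    """Trajectory misfit against the observations."""
    return float(np.sum((lorenz_traj(rho) - data) ** 2))

# A minimal evolution strategy: sample candidates around the current best
# (mutation), keep the fittest (selection), shrink the mutation scale.
best, scale = 30.0, 5.0            # deliberately biased initial guess
for _ in range(30):
    pop = best + scale * rng.normal(0.0, 1.0, 20)
    costs = [cost(r) for r in pop] + [cost(best)]
    best = float(np.append(pop, best)[int(np.argmin(costs))])
    scale *= 0.8
```

Keeping the incumbent in every generation makes the search elitist, so the best misfit never worsens while the shrinking mutation scale refines the estimate.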
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems.
Daigle, Bernie J; Roh, Min K; Petzold, Linda R; Niemi, Jarad
2012-05-01
A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM(2)): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM(2) substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods. This work provides a novel, accelerated version
NASA Astrophysics Data System (ADS)
Lowenthal, Francis
2010-11-01
This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as it is used by logicians is presented. The two opposing approaches to the problem of language development are explained: for many authors such as Chomsky [1] the faculty of language is innate (the Standard Theory), whereas other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child and thus speak of Language Acquisition. It is also shown that in both cases a version of the principle of recursion is relevant for human language. The NVCD approach is defined and the results obtained in the domain of language while using this approach are presented: young subjects using this approach acquire a richer language structure, or re-acquire such a structure in the case of cerebral lesions. Finally it is shown that the exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored by using recursive structures with the young child. It could also be the case that NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with psychological observations made by adherents of the Language Acquisition approach, if it is confirmed by research combining the use of NVCDs, psychometric methods and neural networks. This paper thus suggests that a research group oriented towards this problem should be organized.
Parameter estimation of gravitational wave compact binary coalescences
NASA Astrophysics Data System (ADS)
Haster, Carl-Johan; LIGO Scientific Collaboration
2017-01-01
The first detections of gravitational waves from coalescing binary black holes have allowed unprecedented inference on the astrophysical parameters of such binaries. Given recent updates in detector capabilities, gravitational wave model templates and data analysis techniques, in this talk I will describe the prospects of parameter estimation of compact binary coalescences during the second observation run of the LIGO-Virgo collaboration.
A comparison of approximate interval estimators for the Bernoulli parameter
NASA Technical Reports Server (NTRS)
Leemis, Lawrence; Trivedi, Kishor S.
1993-01-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
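Both approximate intervals are a few lines each. The sketch below uses the usual Wald form for the normal approximation and a normal-theory bound on the Poisson mean for the Poisson approximation; the paper's exact constructions may differ:

```python
import math

def normal_ci(k, n, z=1.96):
    """Wald interval from the normal approximation to the binomial."""
    p = k / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def poisson_ci(k, n, z=1.96):
    """Interval from the Poisson approximation: treat k as Poisson with
    mean n*p and bound that mean by k +/- z*sqrt(k)."""
    return (max(0.0, (k - z * math.sqrt(k)) / n),
            min(1.0, (k + z * math.sqrt(k)) / n))

print(normal_ci(50, 100))   # roughly (0.402, 0.598)
print(poisson_ci(3, 1000))  # lower bound clipped to 0 for such a small k
```

The Poisson form is the natural choice when p is small and n is large (rare events), while the normal form needs both n*p and n*(1-p) to be reasonably large, which is the kind of applicability boundary the paper's charts map out.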
Potential Improvements for HEC-HMS Automated Parameter Estimation
2006-08-01
... (e.g., flow, baseflow, quickflow, volume aggregations) comprise separate components of a composite global objective function. 5. Objective functions ... Marquardt-Levenberg (GML) method of computer-based parameter estimation are described and demonstrated as potential improvements to existing HEC-HMS ... automatic calibration capabilities. In contrast to existing HEC-HMS automated parameter estimation capabilities, these methods support global
A Simple Technique for Estimating Latent Trait Mental Test Parameters
ERIC Educational Resources Information Center
Jensema, Carl
1976-01-01
A simple and economical method for estimating initial parameter values for the normal ogive or logistic latent trait mental test model is outlined. The accuracy of the method in comparison with maximum likelihood estimation is investigated through the use of Monte-Carlo data. (Author)
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
A Comparison of Approximate Interval Estimators for the Bernoulli Parameter
1993-12-01
The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate ... is appropriate for certain sample sizes and point estimators. Keywords: confidence interval, binomial distribution, Bernoulli distribution, Poisson distribution.
Application of MCSFilter to estimate stiction control valve parameters
NASA Astrophysics Data System (ADS)
Amador, Andreia; Fernandes, Florbela P.; Santos, Lino O.; Romanenko, Andrey
2017-07-01
The mitigation of the stiction phenomenon in control valves is of paramount importance for efficient industrial plant operation. Mathematical models of sticky valves are typically discontinuous and highly nonlinear. A derivative-free optimization method is applied in the context of parameter estimation in order to determine the stiction parameters of a control valve. The method successfully determines the correct parameter set and compares favorably with a previous case study of this problem that used a smooth function approximation.
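A compass (pattern) search illustrates why derivative-free methods suit such models: it only compares function values, so kinks and jumps in the misfit are harmless. The two-parameter "stiction-like" misfit below is invented for illustration and is neither the MCSFilter algorithm nor a real valve model:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Compass search: poll +/- step along each coordinate, keep any
    improvement, halve the step when no poll improves. No derivatives."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                cand = x.copy()
                cand[i] += sign * step
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
        it += 1
    return x, fx

# Hypothetical stiction-like misfit: nonsmooth (absolute values) with a
# genuine discontinuity, minimized at deadband fS = 2 and slip jump fD = 1.
def misfit(p):
    fS, fD = p
    jump = 0.0 if fD > 0.5 else 5.0      # jump in the misfit landscape
    return abs(fS - 2.0) + abs(fD - 1.0) + jump

p_hat, _ = pattern_search(misfit, [5.0, 5.0])
```

A gradient-based solver would have no usable derivative information at the kinks and the jump, while the comparison-only search walks straight to the minimizer.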
Analyzing and constraining signaling networks: parameter estimation for the user.
Geier, Florian; Fengos, Georgios; Felizzi, Federico; Iber, Dagmar
2012-01-01
The behavior of most dynamical models not only depends on the wiring but also on the kind and strength of interactions, which are reflected in the parameter values of the model. The predictive value of mathematical models therefore critically hinges on the quality of the parameter estimates. Constraining a dynamical model by an appropriate parameterization follows a 3-step process. In an initial step, it is important to evaluate the sensitivity of the parameters of the model with respect to the model output of interest. This analysis points at the identifiability of model parameters and can guide the design of experiments. In the second step, the actual fitting needs to be carried out. This step requires special care as, on the one hand, noisy as well as partial observations can corrupt the identification of system parameters. On the other hand, the solution of the dynamical system usually depends in a highly nonlinear fashion on its parameters and, as a consequence, parameter estimation procedures easily get trapped in local optima. Therefore any useful parameter estimation procedure has to be robust and efficient with respect to both challenges. In the final step, it is important to assess the validity of the optimized model. A number of reviews have been published on the subject. A good, nontechnical overview is provided by Jaqaman and Danuser (Nat Rev Mol Cell Biol 7(11):813-819, 2006) and a classical introduction, focussing on the algorithmic side, is given in Press (Numerical recipes: The art of scientific computing, Cambridge University Press, 3rd edn., 2007, Chapters 10 and 15). We will focus on the practical issues related to parameter estimation and use a model of the TGFβ-signaling pathway as an educative example. Corresponding parameter estimation software and models based on MATLAB code can be downloaded from the authors' web page (http://www.bsse.ethz.ch/cobi).
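The three steps of this workflow can be sketched on a deliberately simple model: exponential decay with hypothetical parameters, finite-difference sensitivities, and damped Gauss-Newton standing in for the tools a practitioner might actually use:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(t, p):
    """Toy model y = A*exp(-k*t) with hypothetical parameters p = (A, k)."""
    A, k = p
    return A * np.exp(-k * t)

t = np.linspace(0.0, 5.0, 30)
p_true = np.array([2.0, 0.7])
data = model(t, p_true) + rng.normal(0.0, 0.02, t.size)

def jacobian(p, eps=1e-6):
    """Finite-difference sensitivities dy/dp at parameter set p."""
    return np.column_stack([
        (model(t, p + eps * np.eye(2)[i]) - model(t, p)) / eps
        for i in range(2)
    ])

# Step 1: sensitivity analysis at a nominal guess; a parameter whose
# sensitivity column is near zero would be poorly identifiable.
p = np.array([1.5, 0.8])
S = jacobian(p)

# Step 2: damped Gauss-Newton fitting (the local optimization step that
# can get trapped in local optima for harder, more nonlinear models).
for _ in range(100):
    r = data - model(t, p)
    p = p + 0.5 * np.linalg.lstsq(jacobian(p), r, rcond=None)[0]

# Step 3: a basic validity check: residuals should look like sensor noise.
resid_std = float(np.std(data - model(t, p)))
```

For a real signaling model the same loop would wrap an ODE solver, and a multi-start strategy would guard against the local optima mentioned above.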
Evaluation of experiments for estimation of dynamical crop model parameters.
Ioslovich, Ilya; Gutman, Per-Olof
2007-07-01
Planned experiments are usually expected to provide maximal benefit within limited costs. However, there are known difficulties in the optimal design of experiments. These are related to the case where only a limited number of parameters can be estimated because the available experiments are uninformative. A useful method for this case, based on the dominant parameter selection (DPS) procedure, is considered. The methodology is illustrated with data from five planned experiments related to the NICOLET lettuce growth model. The maximal number and the list of estimated parameters are determined while the condition number of the Fisher information matrix (modified E-criterion) is kept below a given upper constraint.
Estimation of the input parameters in the Feller neuronal model
NASA Astrophysics Data System (ADS)
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived and used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of the LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents the estimated parameter uncertainties. The fault parameter is estimated using a two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Estimating the macroseismic parameters of earthquakes in eastern Iran
NASA Astrophysics Data System (ADS)
Amini, H.; Gasperini, P.; Zare, M.; Vannucci, G.
2017-10-01
Macroseismic intensity values allow assessing the macroseismic parameters of earthquakes such as location, magnitude, and fault orientation. This information is particularly useful for historical earthquakes, whose parameters were estimated with low accuracy. Eastern Iran (56°-62°E, 29.5°-35.5°N), which is characterized by several active faults, was selected for this study. Among all earthquakes that occurred in this region, only 29 have some macroseismic information. Their intensity values were reported in various intensity scales. After collecting the descriptions, their intensity values were re-estimated in a uniform intensity scale. Thereafter, the Boxer method was applied to estimate the corresponding macroseismic parameters. Boxer estimates of macroseismic parameters for instrumental earthquakes (after 1964) were found to be consistent with those published by the Global Centroid Moment Tensor Catalog (GCMT). Therefore, this method was applied to estimate the location, magnitude, source dimension, and orientation of the earthquakes with macroseismic descriptions in the period 1066-2012. Macroseismic parameters seem to be more reliable than instrumental ones not only for historical earthquakes but also for instrumental earthquakes, especially those that occurred before 1960. Therefore, as the final result of this study we propose using the macroseismically determined parameters in preparing a catalog of earthquakes before 1960.
Sequential ensemble-based optimal design for parameter estimation
NASA Astrophysics Data System (ADS)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
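As a minimal sketch of EnKF-based parameter estimation (the analysis step only, not the SEOD sampling design), the state vector can be augmented with an unknown parameter and both updated from noisy observations. The linear-reservoir model and all values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truth: linear reservoir dx/dt = -k*x + u, observed with noise.
k_true, dt, steps, R = 0.3, 0.1, 300, 0.01
u = 1.0 + 0.5 * np.sin(0.05 * np.arange(steps))
x_true, obs = 1.0, []
for t in range(steps):
    x_true += dt * (-k_true * x_true + u[t])
    obs.append(x_true + np.sqrt(R) * rng.standard_normal())

# Augmented-state EnKF: each ensemble member carries (x, k).
N = 200
ens_x = rng.normal(1.0, 0.5, N)
ens_k = rng.uniform(0.05, 1.0, N)            # prior over the parameter
for t in range(steps):
    # Forecast: the parameter is constant, the state follows the model.
    ens_x += dt * (-ens_k * ens_x + u[t])
    # Analysis: Kalman update built from ensemble (cross-)covariances.
    dx = ens_x - ens_x.mean()
    dk = ens_k - ens_k.mean()
    S = dx @ dx / (N - 1) + R                # innovation covariance
    Kx = (dx @ dx / (N - 1)) / S
    Kk = (dk @ dx / (N - 1)) / S
    innov = obs[t] + np.sqrt(R) * rng.standard_normal(N) - ens_x  # perturbed obs
    ens_x += Kx * innov
    ens_k += Kk * innov

print(ens_k.mean())                          # posterior mean of k
```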
Iterative methods for distributed parameter estimation in parabolic PDE
Vogel, C.R.; Wade, J.G.
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
Dynamic Load Model using PSO-Based Parameter Estimation
NASA Astrophysics Data System (ADS)
Taoka, Hisao; Matsuki, Junya; Tomoda, Michiya; Hayashi, Yasuhiro; Yamagishi, Yoshio; Kanao, Norikazu
This paper presents a new method for estimating the unknown parameters of a dynamic load model consisting of a parallel composite of a constant impedance load and an induction motor behind a series constant reactance. An adequate dynamic load model is essential for evaluating power system stability, and this model can represent the behavior of the actual load given appropriate parameters. The difficulty, however, is that the model requires many parameters, and estimating them all is not easy. We propose an estimation method based on Particle Swarm Optimization (PSO), a nonlinear optimization technique, using voltage, active power, and reactive power data measured during a voltage sag.
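A minimal global-best PSO sketch, fitting two parameters of an invented exponential response by least squares rather than the paper's full composite load model (all constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measured" response: y = a*exp(-b*t) + noise.
a_true, b_true = 2.0, 0.7
t = np.linspace(0, 5, 100)
y = a_true * np.exp(-b_true * t) + 0.01 * rng.standard_normal(t.size)

def sse(p):
    """Least-squares cost for a candidate parameter vector (a, b)."""
    return np.sum((y - p[0] * np.exp(-p[1] * t)) ** 2)

# Minimal global-best PSO with inertia w and acceleration constants c1, c2.
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform([0, 0], [5, 3], (n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([sse(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)                                 # estimated (a, b)
```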
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
Recursive Implementations of the Consider Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Chris
2012-01-01
One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favored implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
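A sketch of the Schmidt-Kalman ("consider") measurement update in the conventional covariance form (not the UDU formulation the paper proposes), assuming the linear model z = Hx + Gp + v: the consider parameter p is never corrected, but its uncertainty inflates the innovation covariance, and the state-parameter cross covariance is tracked.

```python
import numpy as np

def consider_update(x, Pxx, Pxp, Ppp, z, H, G, R, p_nom):
    """One Schmidt-Kalman measurement update: z = H x + G p + v, v~N(0,R).

    The state x is corrected; the consider parameter stays at p_nom and
    Ppp is untouched by design."""
    S = H @ Pxx @ H.T + H @ Pxp @ G.T + G @ Pxp.T @ H.T + G @ Ppp @ G.T + R
    K = (Pxx @ H.T + Pxp @ G.T) @ np.linalg.inv(S)
    x = x + K @ (z - H @ x - G @ p_nom)
    Pxx = Pxx - K @ (H @ Pxx + G @ Pxp.T)
    Pxp = Pxp - K @ (H @ Pxp + G @ Ppp)
    return x, Pxx, Pxp, Ppp

x = np.zeros(2)
Pxx = np.eye(2)
Pxp = np.zeros((2, 1))
Ppp = np.array([[0.25]])          # variance of the unestimated bias
H = np.array([[1.0, 0.0]])
G = np.array([[1.0]])
R = np.array([[0.01]])
x, Pxx, Pxp, Ppp = consider_update(x, Pxx, Pxp, Ppp,
                                   np.array([0.5]), H, G, R, np.zeros(1))
print(np.trace(Pxx), Ppp[0, 0])   # state covariance shrinks, Ppp is unchanged
```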
Simulation and parameter estimation of dynamics of synaptic depression.
Aristizabal, F; Glavinovic, M I
2004-01-01
Synaptic release was simulated using a Simulink sequential storage model with three vesicular pools. Modeling was modular and easily extendable to systems with a greater number of vesicular pools, parallel input, or time-varying parameters. Given an input (short or long tetanic trains, patterned or random stimulation) and the storage model, the vesicular release, the replenishment of various vesicular pools, and the vesicular content of all pools could be simulated for the time-invariant and time-varying storage systems. From the input stimuli and either a noiseless or a noisy output, the parameters of such storage systems could also be estimated using an optimization technique that minimizes in the least-squares sense the error between the observed and the predicted release. All parameters of the storage model could be evaluated with sufficiently long input-output data pairs. Not surprisingly, the parameters characterizing the processes near the release locus, such as the fractional release and the size of the immediately available pool and its coupling to the small store, as well as the state variables associated with the immediately available pool, such as its vesicular content and replenishment, could be determined with fewer stimuli. The possibility of estimating parameters with random inputs extends the applicability of the method to in vivo synapses with physiological inputs. The parameter estimation was also possible under time-variant but slowly changing conditions, as well as for open systems that are part of larger vesicular storage systems but whose parameters either cannot be reliably determined or are of no interest. The quality of parameter estimation was monitored continuously by comparing the observed and predicted output and/or the estimated parameters with the true values. Finally, the method was tested experimentally using the rat phrenic-diaphragm neuromuscular junction.
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify them from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
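The abstract's fitting function is not reproduced here; as a sketch of the same idea (turning an unsteady pumping test into a single linear regression), the classical Cooper-Jacob late-time approximation of the Theis solution makes drawdown linear in ln t, so transmissivity T and storativity S follow from the regression coefficients. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic late-time drawdown from the Cooper-Jacob approximation of
# the Theis solution: s = (Q / 4 pi T) * ln(2.25 T t / (r^2 S)).
Q, T, S, r = 0.02, 5e-3, 2e-4, 30.0          # m^3/s, m^2/s, -, m
t = np.logspace(3.5, 5, 40)                  # late pumping times (s)
s = Q / (4 * np.pi * T) * np.log(2.25 * T * t / (r**2 * S))
s += 0.002 * rng.standard_normal(t.size)     # measurement noise (m)

# Linear regression of s against ln(t): the slope gives T, the
# intercept then gives S.
slope, intercept = np.polyfit(np.log(t), s, 1)
T_hat = Q / (4 * np.pi * slope)
S_hat = 2.25 * T_hat / r**2 * np.exp(-intercept / slope)
print(T_hat, S_hat)
```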
Evaluation of the Covariance Matrix of Estimated Resonance Parameters
NASA Astrophysics Data System (ADS)
Becker, B.; Capote, R.; Kopecky, S.; Massimi, C.; Schillebeeckx, P.; Sirakov, I.; Volev, K.
2014-04-01
In the resonance region nuclear resonance parameters are mostly obtained by a least-squares adjustment of a model to experimental data. Derived parameters can be mutually correlated through the adjustment procedure as well as through common experimental or model uncertainties. In this contribution we investigate four different methods to propagate the additional covariance caused by experimental or model uncertainties into the evaluation of the covariance matrix of the estimated parameters: (1) including the additional covariance in the experimental covariance matrix based on calculated or theoretical estimates of the data; (2) including the uncertainty-affected parameter in the adjustment procedure; (3) evaluating the full covariance matrix by Monte Carlo sampling of the common parameter; and (4) retroactively including the additional covariance by using the marginalization procedure of Habert et al.
Simultaneous parameter and state estimation of shear buildings
NASA Astrophysics Data System (ADS)
Concha, Antonio; Alvarez-Icaza, Luis; Garrido, Rubén
2016-03-01
This paper proposes an adaptive observer that simultaneously estimates the damping/mass and stiffness/mass ratios, and the state of a seismically excited building. The adaptive observer uses only acceleration measurements of the ground and floors for both parameter and state estimation; it identifies all the parameter ratios, velocities and displacements of the structure if all the floors are instrumented; and it also estimates the state and the damping/mass and stiffness/mass ratios of a reduced model of the building if only some floors are equipped with accelerometers. This observer does not resort to any particular canonical form and employs the Least Squares (LS) algorithm and a Luenberger state estimator. The LS method is combined with a smooth parameter projection technique that provides only positive estimates, which are employed by the state estimator. Boundedness of the estimates produced by the LS algorithm does not depend on the boundedness of the state estimates. Moreover, the LS method uses a parametrization based on Linear Integral Filters that eliminate offsets in the acceleration measurements in finite time and attenuate high-frequency measurement noise. Experimental results obtained using a reduced-scale five-story structure confirm the effectiveness of the proposed adaptive observer.
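A sketch of the LS ingredient alone, assuming a generic regression y = φᵀθ with two positive parameters (standing in for damping/mass and stiffness/mass ratios): recursive least squares with a crude clipping projection, whereas the paper uses a smooth projection and integral-filtered regressors.

```python
import numpy as np

rng = np.random.default_rng(4)

# Recursive least squares with a positivity projection for the model
# y = phi . theta + noise, where both entries of theta must stay positive.
theta_true = np.array([0.4, 9.0])            # e.g. 2*zeta*wn and wn^2 (invented)
P = 1e3 * np.eye(2)                          # initial information (covariance)
theta = np.array([1.0, 1.0])                 # initial parameter guess
for _ in range(2000):
    phi = rng.standard_normal(2)             # persistently exciting regressor
    y = phi @ theta_true + 0.05 * rng.standard_normal()
    k = P @ phi / (1.0 + phi @ P @ phi)      # RLS gain
    theta = theta + k * (y - phi @ theta)
    theta = np.maximum(theta, 1e-6)          # crude positive projection
    P = P - np.outer(k, phi @ P)             # covariance update

print(theta)
```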
ERIC Educational Resources Information Center
Banreti, Zoltan
2010-01-01
This study investigates how aphasic impairment impinges on syntactic and/or semantic recursivity of human language. A series of tests has been conducted with the participation of five Hungarian speaking aphasic subjects and 10 control subjects. Photographs representing simple situations were presented to subjects and questions were asked about…
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
NASA Astrophysics Data System (ADS)
Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.
2015-02-01
In this paper, the optimal least-squares state estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems with random parameter matrices in the state transition and measurement equations and correlated noises. It is assumed that at any sampling time, as a consequence of possible failures during the transmission process, one-step delays with different delay characteristics may occur randomly in the received measurements. The random delay phenomenon is modelled by using a different sequence of Bernoulli random variables in each sensor. The process noise and all the sensor measurement noises are one-step autocorrelated, and different sensor noises are one-step cross-correlated. Also, the process noise and each sensor measurement noise are two-step cross-correlated. Based on the proposed model and using an innovation approach, the optimal linear filter is designed by a recursive algorithm which is computationally simple and suitable for online applications. A numerical simulation illustrates the feasibility of the proposed filtering algorithm.
Quantiles, Parametric-Select Density Estimations, and Bi-Information Parameter Estimators.
1982-06-01
A non-parametric estimation method forms estimators which are not based on parametric models. Important examples of non-parametric estimators are … raw descriptive functions F, f, Q, q, fQ. One distinguishes between parametric and non-parametric methods of estimating smooth functions. A parametric estimation method: (1) assumes a family F_θ, f_θ, Q_θ, q_θ, f_θQ_θ of functions, called parametric models, which are indexed by a parameter θ = …
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Dynamic algorithm for parameter estimation and its applications
NASA Astrophysics Data System (ADS)
Maybhate, Anil; Amritkar, R. E.
2000-06-01
We consider a dynamic method, based on synchronization and adaptive control, to estimate unknown parameters of a nonlinear dynamical system from a given scalar chaotic time series. We present an important extension of the method when the time series of a scalar function of the variables of the underlying dynamical system is given. We find that it is possible to obtain synchronization as well as parameter estimation using such a time series. We then consider a general quadratic flow in three dimensions and discuss the applicability of our method of parameter estimation in this case. In practical situations one expects only a finite time series of a system variable to be known. We show that the finite time series can be repeatedly used to estimate unknown parameters with an accuracy that improves and then saturates to a constant value with repeated use of the time series. Finally, we suggest an important application of the parameter estimation method. We propose that the method can be used to confirm the correctness of a trial function modeling an external unknown perturbation to a known system. We show that our method produces exact synchronization with the given time series only when the trial function has a form identical to that of the perturbation.
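A minimal sketch of synchronization-plus-adaptation for a one-dimensional system (far simpler than the chaotic flows considered, and with invented constants): an observer with gain k synchronizes to the series x(t) generated by dx/dt = -p x + sin t with unknown p, while a Lyapunov-derived adaptation law drives the estimate toward p.

```python
import numpy as np

# True system: dx/dt = -p*x + sin(t), with p unknown to the observer.
# Observer:    dx_obs/dt = -p_hat*x_obs + sin(t) + k*(x - x_obs)
# Adaptation:  dp_hat/dt = -gamma*(x - x_obs)*x   (from a Lyapunov argument)
p_true, k, gamma, dt, steps = 2.0, 5.0, 10.0, 1e-3, 60_000
x, x_obs, p_hat = 1.0, 0.0, 0.1
for i in range(steps):
    t = i * dt
    x += dt * (-p_true * x + np.sin(t))                        # "measured" series
    x_obs += dt * (-p_hat * x_obs + np.sin(t) + k * (x - x_obs))
    p_hat += dt * (-gamma * (x - x_obs) * x)                   # parameter update
print(p_hat)
```

The sinusoidal forcing keeps x(t) persistently exciting, which is what lets the parameter error, and not just the synchronization error, decay.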
Dynamic simulation and parameter estimation in river streams.
Karadurmus, E; Berber, R
2004-04-01
Predictions and quality management issues for environmental protection in river basins rely on water-quality models. The key step in model calibration and verification is obtaining the right values of the model parameters. Current practice in model calibration is such that the reaction coefficients are adjusted by trial-and-error until the predicted values and measured data are within a pre-selected margin of error, and this may be a very time consuming task. This study is directed towards developing a parameter estimation strategy coupled with the simulation of water quality models so that the heavy burden of finding reaction rate coefficients is overcome. Dynamic mass balances for different forms of nitrogen and phosphorus, biological oxygen demand, dissolved oxygen, coliforms, nonconservative constituent and algae were written for a single computational element. The model parameters conforming to those in QUAL2E water quality model were estimated by a nonlinear multi-response parameter estimation strategy coupled with a stiff integrator. Yesilirmak river basin around the city of Amasya in Turkey served as the prototype system for the model development. Samples were collected simultaneously from two stations, and concentrations of many water-quality constituents were determined either on-site or in laboratory. This dynamic data was then used for numerical parameter estimation during computer simulation. When the model was simulated with the estimated parameters, it was seen that the model was quite able to predict the dynamics of major water quality constituents. It is concluded that the proposed method shows promise for automatically generating reliable estimates of model parameters.
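The paper couples a stiff integrator with nonlinear multi-response estimation; as a greatly simplified, hypothetical single-constituent illustration of recovering a reaction rate coefficient from sampled river data, first-order BOD decay L(t) = L0·exp(-k1·t) can be fitted by linear regression on log-concentration (all values invented).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic BOD samples with multiplicative measurement noise.
k1_true, L0_true = 0.23, 12.0                # 1/day, mg/L (invented)
t = np.linspace(0, 10, 25)                   # sampling times (days)
L = L0_true * np.exp(-k1_true * t) * np.exp(0.02 * rng.standard_normal(t.size))

# ln L = ln L0 - k1*t is linear in t, so a straight-line fit recovers both.
slope, intercept = np.polyfit(t, np.log(L), 1)
k1_hat, L0_hat = -slope, np.exp(intercept)
print(k1_hat, L0_hat)
```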
Improving the realism of hydrologic model through multivariate parameter estimation
NASA Astrophysics Data System (ADS)
Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis
2017-04-01
Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed with the inclusion or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of the complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10
Analysis of the Second Model Parameter Estimation Experiment Workshop Results
NASA Astrophysics Data System (ADS)
Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.
2002-05-01
The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for the a priori estimation of parameters for land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorologic time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goals, objectives, and science strategy. Results from our participation in developing and testing the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results include model simulations using both a priori parameters and calibrated parameters for 12 basins selected for the Tucson MOPEX Workshop.
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
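The flavor of moment matching for log-normal variables can be shown in closed form (a full GMM, as in the paper, stacks many such moment conditions and weights them by an optimal weighting matrix; values here are invented): for X = exp(N(μ, λ²)), matching the first two raw moments gives λ² = ln(m₂/m₁²) and μ = ln m₁ − λ²/2.

```python
import numpy as np

rng = np.random.default_rng(6)

# Draw a large log-normal sample with known parameters.
mu_true, lam_true = 0.2, 0.5
x = rng.lognormal(mu_true, lam_true, 500_000)

# Closed-form method-of-moments estimators from the first two raw moments.
m1, m2 = x.mean(), (x**2).mean()
lam2_hat = np.log(m2 / m1**2)                # since m2/m1^2 = exp(lam^2)
mu_hat = np.log(m1) - lam2_hat / 2
print(mu_hat, np.sqrt(lam2_hat))
```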
Evaluating parasite densities and estimation of parameters in transmission systems.
Heinzmann, D; Torgerson, P R
2008-09-01
Mathematical modelling of parasite transmission systems can provide useful information about host parasite interactions and biology and parasite population dynamics. In addition good predictive models may assist in designing control programmes to reduce the burden of human and animal disease. Model building is only the first part of the process. These models then need to be confronted with data to obtain parameter estimates and the accuracy of these estimates has to be evaluated. Estimation of parasite densities is central to this. Parasite density estimates can include the proportion of hosts infected with parasites (prevalence) or estimates of the parasite biomass within the host population (abundance or intensity estimates). Parasite density estimation is often complicated by highly aggregated distributions of parasites within the hosts. This causes additional challenges when calculating transmission parameters. Using Echinococcus spp. as a model organism, this manuscript gives a brief overview of the types of descriptors of parasite densities, how to estimate them and on the use of these estimates in a transmission model.
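A sketch of the density descriptors mentioned, assuming negative-binomial aggregation of parasite counts within hosts (a common model for the overdispersion described): prevalence, mean burden m, and the moment estimator k = m²/(s² − m) of the aggregation parameter. All values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate aggregated parasite counts per host: NB with mean m and
# aggregation k (small k = strong aggregation).
m_true, k_true = 8.0, 0.4
p = k_true / (k_true + m_true)               # numpy's NB(n, p) parametrization
counts = rng.negative_binomial(k_true, p, 200_000)

m = counts.mean()                            # mean burden (intensity)
s2 = counts.var(ddof=1)
k_hat = m**2 / (s2 - m)                      # moment estimator of aggregation
prevalence = (counts > 0).mean()             # fraction of infected hosts
print(m, k_hat, prevalence)
```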
Rapid estimation of drifting parameters in continuously measured quantum systems
NASA Astrophysics Data System (ADS)
Cortez, Luis; Chantasri, Areeya; García-Pintos, Luis Pedro; Dressel, Justin; Jordan, Andrew N.
2017-01-01
We investigate the determination of a Hamiltonian parameter in a quantum system undergoing continuous measurement. We demonstrate a computationally rapid method to estimate an unknown and possibly time-dependent parameter, where we maximize the likelihood of the observed stochastic readout. By dealing directly with the raw measurement record rather than the quantum-state trajectories, the estimation can be performed while the data are being acquired, permitting continuous tracking of the parameter during slow drifts in real time. Furthermore, we incorporate realistic nonidealities, such as decoherence processes and measurement inefficiency. As an example, we focus on estimating the value of the Rabi frequency of a continuously measured qubit and compare maximum likelihood estimation to a simpler fast Fourier transform. Using this example, we discuss how the quality of the estimation depends on both the strength and the duration of the measurement; we also discuss the trade-off between the accuracy of the estimate and the sensitivity to drift as the estimation duration is varied.
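The simpler of the two estimators compared can be sketched directly: locate the FFT peak of a noisy oscillatory record, here an invented stand-in for a continuously measured qubit readout (the maximum-likelihood approach would instead maximize the likelihood of the full stochastic record).

```python
import numpy as np

rng = np.random.default_rng(8)

# Noisy oscillation standing in for a Rabi-frequency measurement record.
f_true, fs, T = 1.3e6, 50e6, 2e-4            # Hz, sample rate, duration
t = np.arange(0, T, 1 / fs)
record = np.cos(2 * np.pi * f_true * t) + 1.0 * rng.standard_normal(t.size)

# Frequency estimate: largest non-DC bin of the magnitude spectrum.
spec = np.abs(np.fft.rfft(record))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_hat = freqs[spec[1:].argmax() + 1]         # skip the DC bin
print(f_hat)
```

The bin spacing 1/T (5 kHz here) sets the resolution, which is one reason a longer record, or a likelihood-based refinement, improves the estimate.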
Zhao, Bo; Lam, Fan; Liang, Zhi-Pei
2014-01-01
MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520
Cramer-Rao bound on watermark desynchronization parameter estimation accuracy
NASA Astrophysics Data System (ADS)
Sadasivam, Shankar; Moulin, Pierre
2007-02-01
Various decoding algorithms have been proposed in the literature to combat desynchronization attacks on quantization index modulation (QIM) blind watermarking schemes. Nevertheless, the results reported so far have been fairly poor. The need to investigate fundamental limitations on the decoder's performance under a desynchronization attack is thus clear. In this paper, we look at the class of estimator-decoders which estimate the desynchronization attack parameter(s) for use in the decoding step. We model the desynchronization attack as an arbitrary (but invertible) linear time-invariant (LTI) system. We then develop an encoding-decoding scheme for these attacks on cubic QIM watermarking schemes, and derive Cramer-Rao bounds on the estimation error for the desynchronization parameter at the decoder. As an example, we consider the case of a cyclic shift attack and present some numerical findings.
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
NASA Astrophysics Data System (ADS)
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages as observation units. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
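The state-augmentation idea behind this kind of extended Kalman filter parameter estimation can be sketched on a toy system, a hypothetical driven first-order lag x' = -a*x + u rather than the pilot model itself: the unknown parameter a is appended to the state and both are estimated jointly.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 0.01, 2000
a_true = 1.5                          # unknown decay rate to be estimated

# Simulate the "true" system with a persistently exciting sinusoidal input
x, ys, us = 0.0, [], []
for k in range(n_steps):
    u = np.sin(2 * np.pi * 0.5 * k * dt)
    x += (-a_true * x + u) * dt
    ys.append(x + 0.05 * rng.standard_normal())   # noisy measurement
    us.append(u)

# Extended Kalman filter on the augmented state s = [x, a]
s = np.array([0.0, 0.5])              # crude initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([1e-8, 1e-6])             # small model noise keeps the filter adapting
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for y, u in zip(ys, us):
    # Predict: f(s) = [x + (-a*x + u)*dt, a]; F is its Jacobian
    F = np.array([[1.0 - s[1] * dt, -s[0] * dt],
                  [0.0,              1.0]])
    s = np.array([s[0] + (-s[1] * s[0] + u) * dt, s[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement y = x + v
    S = H @ P @ H.T + R
    K = P @ H.T / S
    s = s + (K * (y - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
print(s[1])                            # estimate of the decay rate a
```

The persistent input matters: without excitation the parameter becomes unobservable and its estimate simply freezes, which is one way model assumption errors show up as diverging residuals.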
Adaptable recursive binary entropy coding technique
NASA Astrophysics Data System (ADS)
Kiely, Aaron B.; Klimesh, Matthew A.
2002-07-01
We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
Toward predictive food process models: A protocol for parameter estimation.
Vilas, Carlos; Arias-Méndez, Ana; García, Míriam R; Alonso, Antonio A; Balsa-Canto, E
2016-05-31
Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
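A minimal instance of the estimation step in such a protocol, fitting a hypothetical first-order log-linear microbial inactivation model to noisy synthetic data with `scipy.optimize.curve_fit` and reporting asymptotic standard errors (the model and numbers are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_survivors(t, logN0, D):
    """First-order thermal inactivation: log10 N(t) = log10 N0 - t/D."""
    return logN0 - t / D

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 20)                       # heating time (min)
# Synthetic data with assumed true values logN0 = 6, D-value = 2.5 min
y = log_survivors(t, 6.0, 2.5) + 0.1 * rng.standard_normal(t.size)

popt, pcov = curve_fit(log_survivors, t, y, p0=[5.0, 1.0])
perr = np.sqrt(np.diag(pcov))                    # asymptotic standard errors
print(popt, perr)
```

Checking the standard errors (and refitting from several initial guesses) is a cheap first screen for the identifiability and multimodality pitfalls the abstract warns about.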
AMT-200S Motor Glider Parameter and Performance Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2011-01-01
Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
Inversion of canopy reflectance models for estimation of vegetation parameters
NASA Technical Reports Server (NTRS)
Goel, Narendra S.
1987-01-01
One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.
Parameter estimation for fractional transport: A particle-tracking approach
NASA Astrophysics Data System (ADS)
Chakraborty, Paramita; Meerschaert, Mark M.; Lim, Chae Young
2009-10-01
Space-fractional advection-dispersion models provide attractive alternatives to the classical advection-dispersion equation for model applications that exhibit early arrivals and plume skewness. This paper develops a flexible method for estimating the parameters of the fractional transport model on the basis of spatial plume snapshots or temporal breakthrough curve data. A particle-tracking approach provides error bars for the parameter estimates and a general method for model fitting and comparison via optimal weighted least squares. A simple model of concentration variance, based on the particle-tracking approach, identifies the optimal weights.
Estimation of octanol/water partition coefficients using LSER parameters
Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.
1998-01-01
The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
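The regression itself is ordinary multiple linear least squares; a sketch on synthetic descriptor data (the descriptors and coefficients below are invented for illustration, not the published LSER fit):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Synthetic LSER-style descriptors (e.g. V, pi*, beta, alpha) -- assumed values
X = rng.random((n, 4))
coef_true = np.array([5.0, -1.0, -3.5, -0.3])
logKow = 0.2 + X @ coef_true + 0.49 * rng.standard_normal(n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, logKow, rcond=None)
resid = logKow - A @ beta
s = np.sqrt(resid @ resid / (n - A.shape[1]))    # residual standard deviation
print(beta, s)
```

The residual standard deviation plays the role of the reported 0.49 log-unit scatter: it quantifies the "moderate accuracy" to expect when the equation is applied to new chemicals.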
Parameter estimation of stable distribution based on zero-order statistics
NASA Astrophysics Data System (ADS)
Chen, Jian; Chen, Hong; Cai, Xiaoxia; Weng, Pengfei; Nie, Hao
2017-08-01
With the increasing complexity of the channel, many impulse noise signals appear in real channels. The statistical properties of such processes deviate significantly from the Gaussian distribution, and the Alpha-stable distribution provides a very useful theoretical tool for describing them. This paper focuses on parameter estimation methods for the Alpha-stable distribution. First, the basic theory of the Alpha-stable distribution is introduced. Then, the concepts of logarithmic moments and geometric power are presented. Finally, parameter estimation of the Alpha-stable distribution is realized based on zero-order statistics (ZOS). The method offers good robustness and precision.
Maximum likelihood estimation for distributed parameter models of flexible spacecraft
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Williams, J. L.
1989-01-01
A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.
Estimation of regional pulmonary perfusion parameters from microfocal angiograms
Clough, A.V.; Al-Tinawi, A.; Linehan, J.H.; Dawson, C.A.
1995-12-31
An important application of functional imaging is the estimation of regional blood flow and volume using residue detection of vascular indicators. An indicator-dilution model applicable to tissue regions distal from the inlet site was developed. Theoretical methods for determining regional blood flow, volume and mean transit time parameters from time-absorbance curves arise from this model. The robustness of the parameter estimation methods was evaluated using a computer-simulated vessel network model. Flow through arterioles, networks of capillaries and venules was simulated. Parameter identification and practical implementation issues were addressed. The shape of the inlet concentration curve and moderate amounts of random noise did not affect the ability of the method to recover accurate parameter estimates. The parameter estimates degraded in the presence of significant dispersion of the measured inlet concentration curve as it traveled through arteries upstream from the microvascular region. The methods were applied to image data obtained using microfocal x-ray angiography to study the pulmonary microcirculation. Time-absorbance curves were acquired from a small feeding artery, the surrounding microvasculature and a draining vein of an isolated dog lung as contrast material passed through the field-of-view. Changes in regional microvascular volume were determined from these curves.
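The moment computations behind such residue-detection estimates can be sketched on a synthetic time-absorbance curve; the gamma-variate shape and the regional volume below are assumed purely for illustration:

```python
import numpy as np

# Synthetic time-absorbance curve: a gamma-variate shape is a common
# empirical model for indicator-dilution transit
t = np.linspace(0, 30, 3001)               # time (s), uniform grid
alpha, beta = 3.0, 1.5
c = t**alpha * np.exp(-t / beta)           # unnormalized indicator curve

# Mean transit time = first moment of the normalized curve
mtt = np.sum(t * c) / np.sum(c)            # uniform grid, so plain sums suffice

# Central volume principle: flow = volume / mean transit time
volume = 2.0                               # assumed regional blood volume (mL)
flow = volume / mtt
print(mtt, flow)                           # mtt is about (alpha + 1) * beta = 6 s
```

The abstract's caveat about upstream dispersion corresponds to the inlet curve acquiring its own transit-time spread, which biases the regional moments unless it is deconvolved or kept small.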
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Hansen, Clifford
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
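The single diode equation is implicit in the current, so even evaluating an I-V curve requires root-finding at each voltage. A sketch (the module parameters below are invented, purely illustrative):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative single diode model parameters (assumed, not from the report)
IL, I0 = 6.0, 1e-9           # photocurrent and diode saturation current (A)
n, Ns = 1.2, 60              # diode ideality factor, cells in series
Rs, Rsh = 0.3, 300.0         # series and shunt resistance (ohm)
Vth = 0.02569                # thermal voltage at 25 C (V)

def current(V):
    """Solve the implicit single diode equation for I at a given V."""
    f = lambda I: IL - I0 * (np.exp((V + I * Rs) / (n * Ns * Vth)) - 1) \
                     - (V + I * Rs) / Rsh - I
    return brentq(f, -1.0, IL + 1.0)     # bracketing root-finder

V = np.linspace(0.0, 38.0, 50)
I = np.array([current(v) for v in V])
P = V * I
print(I[0], V[np.argmax(P)])             # short-circuit current, max-power voltage
```

A fitting procedure like the one described would wrap this forward evaluation in a least-squares loop over the five unknowns (IL, I0, n, Rs, Rsh) across many measured curves.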
Estimation and Analysis of Parameters for Reference Frame Transformation
NASA Astrophysics Data System (ADS)
Yang, T. G.; Gao, Y. P.; Tong, M. L.; Zhao, C. S.; Gao, F.
2016-07-01
Based on the estimation method of parameters for reference frame transformation, the parameters used for transformation between different modern DE (Development Ephemeris) ephemeris pairs are derived using the data of heliocentric coordinates of the Earth-Moon barycenter from DE ephemeris pairs, and the transformation parameters between the DE ephemeris dynamic reference frame and the ICRF (International Celestial Reference Frame) are estimated by using the timing data and VLBI (Very Long Baseline Interferometry) observation results of millisecond pulsars. The estimated parameters for the reference frame transformation include the three rotational angles of the rotation matrix and their time derivatives. The reference epoch of the estimated parameters is MJD 51545, i.e., J2000.0. Our results show that the absolute maximum value of the rotational angles for the transformation of the DE200 to DE405 ephemeris is 13 mas, and its time derivative is -0.0007 mas/d. No absolute value of the rotational angles is larger than 0.1 mas for the transformation of the DE414 to DE421 ephemeris. The absolute maximum value of the rotational angles of the rotation matrix for the transformation of the DE421 ephemeris to the ICRF is 3 mas, and the time derivatives of the three rotational angles are also necessarily included.
Piecewise parameter estimation for stochastic models in COPASI.
Bergmann, Frank T; Sahle, Sven; Zimmer, Christoph
2016-05-15
Computational modeling is widely used for deepening the understanding of biological processes. Parameterizing models to experimental data needs computationally efficient techniques for parameter estimation. Challenges for parameter estimation include in general the high dimensionality of the parameter space with local minima and in specific for stochastic modeling the intrinsic stochasticity. We implemented the recently suggested multiple shooting for stochastic systems (MSS) objective function for parameter estimation in stochastic models into COPASI. This MSS objective function can be used for parameter estimation in stochastic models but also shows beneficial properties when used for ordinary differential equation models. The method can be applied with all of COPASI's optimization algorithms, and can be used for SBML models as well. The methodology is available in COPASI as of version 4.15.95 and can be downloaded from http://www.copasi.org. Supplementary data are available at Bioinformatics online.
Adjustment of Sensor Locations During Thermal Property Parameter Estimation
NASA Technical Reports Server (NTRS)
Milos, Frank S.; Marschall, Jochen; Rasky, Daniel J. (Technical Monitor)
1996-01-01
The temperature-dependent thermal properties of a material may be evaluated from transient temperature histories using nonlinear parameter estimation techniques. The usual approach is to minimize the sum of the squared errors between measured and calculated temperatures at specific locations in the body. Temperature measurements are usually made with thermocouples, and it is customary to take thermocouple locations as known and fixed during parameter estimation computations. In fact, thermocouple locations are never known exactly. Location errors on the order of the thermocouple wire diameter are intrinsic to most common instrumentation procedures (e.g., inserting a thermocouple into a drilled hole), and additional errors can be expected for delicate materials, difficult installations, large thermocouple beads, etc. Thermocouple location errors are especially significant when estimating thermal properties of low-diffusivity materials, which can sustain large temperature gradients during testing. In the present work, a parameter estimation formulation is presented which allows for the direct inclusion of thermocouple positions in the primary parameter estimation procedure. It is straightforward to set bounds on thermocouple locations which exclude non-physical locations and are consistent with installation tolerances. Furthermore, bounds may be tightened to an extent consistent with any independent verification of thermocouple location, such as x-raying, and so the procedure is entirely consonant with experimental information. A mathematical outline of the procedure is given and its implementation is illustrated through numerical examples characteristic of lightweight, high-temperature ceramic insulation during transient heating. The efficacy of and the errors associated with the procedure are discussed.
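The idea of treating sensor depth as an estimable, bounded parameter can be sketched with a simple semi-infinite-solid conduction model. All numbers are assumed; note that a single sensor only fixes the ratio depth/sqrt(diffusivity), so the sketch uses two thermocouples with a known spacing but uncertain absolute depth:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Semi-infinite solid with a sudden 100 K surface temperature rise
D = 0.005                                    # known sensor spacing (m)

def model(t, alpha, x):
    """Transients at two sensors at depths x and x + D."""
    t = np.asarray(t)
    T1 = 100.0 * erfc(x / (2.0 * np.sqrt(alpha * t)))
    T2 = 100.0 * erfc((x + D) / (2.0 * np.sqrt(alpha * t)))
    return np.concatenate([T1, T2])

rng = np.random.default_rng(4)
t = np.linspace(1.0, 600.0, 60)              # time (s)
alpha_true, x_true = 4e-7, 0.011             # true diffusivity (m^2/s) and depth (m)
y = model(t, alpha_true, x_true) + 0.5 * rng.standard_normal(2 * t.size)

# Depth bounded by the installation tolerance around the nominal 10 mm location
popt, _ = curve_fit(model, t, y, p0=[2e-7, 0.010],
                    bounds=([1e-8, 0.008], [1e-5, 0.012]))
print(popt)                                  # [diffusivity, first sensor depth]
```

The bounds play the role described in the abstract: they exclude non-physical locations while letting the data pull the depth estimate away from its nominal value.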
Estimation of effective hydrogeological parameters in heterogeneous and anisotropic aquifers
NASA Astrophysics Data System (ADS)
Lin, Hsien-Tsung; Tan, Yih-Chi; Chen, Chu-Hui; Yu, Hwa-Lung; Wu, Shih-Ching; Ke, Kai-Yuan
2010-07-01
Obtaining reasonable hydrological input parameters is a key challenge in groundwater modeling. Analysis of temporal evolution during pump-induced drawdown is one common approach used to estimate the effective transmissivity and storage coefficients in a heterogeneous aquifer. In this study, we propose a Modified Tabu search Method (MTM), an improvement that combines the Tabu Search (TS) with the Adjoint State Method (ASM) developed by Tan et al. (2008). It is employed to estimate effective parameters for anisotropic, heterogeneous aquifers. MTM is validated by several numerical pumping tests. Comparisons are made to other well-known techniques, such as the type-curve method (TCM) and the straight-line method (SLM), to provide insight into the challenge of determining the most effective parameter for an anisotropic, heterogeneous aquifer. The results reveal that MTM can efficiently obtain the best representative and effective aquifer parameters in terms of the least mean square errors of the drawdown estimations. The use of MTM may involve fewer artificial errors than occur with TCM and SLM, and lead to better solutions. Therefore, effective transmissivity is more likely to be comprised of the geometric mean of all transmissivities within the cone of depression, based on a precise estimation by MTM. Further investigation into the applicability of MTM shows that a higher level of heterogeneity in an aquifer can induce uncertainty in the estimations, while changes in correlation length will affect the accuracy of MTM only once the degree of heterogeneity has also risen.
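Pump-test parameter estimation of the kind being compared here can be sketched by least-squares fitting of the classical Theis solution to synthetic drawdown data. This is not the MTM itself, and all values are assumed:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q, r = 0.01, 50.0                     # pumping rate (m^3/s), observation distance (m)

def theis(t, T, S):
    """Theis drawdown s(t) = Q/(4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # W(u) is the exponential integral

rng = np.random.default_rng(5)
t = np.logspace(2, 5, 30)             # 100 s to about 28 h
T_true, S_true = 1e-3, 2e-4           # "effective" transmissivity and storativity
s_obs = theis(t, T_true, S_true) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(theis, t, s_obs, p0=[5e-4, 1e-4],
                    bounds=([1e-5, 1e-6], [1e-1, 1e-2]))
print(popt)
```

In a truly heterogeneous aquifer, a fit like this returns a single effective pair (T, S); the study's point is how that effective value relates to the spatial statistics (e.g. the geometric mean) of the underlying field.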
Adaptive Detection and Parameter Estimation for Multidimensional Signal Models
1989-04-19
The expected value of the non-adaptive parameter array estimator follows directly from Equation (5-1); the resulting properties depend only on the dimensional parameters of the problem. We will derive these properties shortly, but first we wish to express the conditional pdf
Human ECG signal parameters estimation during controlled physical activity
NASA Astrophysics Data System (ADS)
Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz
2015-09-01
ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
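A stripped-down version of the Pan-Tompkins processing chain (derivative, squaring, moving-window integration, thresholding) on a synthetic spike train. The fixed threshold is a simplification of the algorithm's adaptive thresholds, and the "ECG" is purely artificial:

```python
import numpy as np

fs = 250                                    # sampling rate (Hz)
rng = np.random.default_rng(6)
t = np.arange(0, 10, 1 / fs)                # 10 s record
# Synthetic "ECG": a narrow Gaussian QRS every 0.8 s (75 bpm) plus noise
ecg = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in np.arange(0.5, 10, 0.8))
ecg = ecg + 0.05 * rng.standard_normal(t.size)

# Pan-Tompkins-style chain: derivative -> squaring -> moving-window integration
deriv = np.gradient(ecg)
squared = deriv ** 2
win = int(0.15 * fs)                        # 150 ms integration window
mwi = np.convolve(squared, np.ones(win) / win, mode="same")

# Fixed threshold for this sketch; the full algorithm adapts it online
thr = 0.5 * mwi.max()
above = mwi > thr
beats = np.count_nonzero(above[1:] & ~above[:-1])   # rising edges = detected beats
print(beats)                                # 12 QRS complexes were synthesized
```

During exercise the muscle-artifact noise term grows, which is exactly why the adaptive thresholding and bandpass stages of the full algorithm matter for the ambulatory case studied here.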
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
The overall remotely piloted drop model operation, descriptions, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods are discussed. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special
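The flavor of such doubly robust estimation can be sketched with the augmented IPW form of the treatment-specific mean, using a single discrete covariate and simple stratum-mean nuisance estimators standing in for the super-learning and targeting machinery described above:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 5000
X = rng.integers(0, 2, n)                        # one binary baseline covariate
p_true = np.where(X == 1, 0.7, 0.3)              # true propensity P(A=1|X)
A = (rng.random(n) < p_true).astype(float)
Y = 1.0 + 2.0 * X + 1.5 * A + rng.standard_normal(n)   # true E[Y(1)] = 3.5

# Nuisance estimates: propensity and outcome regression by stratum means
g = np.array([A[X == v].mean() for v in (0, 1)])[X]    # estimated P(A=1|X)
Q = np.zeros(n)
for v in (0, 1):
    Q[X == v] = Y[(X == v) & (A == 1)].mean()          # estimated E[Y|A=1,X]

# Augmented IPW estimator of the treatment-specific mean E[Y(1)]
psi = np.mean(Q + A / g * (Y - Q))
print(psi)
```

The augmentation term A/g*(Y - Q) is what makes the estimator's bias second order in the nuisance errors, the property the paper's targeted nuisance estimation is designed to guarantee with data-adaptive fits.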
Estimation of Parameters in Latent Class Models with Constraints on the Parameters.
1986-06-01
parameter. This rules out models which characterize each Pkj in terms of conjoint effects of item and state parameters, as the Rasch model does, for example. It also rules out models that impose ordering constraints on the Pkj's. Thus, while many interesting models can be cast in terms of equality constraints on the parameters.
Estimation of Weibull parameters from parameters of initial distribution of flaw size
NASA Astrophysics Data System (ADS)
Wakabayashi, C.; Yasuda, K.; Shiota, T.
2009-11-01
The distribution of the largest flaw size is derived from the initial distribution of flaw size based on extreme value statistics, and the distribution of fracture origin size is obtained by transforming the Weibull distribution through a fracture-mechanics relation. These two distributions are equivalent under uniaxial loading. Using this relation, their parameters are related to each other, and the Weibull parameters are estimated from the parameters of the initial distribution of flaw size and the number of links.
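A Monte Carlo sketch of the extreme-value link. Here the initial flaw-size distribution is assumed Pareto-tailed (shape a), for which weakest-link theory with the Griffith-type relation strength ∝ c^(-1/2) predicts a Weibull strength distribution with modulus about 2a; the specific numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n_spec, n_flaws, a = 500, 200, 3.0        # specimens, flaws per specimen, Pareto shape
c0 = 1e-4                                 # minimum flaw size (arbitrary units)

# Initial flaw sizes with a Pareto tail; strength is set by the LARGEST flaw
c = c0 * (1.0 + rng.pareto(a, size=(n_spec, n_flaws)))
strength = 1.0 / np.sqrt(c.max(axis=1))   # Griffith-type relation

# Weibull probability plot: slope of ln(-ln(1-F)) vs ln(strength) = modulus
s = np.sort(strength)
F = (np.arange(1, n_spec + 1) - 0.3) / (n_spec + 0.4)   # median ranks
m_est, _ = np.polyfit(np.log(s), np.log(-np.log(1.0 - F)), 1)
print(m_est)                              # theory predicts roughly 2 * a = 6
```

Increasing `n_flaws` (the "number of links") shifts the strength distribution downward without changing the modulus, which is the role the link count plays in the parameter relations of the abstract.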
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described with a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power law factor and exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with the approach.
Recursions for statistical multiple alignment
Hein, Jotun; Jensen, Jens Ledet; Pedersen, Christian N. S.
2003-01-01
Algorithms are presented that allow the calculation of the probability of a set of sequences related by a binary tree that have evolved according to the Thorne–Kishino–Felsenstein model for a fixed set of parameters. The algorithms are based on a Markov chain generating sequences and their alignment at nodes in a tree. Depending on whether the complete realization of this Markov chain is decomposed into the first transition and the rest of the realization or the last transition and the first part of the realization, two kinds of recursions are obtained that are computationally similar but probabilistically different. The running time of the algorithms is O(∏_{i=1}^{d} L_i), where L_i is the length of the ith observed sequence and d is the number of sequences. An alternative recursion is also formulated that uses only a Markov chain involving the inner nodes of a tree. PMID:14657378
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
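The MDFT monitoring step the method relies on can be sketched as a per-bin sliding DFT update, which tracks one frequency bin of a moving window with O(1) work per new sample (toy signal, single bin):

```python
import numpy as np

N, k = 64, 5                        # window length and monitored frequency bin
rng = np.random.default_rng(8)
x = rng.standard_normal(1000)       # stand-in for an input or output signal

# Initialize the bin value on the first full window
X = np.sum(x[:N] * np.exp(-2j * np.pi * k * np.arange(N) / N))
twiddle = np.exp(2j * np.pi * k / N)

# Sliding (moving) DFT: drop the oldest sample, add the newest, rotate
for n in range(N, len(x)):
    X = (X + x[n] - x[n - N]) * twiddle

# Verify against a direct FFT of the final window
direct = np.fft.fft(x[-N:])[k]
print(X, direct)
```

Running one such recursion per design frequency, on both the plant input and output, yields the frequency-response samples that the recursive parameter estimator then fits.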
SCoPE: an efficient method of Cosmological Parameter Estimation
Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in
2014-07-01
The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis on the one hand help us to better understand the workings of SCoPE, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
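The adaptive-covariance idea can be illustrated with a toy one-dimensional random-walk Metropolis sampler whose proposal scale adapts as the chain progresses. This is only a sketch under simplifying assumptions: SCoPE's delayed rejection, pre-fetching, and inter-chain covariance updates are omitted, and all names and tuning constants below are hypothetical.

```python
import math
import random

def adaptive_metropolis(log_post, x0, n_steps, seed=0):
    """1-D random-walk Metropolis with a crude on-line adaptation of the
    proposal scale, steering the empirical acceptance rate toward ~0.44
    (the classic 1-D target).  Continuous adaptation like this is only
    illustrative; rigorous samplers use diminishing adaptation."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    scale, accepted = 1.0, 0
    chain = []
    for i in range(1, n_steps + 1):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x)
        rate = accepted / i
        scale *= math.exp(0.05 * (rate - 0.44))     # adapt proposal width
    return chain, accepted / n_steps
```

Run on a standard-normal log-posterior, the chain settles near the optimal proposal width and reproduces the target's mean and variance.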
Estimation of Saxophone Control Parameters by Convex Optimization
Wang, Cheng-i; Smyth, Tamara; Lipton, Zachary C.
2015-01-01
In this work, an approach to jointly estimating the tone-hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not merely one of estimating pitch, as one applied fingering can produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely from the spectral envelope of the produced sound (as it might be for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurements of the saxophone configured with all possible fingerings, and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics, and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering) and a quasi-static reed model generating input pressure at the mouthpiece, with the control parameters being blowing pressure and reed stiffness. The applied fingering and reed parameters are estimated for a given recording by formulating a minimization problem whose cost function is the error between the recording and the sound synthesized by the model over incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and non-differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values. PMID:27754493
Simunek, J.; Nimmo, J.R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of the transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating the flow process but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field. Copyright 2005 by the American Geophysical Union.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are preserved while stationary targets are removed. A constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the shift is estimated from the Doppler frequency center (DFC), which is in turn estimated using the Wigner-Ville distribution (WVD). Because the range and cross-range positions before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that the algorithms perform well and estimate the moving target parameters accurately.
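The CFAR stage can be sketched as a cell-averaging detector applied to the residual power after cancellation. This is an illustrative sketch of the generic CA-CFAR scheme, not necessarily the exact variant used in the paper; the guard/training window sizes and names are assumptions, and the threshold scaling assumes exponentially distributed noise power.

```python
def ca_cfar(power, guard, train, pfa):
    """Cell-averaging CFAR: for each cell, estimate the local noise level
    from `train` training cells on each side (skipping `guard` guard cells
    around the cell under test) and declare a detection when the cell
    exceeds a threshold scaled for the desired false-alarm probability."""
    n_train = 2 * train
    # threshold multiplier for exponentially distributed noise power
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    half = guard + train
    detections = []
    for i in range(half, len(power) - half):
        left = power[i - half : i - guard]          # leading training cells
        right = power[i + guard + 1 : i + half + 1]  # lagging training cells
        noise = (sum(left) + sum(right)) / n_train
        if power[i] > alpha * noise:
            detections.append(i)
    return detections
```

A strong residual (a moving target surviving cancellation) embedded in noise is flagged while the stationary-clutter-free background is not.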
Loss of Information in Estimating Item Parameters in Incomplete Designs
ERIC Educational Resources Information Center
Eggen, Theo J. H. M.; Verelst, Norman D.
2006-01-01
In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…
Estimation of coefficients and boundary parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Murphy, K. A.
1984-01-01
Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.
A parameter estimation framework for patient-specific hemodynamic computations
NASA Astrophysics Data System (ADS)
Itu, Lucian; Sharma, Puneet; Passerini, Tiziano; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin
2015-01-01
We propose a fully automated parameter estimation framework for performing patient-specific hemodynamic computations in arterial models. To determine personalized values for the windkessel models, which are used as part of the geometrical multiscale circulation model, a parameter estimation problem is formulated. Clinical measurements of pressure and/or flow rate are imposed as constraints to formulate a nonlinear system of equations, whose fixed-point solution is sought. A key feature of the proposed method is a warm start to the optimization procedure: a better initial solution for the nonlinear system of equations reduces the number of iterations needed for calibration of the geometrical multiscale models. To this end, the initial solution, computed with a lumped parameter model, is adapted before solving the parameter estimation problem for the geometrical multiscale circulation model: the resistance and the compliance of the circulation model are estimated and compensated. The proposed framework is evaluated on a patient-specific aortic model, a full-body arterial model, and multiple idealized anatomical models representing different arterial segments. In each case it yields the best performance in terms of the number of iterations required for the computational model to be in close agreement with the clinical measurements.
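The fixed-point flavor of such a calibration can be sketched with a toy one-parameter example: rescale a resistance by the ratio of target to computed pressure until the model reproduces the measurement. The model, names, and numbers below are hypothetical; the actual framework calibrates full windkessel parameter sets against several simultaneous constraints.

```python
def calibrate_resistance(p_target, q, r0, model, tol=1e-8, max_iter=100):
    """Fixed-point calibration of a single resistance: given a flow rate q
    and a pressure model p = model(q, r), repeatedly rescale r by the
    ratio of the target pressure to the computed pressure until the
    model output matches the clinical target."""
    r = r0
    for _ in range(max_iter):
        p = model(q, r)
        if abs(p - p_target) < tol:
            break
        r *= p_target / p   # fixed-point update
    return r
```

With a hypothetical lumped model `p = q*r + p_offset` (a constant venous-pressure offset making the iteration nontrivial), the update converges to the resistance that reproduces the target pressure.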
REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION
This review consists of two sections. Part I provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...
Online vegetation parameter estimation using passive microwave remote sensing observations
USDA-ARS?s Scientific Manuscript database
In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...
Parameter Estimates in Differential Equation Models for Population Growth
ERIC Educational Resources Information Center
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
NASA Astrophysics Data System (ADS)
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems around the world, resulting in loss of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used to filter the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Cubic spline approximation techniques for parameter estimation in distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Crowley, J. M.; Kunisch, K.
1983-01-01
Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.
Parameter estimation and infiltration tests at the repeat facility
NASA Astrophysics Data System (ADS)
Burns, P.; Armstrong, P.; Winn, B.
1983-11-01
Work performed in the reconfigurable passive evaluation analysis and test (REPEAT) facility is reviewed. The physical characteristics of the building and the instrumentation are described. Collected data are discussed. Parameter estimation is treated with example calculations. Infiltration instrumentation and tests are described. Flow visualization studies are discussed.
Parameter identifiability and estimation of HIV/AIDS dynamic models.
Wu, Hulin; Zhu, Haihong; Miao, Hongyu; Perelson, Alan S
2008-04-01
We use a technique from engineering (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005) to investigate the algebraic identifiability of a popular three-dimensional HIV/AIDS dynamic model containing six unknown parameters. We find that not all six parameters in the model can be identified if only the viral load is measured; instead, only four parameters and the product of two parameters (N and lambda) are identifiable. We introduce the concepts of an identification function and an identification equation and propose the multiple time point (MTP) method to form the identification function, which is an alternative to the previously developed higher-order derivative (HOD) method (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005). We show that the newly proposed MTP method has advantages over the HOD method in practical implementation. We also discuss the effect of the initial values of the state variables on the identifiability of the unknown parameters. We conclude that the initial values of the output (observable) variables are part of the data that can be used to estimate the unknown parameters, and that the identifiability of the unknown parameters is not affected by these initial values even if they are measured with error; such noisy initial values only increase the estimation error of the unknown parameters. However, having the initial values of the latent (unobservable) state variables exactly known may help to identify more parameters. In order to validate the identifiability results, simulation studies are performed to estimate the unknown parameters and initial values from simulated noisy data. We also apply the proposed methods to a clinical data set
1981-11-01
Press. Debreu, Gerard, [1959], The Theory of Value, John Wiley & Sons, New York. "Continuity Properties of Paretian Utility," [1964], International...infinite cardinality, as found in the traditional treatment of Paretian utility as set forth by Debreu [7] II. Rational Choice Functions Typically, one...setting for the problem of consumer choice in economic theory (Debreu, [1959], Ch.IV). This capability is obtained by means of the concept of a recursive
NASA Astrophysics Data System (ADS)
Zhu, Zhiliang; Meng, Zhiqiang; Cao, Tingting; Zhang, Zhengjiang; Dai, Yuxing
2017-06-01
State and parameter estimation (SPE) plays an important role in process monitoring, online optimization, and process control. States and parameters are generally estimated simultaneously in the SPE problem, with the parameters to be estimated specified as augmented states. When the state and/or measurement equations are highly nonlinear and the posterior probability of the state is non-Gaussian, a particle filter (PF) is commonly used for SPE. However, when the parameters switch with the operating conditions, the change cannot be detected and tracked by the conventional SPE method. This paper proposes a PF-based robust SPE method for a nonlinear process system with variable parameters. A measurement test criterion based on the observation error is introduced to identify indirectly whether the parameters have changed. Based on the result of this identification, the variances of the particles are modified adaptively to track the changed parameters. Finally, reliable SPE estimates are obtained through iterated particle updates. The proposed PF-based robust SPE method is applied to two nonlinear process systems. The results demonstrate the effectiveness and robustness of the proposed method.
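The state-augmentation idea can be sketched with a bootstrap particle filter on a toy scalar system. This is a minimal illustration, not the paper's method: the adaptive variance modification triggered by the measurement test is replaced here by a fixed artificial evolution (jitter) of the parameter particles, and the model, noise levels, and names are assumptions.

```python
import math
import random

def particle_filter_spe(ys, n_particles=500, seed=0):
    """Bootstrap particle filter with state augmentation for the toy model
        x[t] = a * x[t-1] + process noise,   y[t] = x[t] + measurement noise,
    jointly estimating the state x and the unknown parameter a.
    A small jitter on the parameter particles keeps the augmented
    posterior from collapsing after resampling."""
    rng = random.Random(seed)
    sig_proc, sig_meas, jitter = 0.3, 0.3, 0.01
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    As = [rng.uniform(0.0, 1.0) for _ in range(n_particles)]  # parameter particles
    a_est = []
    for y in ys:
        # propagate state particles; jitter parameter particles
        xs = [a * x + rng.gauss(0.0, sig_proc) for x, a in zip(xs, As)]
        As = [a + rng.gauss(0.0, jitter) for a in As]
        # weight by the Gaussian measurement likelihood
        ws = [math.exp(-0.5 * ((y - x) / sig_meas) ** 2) for x in xs]
        total = sum(ws) or 1.0
        cumw, c = [], 0.0
        for w in ws:
            c += w / total
            cumw.append(c)
        # stratified resampling of the augmented (state, parameter) particles
        new_xs, new_As, idx = [], [], 0
        for i in range(n_particles):
            p = (i + rng.random()) / n_particles
            while idx < n_particles - 1 and cumw[idx] < p:
                idx += 1
            new_xs.append(xs[idx])
            new_As.append(As[idx])
        xs, As = new_xs, new_As
        a_est.append(sum(As) / n_particles)
    return a_est
```

Fed simulated data from the model with a known parameter, the posterior mean of the parameter particles drifts toward the true value.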
Estimation of uncertain material parameters using modal test data
Veers, P.S.; Laird, D.L.; Carne, T.G.; Sagartz, M.J.
1997-11-01
Analytical models of wind turbine blades have many uncertainties, particularly with composite construction, where material properties and cross-sectional dimensions may not be known or precisely controllable. In this paper the authors demonstrate how modal testing can be used to estimate important material parameters and to update and improve a finite-element (FE) model of a prototype wind turbine blade. An example prototype blade is used here to demonstrate how model parameters can be identified. The starting point is an FE model of the blade, using best estimates for the material constants. Frequencies of the lowest fourteen modes are used as the basis for comparisons between model predictions and test data. Natural frequencies and mode shapes calculated with the FE model are used in an optimal test design code to select instrumentation (accelerometer) and excitation locations that capture all the desired mode shapes. The FE model is also used to calculate sensitivities of the modal frequencies to each of the uncertain material parameters. These parameters are estimated, or updated, using a weighted least-squares technique to minimize the difference between test frequencies and predicted results. Updated material properties are determined for axial, transverse, and shear moduli in two separate regions of the blade cross section: the central box, and the leading and trailing panels. Static FE analyses are then conducted with the updated material parameters to determine changes in effective beam stiffness and buckling loads.
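The sensitivity-based weighted least-squares update can be sketched as a single Gauss-Newton step on the frequency residuals. This is a generic sketch, not the authors' code; the matrix sizes, weights, and names are assumptions.

```python
def wls_update(sensitivity, residual, weights):
    """One Gauss-Newton step of sensitivity-based model updating:
    solve (S^T W S) dp = S^T W r for the parameter correction dp, where
    S[i][j] = d(freq_i)/d(param_j), r_i = measured - predicted frequency
    for mode i, and W is a diagonal per-mode weight."""
    m, n = len(sensitivity), len(sensitivity[0])
    # normal equations A dp = b
    A = [[sum(weights[i] * sensitivity[i][j] * sensitivity[i][k] for i in range(m))
          for k in range(n)] for j in range(n)]
    b = [sum(weights[i] * sensitivity[i][j] * residual[i] for i in range(m))
         for j in range(n)]
    # Gaussian elimination with partial pivoting (fine for a handful of parameters)
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            b[row] -= f * b[col]
    dp = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = b[row] - sum(A[row][k] * dp[k] for k in range(row + 1, n))
        dp[row] = s / A[row][row]
    return dp
```

In practice the step is repeated, recomputing S at each iterate, until the predicted and measured frequencies agree.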
Inverse estimation of parameters for an estuarine eutrophication model
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data for these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable as an aid to model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing some important questions, such as the uniqueness of the parameter estimation and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors that degrade convergence speed are cross effects among parameters and the multiple scales involved in the parameter system.
Parameter estimation of an air-bearing suspended test table
NASA Astrophysics Data System (ADS)
Fu, Zhenxian; Lin, Yurong; Liu, Yang; Chen, Xinglin; Chen, Fang
2015-02-01
A parameter estimation approach is proposed for determining the parameters of a 3-axis air-bearing suspended test table. The table is to provide a balanced and frictionless environment for spacecraft ground tests. To balance the suspension, the mechanical parameters of the table, including its angular inertias and the deviation of its centroid from its rotating center, have to be determined first. Sliding masses on the table can then be adjusted by stepper motors to relocate the centroid of the table to its rotating center. Using the angular momentum theorem and the Coriolis theorem, dynamic equations are derived describing the rotation of the table under the influence of the gravity imbalance torque and actuating torques. To generate the actuating torques, the use of momentum wheels is proposed. Their virtue is that no active control of the momentum wheels is required; they merely have to spin at constant rates, thus avoiding the singularity problem and the difficulty of precisely adjusting the output torques, issues associated with control moment gyros. The gyroscopic torques generated by the momentum wheels, as they are forced to precess with the table, are sufficient to excite the table for parameter estimation. Least-squares estimation is then employed to calculate the desired parameters. The effectiveness of the method is validated by simulation.
Effect of noncircularity of experimental beam on CMB parameter estimation
Das, Santanu; Mitra, Sanjit; Paulson, Sonu Tabitha E-mail: sanjit@iucaa.ernet.in
2015-03-01
Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the PLANCK likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum were measured with a non-circular beam, the assumption of a circular Gaussian beam, or considering only the diagonal part of the bias matrix, can lead to large errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, the use of scalar beam window functions obtained via Monte Carlo simulations starting from a fiducial spectrum, as implemented in PLANCK analyses for example, leads to best-fit parameter deviations of only a few percent of a sigma. However, we notice more significant differences in the posterior distributions for some of the parameters, which would in turn lead to incorrect error bars. These differences can be reduced, so that the error bars match within a few percent, by adding an iterative reanalysis step in which the beam window function is recomputed using the best-fit spectrum estimated in the first step.
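The multipole coupling induced by a non-circular beam can be sketched as a matrix acting on the fiducial spectrum. This is only a schematic under the assumption that the bias matrix M is already known; computing M from the beam shape and scan strategy is the hard part the paper addresses.

```python
def biased_spectrum(bias_matrix, cl):
    """Observed angular power spectrum under a non-circular beam:
        C'_l = sum_{l'} M[l][l'] * C_{l'}.
    A circular beam makes M diagonal, recovering the familiar scalar
    window-function scaling C'_l = B_l^2 * C_l; off-diagonal entries of M
    mix power between multipoles."""
    return [sum(m_ll * c for m_ll, c in zip(row, cl)) for row in bias_matrix]
```

With a diagonal M the result reduces to an elementwise window scaling, which is exactly the "diagonal-only" approximation the abstract warns against for genuinely non-circular beams.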
Matched filtering and parameter estimation of ringdown waveforms
Berti, Emanuele; Cardoso, Jaime; Cardoso, Vitor; Cavaglia, Marco
2007-11-15
Using recent results from numerical relativity simulations of nonspinning binary black hole mergers, we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to ~10^3 M⊙ out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (>10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than ~10^6 templates would be needed for a single-stage multimode search. Therefore, we recommend a "two-stage" search to save on computational costs: single-mode templates can be used for detection, but multimode templates or Prony methods should be used to estimate parameters once a detection has been made. We update estimates of the critical signal-to-noise ratio required to test the hypothesis that two or more modes are present in the signal and to resolve their frequencies, showing that second-generation Earth-based detectors and LISA have the potential to perform no-hair tests.
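The matched-filtering step can be sketched for the idealized white-noise case, where the SNR is the data-template inner product normalized by the template norm and noise level. Real searches whiten the data with the detector power spectral density; the sampling rate, mode frequency, and damping time below are illustrative assumptions.

```python
import math

def ringdown(n, f, tau, dt=1.0 / 4096):
    """Single-mode ringdown template: a damped sinusoid
    h[i] = e^{-t_i/tau} * sin(2*pi*f*t_i), sampled at interval dt."""
    return [math.exp(-i * dt / tau) * math.sin(2 * math.pi * f * i * dt)
            for i in range(n)]

def matched_filter_snr(data, template, noise_sigma):
    """Discrete matched filter assuming white Gaussian noise of standard
    deviation noise_sigma: SNR = <d, h> / (||h|| * sigma)."""
    norm = math.sqrt(sum(h * h for h in template))
    return sum(d * h for d, h in zip(data, template)) / (norm * noise_sigma)
```

For a noiseless signal that is an amplitude-A copy of the template, the SNR reduces to A·||h||/σ, which is the usual optimal-SNR expression.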
ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS
NASA Technical Reports Server (NTRS)
Putney, B.
1994-01-01
The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
Parameter estimation for flow in heterogeneous unsaturated porous media
NASA Astrophysics Data System (ADS)
Erdal, Daniel; Neuweiler, Insa
2010-05-01
The unsaturated zone is an important part of the hydrologic cycle, and in modeling of large systems it provides the important link between the land surface and groundwater systems. One of the problems when modeling water fluxes in the unsaturated zone is estimating the model parameters from observations. Due to heterogeneities of the soil, these parameters depend on length scale. Furthermore, even if a perfect measurement of a soil parameter were available, the difference in scale between measurement and model would still require upscaling the measurements into effective parameters. Given certain properties of the soil structure, this study looks at how much measurement data are required to make a good estimate of the effective parameters for a flow scenario in the unsaturated zone. The estimation of local and effective parameters is done within a Bayesian framework, using a Markov Chain Monte Carlo (MCMC) sampling strategy. MCMC methods have the advantage of not only giving best estimates of the parameters, but also providing the full distribution of the estimate, hence making uncertainties and any multimodalities easily accessible. In this study the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al. 2008) is used. For the study, data from lab-scale drainage experiments in heterogeneous sand columns by M. Vasin (Vasin et al. 2008) are used. In the experiments, the depth-averaged water content in two sand columns with different heterogeneous structure was monitored during successive drainage steps using neutron radiography. We estimate the flow parameters for the columns, taking observations into account successively. In particular we integrate observations of spatially averaged water content. Results will be presented and discussed on the poster. References: Vasin M, Lehmann P, Kaestner A, Hassanein R, Nowak W, Helmig R, Neuweiler I. Drainage in heterogeneous sand columns with different geometric structures. Adv Water Resources 2008
NASA Astrophysics Data System (ADS)
Hou, Hsieh-Sheng
1991-12-01
Among the various image data compression methods, the discrete cosine transform (DCT) has become the most popular in performing gray-scale image compression and decompression. However, the computational burden in performing a DCT is heavy. For example, in a regular DCT, at least 11 multiplications are required for processing an 8 × 1 image block. The idea of the scaled-DCT is that more than half the multiplications in a regular DCT are unnecessary, because they can be formulated as scaling factors of the DCT coefficients, and these coefficients may be scaled back in the quantization process. A fast recursive algorithm for computing the scaled-DCT is presented in this paper. The formulations are derived based on practical considerations of applying the scaled-DCT algorithm to image data compression and decompression. These include the considerations of flexibility of processing different sizes of DCT blocks and the actual savings of the required number of arithmetic operations. Due to the recursive nature of this algorithm, a higher-order scaled-DCT can be obtained from two lower-order scaled DCTs. Thus, a scaled-DCT VLSI chip designed according to this algorithm may process different sizes of DCT blocks under software control. To illustrate the unique properties of this recursive scaled-DCT algorithm, the one-dimensional formulations are presented with several examples exhibited in signal flow-graph forms.
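As a rough illustration of the scaling idea (not the paper's recursive algorithm), the sketch below computes an unnormalized 8-point DCT-II in plain Python and folds the per-coefficient scale factors into the quantization step size, so they cost no extra multiplications; the block values and step size are hypothetical.

```python
import math

def dct2_unscaled(x):
    """Unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi*(2n+1)*k/(2N))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N)) for k in range(N)]

def scale_factors(N):
    """Per-coefficient factors turning the unscaled DCT into the orthonormal DCT-II."""
    return [math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N) for k in range(N)]

def dct2_orthonormal(x):
    s = scale_factors(len(x))
    return [s[k] * c for k, c in enumerate(dct2_unscaled(x))]

# In a codec the scale factors need not be applied at all: they can be
# absorbed into the quantization step sizes, saving the multiplications.
block = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical 8x1 image block
q_step = 10.0
s = scale_factors(8)
quantized = [round(c / (q_step / s[k])) for k, c in enumerate(dct2_unscaled(block))]
```

Quantizing the unscaled coefficients with per-coefficient steps `q_step / s[k]` is arithmetically the same as scaling first and quantizing with a uniform step, which is the saving the abstract describes.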
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.
Boonvisut, Pasu; Jackson, Russell; Cavuşoğlu, M Cenk
2012-12-31
Robotic motion planning algorithms used for task automation in robotic surgical systems rely on availability of accurate models of target soft tissue's deformation. Relying on generic tissue parameters in constructing the tissue deformation models is problematic because biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator, and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of experimental evaluation of the method are also presented.
Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.
Boonvisut, Pasu; Cavuşoğlu, M Cenk
2013-10-01
Robotic motion planning algorithms used for task automation in robotic surgical systems rely on availability of accurate models of target soft tissue's deformation. Relying on generic tissue parameters in constructing the tissue deformation models is problematic because biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator, and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of experimental evaluation of the method are also presented.
Estimating Arrhenius parameters using temperature programmed molecular dynamics
NASA Astrophysics Data System (ADS)
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-01
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
Estimating Arrhenius parameters using temperature programmed molecular dynamics.
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
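The estimation chain described above can be sketched as follows. This is not the authors' temperature programmed molecular dynamics code: waiting times are simply drawn as exponentials from a hypothetical Arrhenius rate (invented prefactor, barrier and temperatures), the maximum-likelihood rate at each temperature is n divided by the summed waiting times, and a linear fit of ln k against 1/T recovers the Arrhenius parameters.

```python
import math
import random

random.seed(7)
KB = 8.617e-5                 # Boltzmann constant, eV/K
A_TRUE, EA_TRUE = 1e12, 0.5   # hypothetical prefactor (1/s) and barrier (eV)

def arrhenius(T):
    return A_TRUE * math.exp(-EA_TRUE / (KB * T))

# Waiting times between transitions are exponential with mean 1/k(T);
# the maximum-likelihood rate estimate is n / sum(waiting times).
temps = [300.0, 400.0, 500.0, 600.0]
k_hat = []
for T in temps:
    waits = [random.expovariate(arrhenius(T)) for _ in range(1000)]
    k_hat.append(len(waits) / sum(waits))

# Linear least squares of ln k on 1/T: intercept = ln A, slope = -Ea/kB
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in k_hat]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
ea_est = -slope * KB
a_est = math.exp(intercept)
```

With roughly 1000 waiting times per temperature the relative error of each rate estimate is about 1/sqrt(1000), which is consistent with the abstract's observation that 500-1000 waiting times already give good parameter estimates.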
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.
Modal parameters estimation using ant colony optimisation algorithm
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2016-08-01
The paper puts forward a new estimation method of modal parameters for dynamical systems. The problem of parameter estimation has been simplified to optimisation which is carried out using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints of the solution space with modified ant colony system algorithms produced excellent results for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, conducted with the proposed method and the PolyMAX algorithm.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Prediction and simulation errors in parameter estimation for nonlinear systems
NASA Astrophysics Data System (ADS)
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
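A minimal illustration of the two cost functions, using a first-order linear model rather than the polynomial, rational and neural models of the article: the one-step prediction error feeds back the measured output at every step, while the free-run simulation error propagates the model's own output, so errors accumulate.

```python
import math

# Sketch for y[t] = a*y[t-1] + u[t] with a hypothetical true parameter.
A_TRUE = 0.8

def simulate_true(n, u):
    y = [0.0]
    for t in range(1, n):
        y.append(A_TRUE * y[t - 1] + u[t])
    return y

def prediction_cost(a, y, u):
    """Sum of squared one-step-ahead prediction errors (measured past fed back)."""
    return sum((y[t] - (a * y[t - 1] + u[t])) ** 2 for t in range(1, len(y)))

def simulation_cost(a, y, u):
    """Sum of squared free-run errors (model feeds on its own past output)."""
    yhat = [y[0]]
    for t in range(1, len(y)):
        yhat.append(a * yhat[t - 1] + u[t])
    return sum((yt - yh) ** 2 for yt, yh in zip(y, yhat))

u = [math.sin(0.3 * t) for t in range(200)]
y = simulate_true(200, u)
# Both costs vanish at the true parameter on noise-free data, but for a biased
# parameter the free-run cost grows much faster than the one-step cost, which
# is why the two criteria can rank candidate models differently.
```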
Optimal estimation of parameters of an entangled quantum state
NASA Astrophysics Data System (ADS)
Virzì, S.; Avella, A.; Piacentini, F.; Gramegna, M.; Brida, G.; Degiovanni, I. P.; Genovese, M.
2017-05-01
Two-photon entangled quantum states are a fundamental tool for quantum information and quantum cryptography. A complete description of a generic quantum state is provided by its density matrix: the technique allowing experimental reconstruction of the density matrix is called quantum state tomography. Density matrix reconstruction for entangled states requires a large number of measurements on many identical copies of the quantum state. An alternative way of certifying the amount of entanglement in two-photon states is the estimation of specific parameters, e.g., negativity and concurrence. If we have a priori partial knowledge of our state, it is possible to develop several estimators for these parameters that require fewer measurements than full density matrix reconstruction. The aim of this work is to introduce and test different estimators for negativity and concurrence for a specific class of two-photon states.
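For the restricted class of pure states α|00⟩ + β|11⟩, the two parameters named above have closed forms, C = 2|αβ| and N = |αβ|, which is why partial prior knowledge cuts the measurement cost so sharply. The sketch below just evaluates these formulas; the paper's measurement-based estimators are not reproduced here.

```python
import math

def concurrence(alpha, beta):
    """Concurrence of the pure two-qubit state a|00> + b|11> (assumed normalized)."""
    return 2.0 * abs(alpha) * abs(beta)

def negativity(alpha, beta):
    """Negativity of the same state: the partially transposed density matrix
    has a single negative eigenvalue, -|a*b|."""
    return abs(alpha) * abs(beta)

# Maximally entangled Bell state: concurrence 1, negativity 1/2
a = b = 1.0 / math.sqrt(2.0)
```

A product state (α = 1, β = 0) gives zero for both quantities, and the Bell state maximizes both, matching the usual entanglement-monotone behavior.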
Advances in Parameter Estimation and Data Assimilation for Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Sorooshian, S.
2001-12-01
In the past two decades, the availability of new data sources (particularly remotely sensed information) and improved computational tools has resulted in significant developments in the field of hydrologic modeling, from simple flow models to complex numerical models which simulate the coupled behavior of multiple fluxes (hydrologic, chemical, energy, etc.). At the same time, significant improvements have taken place in data assimilation and parameter estimation methods. The increasing complexity of models has, however, outpaced the development of appropriate system identification methodologies, and there is a need to design models that are properly constrained by observational data. Independent and collaborative research efforts by various groups worldwide have led to improved modeling techniques, optimization methods for parameter estimation, methods for estimating predictive uncertainty, and methods for evaluating the relative merits of competing models. This talk will review some of the key developments during the past 20 years and speculate on future directions.
Estimation of the sea surface's two-scale backscatter parameters
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1978-01-01
The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedures via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
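A much simpler moment-based estimate than the paper's GMM procedure illustrates the idea: after n cascade steps with independent log-normal multipliers of log-variance λ², the log amplitude has variance nλ², so λ² can be read off the sample variance of the log amplitudes. The number of steps, the intermittency parameter and the sample size below are all invented.

```python
import math
import random
import statistics

random.seed(42)
N_STEPS = 10
LAMBDA2 = 0.05   # hypothetical intermittency parameter

def cascade_sample():
    """Product of N_STEPS log-normal multipliers exp(N(0, lambda^2))."""
    return math.exp(sum(random.gauss(0.0, math.sqrt(LAMBDA2))
                        for _ in range(N_STEPS)))

samples = [cascade_sample() for _ in range(20000)]
log_amps = [math.log(s) for s in samples]
# Var(ln r) = N_STEPS * lambda^2 for independent multipliers
lambda2_hat = statistics.variance(log_amps) / N_STEPS
```

The paper's point is precisely that simple moment matching of this kind degrades when the number of cascade steps is uncertain, which is what the GMM machinery is designed to handle.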
Incorporating engineering intuition for parameter estimation in thermal sciences
NASA Astrophysics Data System (ADS)
Balaji, C.; Reddy, B. Konda; Herwig, H.
2013-12-01
This paper proposes a new method of incorporating priors based on engineering intuition for solving inverse problems. The thesis of this paper is that if an asymptote can be found to a problem in applied sciences or engineering, estimation of parameters can be first done for this asymptotic variant, which in principle should be simpler, since one or more parameters of the original problem may vanish for the asymptotic variant. Even so, by solving the inverse problem associated with the asymptotic variant, estimates of key parameters of the full problem can be obtained. This information can then be quantitatively incorporated as priors in the estimation of parameters for the full version of the problem, which we call prior generation through asymptotic variant. The goal is to see if this methodology will significantly reduce the uncertainties in the resulting estimates. To demonstrate this methodology, the classic problem of unsteady heat transfer from a one-dimensional fin is chosen. The inverse problem is posed as the simultaneous estimation of the temperature-dependent transfer coefficient (h_θ) and the thermal diffusivity (α) of the fin material, given experimentally measured temperature-time histories at various locations along the fin. The asymptotic variant of θ(x, t) is the steady-state problem, where the influence of thermal diffusivity vanishes. Using surrogate temperature data generated from assumed values of h_θ, first the asymptotic variant of the problem is solved using the Markov chain Monte Carlo method in a Bayesian framework to generate an estimate of h_θ. The estimate of h_θ is then used as an informative prior for solving the inverse problem of determining h_θ and α from θ(x, t), and the effect of the prior is quantitatively assessed by performing estimation with and without it. Finally, for purposes of validation, in-house experiments have been done where θ(x, t) is generated using liquid crystal thermography and these data are used to validate the estimates.
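The prior-generation idea admits a conjugate-Gaussian caricature: the estimate from the asymptotic (steady-state) stage acts as a Gaussian prior that shrinks the posterior uncertainty of the full transient problem. The numbers below are invented, and the paper's actual MCMC machinery is not reproduced.

```python
def normal_posterior(prior_mean, prior_var, like_mean, like_var):
    """Posterior of a Gaussian prior times a Gaussian likelihood (conjugate update):
    precisions add, and the mean is the precision-weighted average."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
    post_mean = post_var * (prior_mean / prior_var + like_mean / like_var)
    return post_mean, post_var

# Stage 1 (asymptotic variant): steady-state data alone give h = 25 +/- 3
h_asym, var_asym = 25.0, 9.0
# Stage 2 (full problem): transient data alone give h = 28 +/- 5
h_full, var_full = 28.0, 25.0

h_post, var_post = normal_posterior(h_asym, var_asym, h_full, var_full)
```

The posterior variance is smaller than either stage's variance on its own, which is the quantitative sense in which the asymptotic-variant prior "significantly reduces the uncertainties" in the final estimate.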
[Automatic Measurement of the Stellar Atmospheric Parameters Based Mass Estimation].
Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng
2015-11-01
We have collected massive stellar spectral data in recent years, so the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g, and metallic abundance [Fe/H]) has become an important issue. The automatic measurement of these three parameters is significant for scientific problems such as the evolution of the universe. Research on this problem is not yet widespread, and some current methods cannot estimate the values of the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which can predict the stellar effective temperature Teff, surface gravity log g, and metallic abundance [Fe/H]. This method has a small amount of computation and fast training speed. The main idea of this method is to first build some mass distributions, then map the original spectral data into the mass space, and finally predict the stellar parameters with support vector regression (SVR) in the mass space. We chose stellar spectral data from SDSS DR8 for training and testing. We also compared the predicted results of this method with those of the SSPP and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively.
Estimating model parameters in nonautonomous chaotic systems using synchronization
NASA Astrophysics Data System (ADS)
Yang, Xiaoli; Xu, Wei; Sun, Zhongkui
2007-05-01
In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular, nonautonomous chaotic systems from time series of state variables. This technique uses an adaptive strategy for tracking unknown parameters in addition to a linear feedback coupling for synchronizing systems, and then some general conditions, by means of the periodic version of the LaSalle invariance principle for differential equations, are analytically derived to ensure precise evaluation of unknown parameters and identical synchronization between the concerned experimental system and its corresponding receiver. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only can quickly track the desired parameter values but also can rapidly respond to changes in operating parameters. In addition, the technique is favorably robust against the effect of noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
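A scalar caricature of the technique (the Letter treats a 4D oscillator and the Ueda oscillator; here a driven first-order system with one unknown parameter): a feedback-coupled receiver plus the standard gradient adaptation law a_hat' = -γ·x·e drives the parameter estimate to the true value, with a Lyapunov function V = e²/2 + (a - a_hat)²/(2γ) decreasing along trajectories. All gains and the true parameter are invented.

```python
import math

# True system: x' = -a*x + sin(t), with a unknown to the receiver.
# Receiver:    x_hat' = -a_hat*x + sin(t) + K*e,  e = x - x_hat
# Adaptation:  a_hat' = -gamma * x * e
A_TRUE, K, GAMMA, DT = 2.0, 5.0, 10.0, 1e-3

x, x_hat, a_hat = 0.5, 0.0, 0.0
t = 0.0
for _ in range(200000):            # 200 s of forward-Euler integration
    u = math.sin(t)
    e = x - x_hat
    dx = -A_TRUE * x + u
    dx_hat = -a_hat * x + u + K * e   # receiver uses the measured state x
    da_hat = -GAMMA * x * e
    x += DT * dx
    x_hat += DT * dx_hat
    a_hat += DT * da_hat
    t += DT
```

The sinusoidal drive supplies the persistent excitation the convergence conditions require; with a constant input at x = 0 the estimate would stall.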
Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo
2015-01-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786
Estimation of Cometary Rotation Parameters Based on Camera Images
NASA Technical Reports Server (NTRS)
Spindler, Karlheinz
2007-01-01
The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler) yielding information on the heliocentric spacecraft state and images taken by an on-board camera yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.
Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency
NASA Astrophysics Data System (ADS)
Forghani, A.; Peralta, R. C.
2015-12-01
We contrast the computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well using several modeling situations. Test situations differ in the employed finite difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly smaller discretization capable of analyzing ASR recovery efficiency. Some addressed situations employ parameters interpolated from the coarse valley model. Other situations employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies evaluating ASR recovery efficiency consider only hypothetical uniform specified boundary heads and gradients, assuming homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.
Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization
NASA Astrophysics Data System (ADS)
Nitta, Naotaka; Takeda, Naoto
2008-05-01
The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
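The differential-window trade-off studied above can be sketched on a synthetic Poiseuille (parabolic) velocity profile: a wider central-difference window suppresses measurement noise without biasing the shear rate, since the central difference is exact for a parabola. The vessel radius, peak velocity and noise level below are invented, not taken from the paper's experiments.

```python
import math
import random

random.seed(1)
R, V_MAX, DR = 4e-3, 0.5, 1e-4      # radius (m), peak velocity (m/s), sample spacing (m)
NOISE = 0.01                        # velocity measurement noise std (m/s)

radii = [i * DR for i in range(41)]  # 41 samples across the radius
v = [V_MAX * (1 - (r / R) ** 2) + random.gauss(0.0, NOISE) for r in radii]

def shear_rate(v, i, half_width, dr):
    """Central-difference shear rate dv/dr with a window of +/- half_width samples."""
    return (v[i + half_width] - v[i - half_width]) / (2 * half_width * dr)

i = 20                                        # evaluate at r = 2 mm
true_rate = -2 * V_MAX * radii[i] / R ** 2    # analytic dv/dr = -2*v_max*r/R^2
narrow = shear_rate(v, i, 1, DR)              # noisy: noise is divided by 2*DR
wide = shear_rate(v, i, 10, DR)               # noise divided by 20*DR, no parabola bias
```

For non-parabolic (e.g. blunted) profiles the wide window does introduce bias, which is why the paper optimizes the window width and the number of averaged profiles jointly.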
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for estimation of consistent, spatially distributed model parameters from a limited amount of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet current approaches often implicitly assume that the transfer functions are known. In fact, these hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, an estimation of the distributed parameters of the rainfall-runoff model is also undertaken. The method combines two steps to achieve this. The first generates different possible transfer functions. The second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context-free grammar concept. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. Therefore, the contribution gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a search space for candidate transfer functions.
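A toy version of the grammar machinery: a context-free grammar over hypothetical spatial predictors, expanded grammatical-evolution style with a codon sequence selecting productions (wrapping around when codons run out). The grammar and predictor names are invented for illustration, not taken from the contribution.

```python
# Each codon picks one production for the leftmost nonterminal, so a fixed
# codon sequence deterministically maps to one candidate transfer function.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<var>"], ["<coef>", "*", "<var>"]],
    "<var>": [["slope"], ["elevation"]],
    "<coef>": [["0.5"], ["2.0"]],
}

def expand(symbol, codons, pos=0):
    """Expand `symbol`, consuming codons left to right; returns (text, next_pos)."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol, pos
    options = GRAMMAR[symbol]
    rule = options[codons[pos % len(codons)] % len(options)]
    pos += 1
    out = []
    for sym in rule:
        text, pos = expand(sym, codons, pos)
        out.append(text)
    return "".join(out), pos

expr, _ = expand("<expr>", [2, 0, 1])
```

In grammatical evolution the codon list is the genome an evolutionary algorithm mutates and recombines, while a separate fitness evaluation (here it would be the rainfall-runoff calibration) scores the decoded transfer function.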
Optimal sensor placement for parameter estimation of bridges
NASA Astrophysics Data System (ADS)
Eskew, Edward; Jang, Shinae
2017-04-01
Gathering measurements from a structure can be extremely valuable for tasks such as verifying a numerical model, or structural health monitoring (SHM) to identify changes in the natural frequencies and mode shapes which can be attributed to changes in the system. In most monitoring applications, the number of potential degrees-of-freedom (DOF) for monitoring greatly outnumbers the available sensors. Optimal sensor placement (OSP) is a field of research into different methods for locating the available sensors to gather the optimal measurements. Three common methods of OSP are the effective independence (EI), effective independence driving point residue (EI-DPR), and modal kinetic energy (MKE) methods. However, comparisons of the different OSP methods for SHM applications are limited. In this paper, a comparison of the performance of the three described OSP methods for parameter estimation is performed. Parameter estimation is implemented using modified parameter localization with direct model updating, and added mass quantification utilizing a genetic algorithm (GA). The quantification of the mass addition, using simulated measurements from the sensor networks developed by each OSP method, is compared to provide an evaluation of each OSP method's capability for parameter estimation applications.
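A compact sketch of the first of the three methods, effective independence (EI), for two target modes: repeatedly delete the candidate DOF whose leverage phi_i (Phi^T Phi)^(-1) phi_i^T contributes least to the linear independence of the mode shapes. The 6-DOF mode shapes below are invented, not taken from the paper's bridge models.

```python
import math

def ei_placement(phi, n_sensors):
    """Effective independence for 2 modes. phi: one row of 2 mode-shape values
    per candidate DOF; returns the indices of the retained sensor DOFs."""
    dofs = list(range(len(phi)))
    while len(dofs) > n_sensors:
        # A = Phi^T Phi restricted to the remaining DOFs (2x2 here)
        a11 = sum(phi[i][0] ** 2 for i in dofs)
        a12 = sum(phi[i][0] * phi[i][1] for i in dofs)
        a22 = sum(phi[i][1] ** 2 for i in dofs)
        det = a11 * a22 - a12 * a12

        def ei(i):
            # EI value of DOF i: phi_i * A^{-1} * phi_i^T (leverage of that row)
            p, q = phi[i]
            return (p * (a22 * p - a12 * q) + q * (a11 * q - a12 * p)) / det

        dofs.remove(min(dofs, key=ei))
    return dofs

# Hypothetical first two mode shapes of a 6-DOF structure
phi = [[math.sin(math.pi * k / 12.0), math.sin(math.pi * k / 6.0)]
       for k in range(1, 7)]
sensors = ei_placement(phi, 3)
```

Deleting the minimum-leverage row shrinks det(Phi^T Phi) by the smallest possible factor (1 - EI_i) at each step, which is the greedy determinant-preservation argument behind EI.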
Parameter estimation method for blurred cell images from fluorescence microscope
NASA Astrophysics Data System (ADS)
He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin
2016-10-01
Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, as this results in a low signal-to-noise ratio (SNR) and poor image quality, hence affecting the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images based on the power-law properties of the power spectrum of cell images is proposed. The circular Radon transform (CRT) is used to identify the zero-mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm. Following this, the parameters are optimized through the gradient descent method. Using synthetic experiments, it was confirmed that the proposed method effectively increased the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results involving actual microscopic cell images verified the superiority of the proposed parameter estimation method for blurred microscopic cell images over other methods in terms of qualitative visual sense as well as quantitative gradient and PSNR.
Anisotropic parameter estimation using velocity variation with offset analysis
Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A.
2013-09-09
Seismic anisotropy is defined as the dependence of velocity upon angle or offset. Knowledge of the anisotropy effect on seismic data is important in amplitude analysis, the stacking process, and time-to-depth conversion. Due to this anisotropic effect, reflectors cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters, ε and δ, using two proposed methods. The first method, called velocity variation with offset (VVO), is based on a simplification of the weak anisotropy equation. In the VVO method, the velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach, in which the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods, while the inversion-based method gives a better estimate of ε. This study shows that estimation of the anisotropic parameters relies on the accuracy of the normal moveout velocity, the residual moveout, and the offset-to-angle transformation.
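The simultaneous linear inversion described above can be sketched with Thomsen's weak-anisotropy P-wave phase-velocity approximation, v(θ) ≈ Vp0 (1 + δ sin²θ cos²θ + ε sin⁴θ), which is linear in (Vp0, Vp0·δ, Vp0·ε). The function below is an illustrative sketch of this idea, not the paper's exact parameterization (which works in the offset/moveout domain):

```python
import numpy as np

def invert_thomsen(theta, v):
    """Least-squares estimate of (Vp0, delta, epsilon) from phase
    velocities v measured at propagation angles theta (radians),
    using Thomsen's weak-anisotropy approximation."""
    s2 = np.sin(theta) ** 2
    c2 = np.cos(theta) ** 2
    # v = Vp0*1 + (Vp0*delta)*s2*c2 + (Vp0*eps)*s2^2  -> linear system
    G = np.column_stack([np.ones_like(theta), s2 * c2, s2 ** 2])
    u, *_ = np.linalg.lstsq(G, v, rcond=None)
    vp0 = u[0]
    return vp0, u[1] / vp0, u[2] / vp0
```

On noise-free synthetic velocities the three parameters are recovered exactly; with noisy data, the accuracy of ε depends strongly on having large angles (far offsets), consistent with the paper's conclusion.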
Empirical processes with estimated parameters under auxiliary information
NASA Astrophysics Data System (ADS)
Genz, Michael; Haeusler, Erich
2006-02-01
Empirical processes with estimated parameters are a well established subject in nonparametric statistics. In the classical theory they are based on the empirical distribution function which is the nonparametric maximum likelihood estimator for a completely unknown distribution function. In the presence of some "nonparametric" auxiliary information about the distribution, like a known mean or a known median, for example, the nonparametric maximum likelihood estimator is a modified empirical distribution function which puts random masses on the observations in order to take the available information into account [see Owen, Biometrika 75 (1988) 237-249, Ann. Statist. 18 (1990) 90-120, Empirical Likelihood, Chapman & Hall/CRC, London/Boca Raton, FL; Qin and Lawless, Ann. Statist. 22 (1994) 300-325]. Zhang [Metrika 46 (1997) 221-244] has proved a functional central limit theorem for the empirical process pertaining to this modified empirical distribution function. We will consider the corresponding empirical process with estimated parameters here and derive its asymptotic distribution. The limiting process is a centered Gaussian process with a complicated covariance function depending on the unknown parameter. The result becomes useful in practice through the bootstrap, which is shown to be consistent in case of a known mean. The performance of the resulting bootstrap goodness-of-fit test based on the Kolmogorov-Smirnov statistic is studied through simulations.
A novel extended kernel recursive least squares algorithm.
Zhu, Pingping; Chen, Badong; Príncipe, José C
2012-08-01
In this paper, a novel extended kernel recursive least squares algorithm is proposed that combines the kernel recursive least squares (KRLS) algorithm with the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space, and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by KRLS in a reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and nonlinear Rayleigh fading channel tracking, and compare its tracking performance with those of other existing algorithms.
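For readers unfamiliar with KRLS, the core recursion can be sketched in its simplest form: every new sample is added to the kernel dictionary and the regularized kernel matrix inverse is updated with the block-matrix inversion lemma. This is a stripped-down sketch (no ALD sparsification, which the full KRLS of Engel et al. uses); class and parameter names are illustrative:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel between rows of a and a single point b."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

class SimpleKRLS:
    """Recursively maintains (K + lam*I)^-1 as samples arrive."""
    def __init__(self, lam=1e-2, gamma=1.0):
        self.lam, self.gamma = lam, gamma
        self.X = self.y = self.Ainv = None

    def update(self, x, y):
        x = np.atleast_1d(np.asarray(x, float))
        if self.X is None:
            self.X, self.y = x[None, :], np.array([float(y)])
            self.Ainv = np.array([[1.0 / (rbf(x, x, self.gamma) + self.lam)]])
            return
        b = rbf(self.X, x, self.gamma)           # kernel vs dictionary
        d = rbf(x, x, self.gamma) + self.lam     # new diagonal entry
        Ab = self.Ainv @ b
        s = 1.0 / (d - b @ Ab)                   # Schur complement
        self.Ainv = np.block([[self.Ainv + s * np.outer(Ab, Ab), -s * Ab[:, None]],
                              [-s * Ab[None, :], np.array([[s]])]])
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, float(y))

    def predict(self, x):
        alpha = self.Ainv @ self.y               # dual weights
        return rbf(self.X, np.atleast_1d(np.asarray(x, float)), self.gamma) @ alpha
```

Each update costs O(n²) rather than the O(n³) of refitting from scratch, which is what makes the recursive formulation attractive for online tracking.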
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
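As one concrete example of slug-test analysis (guideline 6), the classical Hvorslev method fits the exponential head recovery ln(H/H0) = -t/T0 and converts the basic time lag T0 to hydraulic conductivity. This is a generic textbook sketch, not the KGS analysis procedure; geometry assumptions (casing radius rc, well radius R, screen length Le) are illustrative:

```python
import numpy as np

def hvorslev_K(t, H, H0, rc, R, Le):
    """Hvorslev slug-test estimate of hydraulic conductivity.

    t: times since slug introduction; H: head displacements; H0: initial
    displacement; rc: casing radius; R: effective well radius; Le: screen
    length (same length units throughout).
    """
    # Slope of ln(H/H0) vs t is -1/T0, T0 the basic time lag
    slope = np.polyfit(t, np.log(H / H0), 1)[0]
    T0 = -1.0 / slope
    return rc ** 2 * np.log(Le / R) / (2.0 * Le * T0)
```

Fitting the full recovery curve (rather than picking a single point) is one way to use the "large quantity of high quality data" that guideline 4 calls for.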
Informed spectral analysis: audio signal parameter estimation using side information
NASA Astrophysics Data System (ADS)
Fourer, Dominique; Marchand, Sylvain
2013-12-01
Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and may require a larger bitrate and entail a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation at a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
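The "analysis" half of the informed approach starts from an ordinary estimator of sinusoidal parameters. A minimal example is reading frequency, amplitude, and phase off the dominant DFT bin; it is this kind of plain estimator whose precision the side information is meant to improve (this sketch has no side information and assumes the frequency falls on a bin):

```python
import numpy as np

def estimate_sinusoid(x, fs):
    """Estimate (frequency, amplitude, phase) of the dominant sinusoid
    x[t] ~ A*cos(2*pi*f*t + phi) from the peak of the DFT magnitude."""
    n = len(x)
    X = np.fft.rfft(x)
    k = np.argmax(np.abs(X[1:])) + 1   # skip the DC bin
    freq = k * fs / n
    amp = 2.0 * np.abs(X[k]) / n       # one-sided spectrum scaling
    phase = np.angle(X[k])
    return freq, amp, phase
```

For off-bin frequencies and in noise, the estimates degrade toward the Cramér-Rao limits discussed above, which is precisely the gap the coded side information is designed to close.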
Geomagnetic modeling by optimal recursive filtering
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Estes, R. H.
1981-01-01
The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparisons with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
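The information-filter combination step has a compact closed form: independent estimates add in information space, with the fused information matrix Λ = Σᵢ Pᵢ⁻¹ and fused state x̂ = Λ⁻¹ Σᵢ Pᵢ⁻¹ x̂ᵢ. A minimal sketch of that fusion rule (assuming independent Gaussian estimates; not the paper's full filter with dynamics):

```python
import numpy as np

def fuse_information(estimates):
    """Fuse independent (x, P) estimates in information form.

    estimates: iterable of (state vector, covariance matrix) pairs.
    Returns the fused state and fused covariance.
    """
    Lam = sum(np.linalg.inv(P) for _, P in estimates)   # information matrices add
    eta = sum(np.linalg.inv(P) @ x for x, P in estimates)
    P_fused = np.linalg.inv(Lam)
    return P_fused @ eta, P_fused
```

In one dimension this reduces to inverse-variance weighting: fusing equal-variance estimates of 1 and 3 gives 2 with half the variance.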
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially, and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC, this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
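The generalized least-squares idea behind IUWLS can be sketched as follows: the residual covariance is inflated by the input (pumping) uncertainty propagated through a sensitivity matrix J, and the resulting weight matrix down-weights observations most affected by uncertain pumping. This is an illustrative sketch of the concept, not the paper's iterative scheme; all names are hypothetical:

```python
import numpy as np

def iuwls(X, y, sigma_obs, J, sigma_q):
    """Generalized least squares with input-uncertainty-inflated weights.

    X: design matrix; y: observations; sigma_obs: observation noise std;
    J: sensitivity of predictions to the uncertain inputs (pumping rates);
    sigma_q: std of the input uncertainty.
    """
    # Residual covariance = measurement noise + propagated input uncertainty
    C = sigma_obs ** 2 * np.eye(len(y)) + sigma_q ** 2 * (J @ J.T)
    W = np.linalg.inv(C)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With sigma_q = 0 this reduces to ordinary weighted least squares; as sigma_q grows, observations whose predictions are sensitive to the pumping inputs lose influence, which is the mechanism that suppresses the bias discussed above.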
Estimation of atmospheric parameters from time-lapse imagery
NASA Astrophysics Data System (ADS)
McCrae, Jack E.; Basu, Santasri; Fiorino, Steven T.
2016-05-01
A time-lapse imaging experiment was conducted to estimate various atmospheric parameters for the imaging path. Atmospheric turbulence caused frame-to-frame shifts of the entire image as well as parts of the image. The statistics of these shifts encode information about the turbulence strength (as characterized by Cn2, the refractive index structure function constant) along the optical path. The shift variance observed is simply proportional to the variance of the tilt of the optical field averaged over the area being tracked. By presuming this turbulence follows the Kolmogorov spectrum, weighting functions can be derived which relate the turbulence strength along the path to the shifts measured. These weighting functions peak at the camera and fall to zero at the object. The larger the area observed, the more quickly the weighting function decays. One parameter we would like to estimate is r0 (the Fried parameter, or atmospheric coherence diameter). The weighting functions derived for pixel sized or larger parts of the image all fall faster than the weighting function appropriate for estimating the spherical wave r0. If we presume Cn2 is constant along the path, then an estimate for r0 can be obtained for each area tracked, but since the weighting function for r0 differs substantially from that for every realizable tracked area, it can be expected this approach would yield a poor estimator. Instead, the weighting functions for a number of different patch sizes can be combined through the Moore-Penrose pseudo-inverse to create a new weighting function which yields the least-squares optimal linear combination of measurements for estimation of r0. This approach is carried out, and it is observed that this approach is somewhat noisy because the pseudo-inverse assigns weights much greater than one to many of the observations.
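The pseudo-inverse combination step can be sketched directly: each measurement is a path integral of Cn2 against its own weighting function (a row of W), and we seek coefficients c such that c·W approximates the target r0 weighting g, so that the same combination of the measurements approximates the target integral. A minimal sketch with a discretized path (variable names are illustrative):

```python
import numpy as np

def combine_for_r0(W, g, measurements):
    """Least-squares combination of patch measurements toward a target weighting.

    W: (n_patches, n_path_samples) matrix, each row a weighting function;
    g: target weighting function for r0, sampled on the same path grid;
    measurements: per-patch shift-variance measurements, m_i = W_i . Cn2.
    """
    c = g @ np.linalg.pinv(W)      # solve c @ W ~= g in least squares
    return c @ measurements, c
```

When g lies in the row space of W the combination is exact; in general it is the least-squares fit, and as the abstract notes, large coefficients in c amplify the measurement noise.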
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20%. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
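The Bloom-filter field encoding used in this kind of privacy-preserving linkage is commonly built by hashing a string's q-grams into an m-bit filter with k double-hashed positions, and comparing filters with the Dice coefficient. A generic sketch of that construction (parameters m, k, q and the double-hashing scheme are common choices in the literature, not necessarily the paper's):

```python
import hashlib

def bloom_encode(value, m=64, k=4, q=2):
    """Encode a string's padded q-grams into an m-bit Bloom filter,
    represented here as a set of bit positions."""
    bits = set()
    padded = f"_{value.lower()}_"
    for i in range(len(padded) - q + 1):
        gram = padded[i:i + q].encode()
        h1 = int(hashlib.md5(gram).hexdigest(), 16)
        h2 = int(hashlib.sha1(gram).hexdigest(), 16)
        for j in range(k):                 # double hashing: h1 + j*h2 mod m
            bits.add((h1 + j * h2) % m)
    return bits

def dice(a, b):
    """Dice similarity between two Bloom filters (bit-position sets)."""
    return 2 * len(a & b) / (len(a) + len(b))
```

Because similar strings share q-grams, their filters share bit positions and score a high Dice similarity without either plaintext being revealed; the EM step described above then estimates match probabilities from these similarity scores.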
ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST
Carlin, Jeffrey L.; Newberg, Heidi Jo; Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong; Beers, Timothy C.; Chen, Li; Hou, Jinliang; Smith, Martin C.; Guhathakurta, Puragra; Lépine, Sébastien; Yanny, Brian; Zheng, Zheng
2015-07-15
We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
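The core of the Bayesian distance estimate above is a likelihood comparison of each star's measured parameters against an isochrone grid, yielding a posterior over absolute magnitude, which combines with the apparent magnitude through the distance modulus. A toy sketch of that machinery (synthetic two-point grid; this is not the LAMOST pipeline, and the luminosity-function and selection-effect corrections are omitted):

```python
import numpy as np

def absmag_posterior(obs, err, grid_params, grid_absmag):
    """Posterior weights over grid absolute magnitudes from a Gaussian
    likelihood in the measured stellar parameters (e.g. Teff, logg, [Fe/H]).

    obs, err: measured parameters and their uncertainties;
    grid_params: (n_grid, n_params) isochrone grid; grid_absmag: (n_grid,).
    """
    chi2 = np.sum(((grid_params - obs) / err) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))   # subtract min for stability
    return grid_absmag, w / w.sum()

def distance_pc(app_mag, abs_mag):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d/10)."""
    return 10 ** ((app_mag - abs_mag + 5) / 5)
```

A star with m = 10 and M = 5 has a distance modulus of 5, i.e. 100 pc; the ~40% distance errors quoted above correspond roughly to the 0.7-0.8 mag spread of the absolute-magnitude posterior.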
Terrain mechanical parameters online estimation for lunar rovers
NASA Astrophysics Data System (ADS)
Liu, Bing; Cui, Pingyuan; Ju, Hehua
2007-11-01
This paper presents a new method for terrain mechanical parameter estimation for a wheeled lunar rover. First, after deducing detailed distribution expressions for the normal stress and shear stress at the wheel-terrain interface, the force/torque balance equations of the drive wheel used to compute the terrain mechanical parameters are derived by analyzing the rigid drive wheel of a lunar rover moving at uniform speed over deformable terrain. Then a two-point Gauss-Legendre numerical integration is used to simplify the balance equations; after simplification and rearrangement, a solution model composed of three non-linear equations is obtained. Finally, Newton's iterative method and the steepest descent method are combined to solve the non-linear equations, and the outputs of on-board virtual sensors are used to compute the key terrain mechanical parameters, i.e., the internal friction angle and the pressure-sinkage parameters. Simulation results show correctness under high noise disturbance and effectiveness with low computational complexity, which allows a lunar rover to estimate terrain mechanical parameters online.
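The two-point Gauss-Legendre rule used to simplify the stress integrals is a standard quadrature: on [-1, 1] it evaluates the integrand at ±1/√3 with unit weights, and is exact for polynomials up to degree 3. A minimal implementation for an arbitrary interval:

```python
def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre quadrature of f over [a, b].

    Nodes +-1/sqrt(3) and weights 1 on [-1, 1], mapped affinely to [a, b];
    exact for polynomials up to degree 3.
    """
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    x = 3 ** -0.5
    return half * (f(mid - half * x) + f(mid + half * x))
```

Replacing the full contact-stress integrals with this two-evaluation rule is what keeps the per-step computational cost low enough for online estimation.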
Observable Priors: Limiting Biases in Estimated Parameters for Incomplete Orbits
NASA Astrophysics Data System (ADS)
Kosmo, Kelly; Martinez, Gregory; Hees, Aurelien; Witzel, Gunther; Ghez, Andrea M.; Do, Tuan; Sitarski, Breann; Chu, Devin; Dehghanfar, Arezu
2017-01-01
Over twenty years of monitoring stellar orbits at the Galactic center has provided an unprecedented opportunity to study the physics and astrophysics of the supermassive black hole (SMBH) at the center of the Milky Way Galaxy. In order to constrain the mass of and distance to the black hole, and to evaluate its gravitational influence on orbiting bodies, we use Bayesian statistics to infer black hole and stellar orbital parameters from astrometric and radial velocity measurements of stars orbiting the central SMBH. Unfortunately, most of the short period stars in the Galactic center have periods much longer than our twenty year time baseline of observations, resulting in incomplete orbital phase coverage--potentially biasing fitted parameters. Using the Bayesian statistical framework, we evaluate biases in the black hole and orbital parameters of stars with varying phase coverage, using various prior models to fit the data. We present evidence that incomplete phase coverage of an orbit causes prior assumptions to bias statistical quantities, and propose a solution to reduce these biases for orbits with low phase coverage. The explored solution assumes uniformity in the observables rather than in the inferred model parameters, as is the current standard method of orbit fitting. Of the cases tested, priors that assume uniform astrometric and radial velocity observables reduce the biases in the estimated parameters. The proposed method will not only improve orbital estimates of stars orbiting the central SMBH, but can also be extended to other orbiting bodies with low phase coverage such as visual binaries and exoplanets.
Dynamic state and parameter estimation applied to neuromorphic systems.
Neftci, Emre Ozgur; Toth, Bryan; Indiveri, Giacomo; Abarbanel, Henry D I
2012-07-01
Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, there is an increasing number of hardware electronic analogs of biological neural systems being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model to determine its parameters and internal state variables. Typically experimental data are obtained from the biological neural system and the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems.
A fast schema for parameter estimation in diffusion kurtosis imaging
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Liu, Wei; Yang, Guang; Wu, Dongmei; Zhou, Yongdi; Peterson, Bradley S.; Xu, Dongrong
2014-01-01
Diffusion kurtosis imaging (DKI) is a new model in magnetic resonance imaging (MRI) characterizing restricted diffusion of water molecules in living tissues. We propose a method for fast estimation of the DKI parameters. These parameters, the apparent diffusion coefficient (ADC) and apparent kurtosis coefficient (AKC), are evaluated using an alternative iteration schema (AIS). This schema first roughly estimates a pair of ADC and AKC values from a subset of the DKI data acquired at 3 b-values. It then iteratively and alternately updates the ADC and AKC until they converge. This approach employs linear least-squares fitting to minimize estimation error in each iteration. In addition to the common physical and biological constraints that set the upper and lower boundaries of the ADC and AKC values, we use a smoothing procedure to ensure that estimation is robust. Quantitative comparisons between our AIS methods and the conventional methods of unconstrained nonlinear least squares (UNLS) using both synthetic and real data showed that our unconstrained AIS method can significantly accelerate the estimation procedure without compromising its accuracy, with the computational time for a DKI dataset successfully reduced to only one or two minutes. Moreover, the incorporation of the smoothing procedure using one of our AIS methods can significantly enhance the contrast of AKC maps and greatly improve the visibility of details in fine structures. PMID:25016957
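The rough initial estimate from 3 b-values has a closed form, because the DKI signal model ln S(b) = ln S0 - b·D + (b²/6)·D²·K is linear in the coefficients (ln S0, D, D²K). A sketch of that starting fit (the constrained alternating updates and smoothing of the AIS are omitted):

```python
import numpy as np

def fit_dki(b, S):
    """Linear least-squares fit of the DKI model at >= 3 b-values.

    b: b-values (s/mm^2); S: signal intensities.
    Returns the apparent diffusion coefficient D and kurtosis K.
    """
    # ln S = c0 - b*c1 + (b^2/6)*c2  with c0 = ln S0, c1 = D, c2 = D^2 K
    B = np.column_stack([np.ones_like(b), -b, b ** 2 / 6.0])
    c, *_ = np.linalg.lstsq(B, np.log(S), rcond=None)
    D = c[1]
    return D, c[2] / D ** 2
```

With exactly three b-values the system is square and the fit is exact; the AIS then alternates constrained updates of D and K from this starting point until convergence.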
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-10-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
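The two-phase structure described above is a general adaptive-MCMC pattern: a learning phase estimates the posterior covariance, and the sampling phase uses it to shape a multivariate-normal Metropolis-Hastings proposal. A generic sketch of that pattern on an arbitrary log-posterior (the Gibbs learning phase and integrated-out random effects of the paper are replaced here by a plain random-walk learning phase; the 2.38²/d scaling is the standard Haario et al. choice):

```python
import numpy as np

def two_phase_mh(logpost, x0, n_learn=2000, n_sample=5000, seed=0):
    """Phase 1: isotropic random-walk MH to learn the posterior covariance.
    Phase 2: MH with a multivariate-normal proposal built from that covariance."""
    rng = np.random.default_rng(seed)
    d = len(x0)

    def run(x, n, propose):
        chain, lp = [], logpost(x)
        for _ in range(n):
            xn = x + propose(rng)
            lpn = logpost(xn)
            if np.log(rng.uniform()) < lpn - lp:   # Metropolis accept step
                x, lp = xn, lpn
            chain.append(x)
        return np.array(chain)

    learn = run(np.asarray(x0, float), n_learn,
                lambda r: 0.5 * r.standard_normal(d))
    cov = np.cov(learn.T) + 1e-9 * np.eye(d)       # learned covariance + jitter
    L = np.linalg.cholesky((2.38 ** 2 / d) * cov)  # Haario et al. scaling
    return run(learn[-1], n_sample, lambda r: L @ r.standard_normal(d))
```

Matching the proposal to the learned covariance is what gives the improved mixing reported above: steps are long in well-determined directions and short along correlated ridges of the posterior.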
Beef quality parameters estimation using ultrasound and color images.
Nunes, Jose; Piquerez, Martín; Pujadas, Leonardo; Armstrong, Eileen; Fernández, Alicia; Lecumberry, Federico
2015-01-01
Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted on a region of interest, previously detected in both ultrasound and color images, were proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat.
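The regression step (image features of a region of interest mapped to fat percentage) can be illustrated without the paper's data. As a lightweight stand-in for SVR, the sketch below uses RBF kernel ridge regression on synthetic features; the feature values and target relationship are entirely hypothetical:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, lam=1e-2, gamma=0.5):
    """Kernel ridge regression: a simple stand-in for the SVR step."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# hypothetical texture/intensity features per ROI -> fat percentage
rng = np.random.default_rng(1)
X = rng.random((40, 3))
y = 2.0 + 5.0 * X[:, 0] + rng.normal(0, 0.05, 40)   # synthetic target
predict = fit_kernel_ridge(X, y)
```

A real implementation would train an SVR on expert-labeled and chemically analyzed samples, as described in the abstract.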
Parameter Estimation as a Problem in Statistical Thermodynamics
NASA Astrophysics Data System (ADS)
Earle, Keith A.; Schneider, David J.
2011-03-01
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
Estimation of economic parameters of U.S. hydropower resources
Hall, Douglas G.; Hunt, Richard T.; Reeves, Kelly S.; Carroll, Greg R.
2003-06-01
Tools for estimating the cost of developing, operating, and maintaining hydropower resources in the form of regression curves were developed based on historical plant data. Development costs that were addressed included: licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”
Maximum-likelihood estimation of circle parameters via convolution.
Zelniker, Emanuel E; Clarkson, I Vaughan L
2006-04-01
The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to use these estimates as preliminary inputs to various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images.
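The Delogne-Kåsa estimator admits a closed linear least-squares form: writing the circle as x^2 + y^2 = 2ax + 2by + c with center (a, b) and radius r = sqrt(c + a^2 + b^2), the parameters drop out of one `lstsq` call. A minimal sketch:

```python
import numpy as np

def kasa_fit(x, y):
    """Delogne-Kåsa circle fit: linear least squares on
    x^2 + y^2 = 2*a*x + 2*b*y + c, with r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    z = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, z, rcond=None)[0]
    return a, b, np.sqrt(c + a * a + b * b)
```

In the paper's setting such an estimate would serve as the preliminary input that the convolution-based MLE then refines to subpixel accuracy.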
Parameter estimation in X-ray astronomy using maximum likelihood
NASA Technical Reports Server (NTRS)
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used minimum chi-squared alternatives because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
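Maximizing a Poisson likelihood over binned counts is equivalent to minimizing the Cash statistic, C = 2*sum(m_i - n_i*ln m_i). As a sketch under assumed specifics (a power-law spectrum and a simple grid search rather than the paper's scoring method):

```python
import numpy as np

def cash(counts, model):
    # Cash statistic: -2 * Poisson log-likelihood, up to a constant
    return 2.0 * np.sum(model - counts * np.log(model))

def fit_power_law(E, counts, gammas=np.linspace(0.5, 4.0, 701)):
    """Grid-search ML fit of counts_i ~ Poisson(A * E_i**-gamma).
    For each gamma the normalization A has a closed form at the ML point."""
    best = (np.inf, None, None)
    for g in gammas:
        shape = E ** -g
        A = counts.sum() / shape.sum()          # ML normalization given gamma
        c = cash(counts, A * shape)
        if c < best[0]:
            best = (c, g, A)
    return best[1], best[2]
```

Unlike minimum chi-squared, this objective needs no Gaussian approximation per bin, which is why it remains valid for low-count spectra.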
PYMORPH: automated galaxy structural parameter estimation using PYTHON
NASA Astrophysics Data System (ADS)
Vikram, Vinu; Wadadekar, Yogesh; Kembhavi, Ajit K.; Vijayagovindan, G. V.
2010-12-01
We present a new software pipeline - PYMORPH - for automated estimation of structural parameters of galaxies. Both parametric fits through a two-dimensional bulge disc decomposition and structural parameter measurements like concentration, asymmetry etc. are supported. The pipeline is designed to be easy to use yet flexible; individual software modules can be replaced with ease. A find-and-fit mode is available so that all galaxies in an image can be measured with a simple command. A parallel version of the PYMORPH pipeline runs on computer clusters, and a virtual observatory compatible web-enabled interface is under development.
Estimation of the parameters of ETAS models by Simulated Annealing.
Lombardi, Anna Maria
2015-02-12
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
Estimation of drying parameters in rotary dryers using differential evolution
NASA Astrophysics Data System (ADS)
Lobato, F. S.; Steffen, V., Jr.; Arruda, E. B.; Barrozo, M. A. S.
2008-11-01
Inverse problems arise from the necessity of obtaining parameters of theoretical models to simulate the behavior of the system for different operating conditions. Several heuristics that mimic different phenomena found in nature have been proposed for the solution of this kind of problem. In this work, the Differential Evolution Technique is used for the estimation of drying parameters in realistic rotary dryers, which is formulated as an optimization problem by using experimental data. Test case results demonstrate both the feasibility and the effectiveness of the proposed methodology.
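The inverse-problem formulation above reduces to minimizing a misfit between simulated and experimental data over the drying parameters. A minimal sketch of standard differential evolution (DE/rand/1/bin; not the authors' exact configuration, and with a generic objective standing in for the dryer model):

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    """Minimize f over box bounds with DE/rand/1/bin."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((pop, lo.size)) * (hi - lo)
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutate: combine three distinct members other than i
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutant component
            cross = rng.random(lo.size) < CR
            cross[rng.integers(lo.size)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fX[i]:                      # greedy selection
                X[i], fX[i] = trial, ft
    k = fX.argmin()
    return X[k], fX[k]
```

In the paper's setting, f(x) would run the rotary dryer model at parameter vector x and return the misfit against the experimental data.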
Systematic parameter estimation for PEM fuel cell models
NASA Astrophysics Data System (ADS)
Carnes, Brian; Djilali, Ned
The problem of parameter estimation is considered for the case of mathematical models for polymer electrolyte membrane fuel cells (PEMFCs). An algorithm for nonlinear least squares constrained by partial differential equations is defined and applied to estimate effective membrane conductivity, exchange current densities and oxygen diffusion coefficients in a one-dimensional PEMFC model for transport in the principal direction of current flow. Experimental polarization curves are fitted for conventional and low current density PEMFCs. Use of adaptive mesh refinement is demonstrated to increase the computational efficiency.
Application of parameter estimation to highly unstable aircraft
NASA Technical Reports Server (NTRS)
Maine, R. E.; Murray, J. E.
1986-01-01
The application of parameter estimation to highly unstable aircraft is discussed, including the problems in applying the output error method to such aircraft and a demonstration that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator are used to illustrate the problems and methods discussed.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space and that the algorithm tests other configurations with the goal of finding the globally optimal configuration.
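The conventional SA loop described above (random start, temperature-controlled acceptance of worse moves, a search region that shrinks as the temperature drops) can be sketched as follows; this illustrates the baseline algorithm, not the recursive-branching variant, and the linear cooling schedule is an assumption:

```python
import numpy as np

def simulated_annealing(f, lo, hi, n_iter=5000, T0=1.0, seed=0):
    """Conventional SA over a box [lo, hi] with a shrinking search region."""
    rng = np.random.default_rng(seed)
    x = lo + rng.random(lo.size) * (hi - lo)     # random starting configuration
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for k in range(n_iter):
        frac = 1.0 - k / n_iter
        T = T0 * frac                            # cooling schedule (assumed linear)
        step = 0.5 * (hi - lo) * frac            # search region shrinks over time
        cand = np.clip(x + rng.uniform(-1, 1, lo.size) * step, lo, hi)
        fc = f(cand)
        # always accept better moves; accept worse moves with probability exp(-dF/T)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f
```

RBSA parallelizes this search by recursively branching the parameter space, but each branch still runs an acceptance loop of this shape.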
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on an extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process where about 400 data points are used.
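The building block behind any RLS-based identifier, including the ERLS variant above, is the per-sample update of the parameter vector and its covariance. A generic sketch with a forgetting factor (the Wiener-model regressor construction and robustification are omitted):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam.
    theta: parameter estimate, P: covariance, phi: regressor, y: new output."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)               # gain vector
    theta = theta + k * (y - phi @ theta)       # correct by prediction error
    P = (P - np.outer(k, Pphi)) / lam           # covariance update
    return theta, P
```

The forgetting factor lam < 1 discounts old data, which is what lets the estimate track time-varying parameters.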
CosmoSIS: A System for MC Parameter Estimation
Zuntz, Joe; Paterno, Marc; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James
2015-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.
Identification of vehicle parameters and estimation of vertical forces
NASA Astrophysics Data System (ADS)
Imine, H.; Fridman, L.; Madani, T.
2015-12-01
The aim of the present work is to estimate the vertical forces and to identify the unknown dynamic parameters of a vehicle using the sliding mode observers approach. The estimation of vertical forces needs a good knowledge of dynamic parameters such as damping coefficient, spring stiffness and unsprung masses, etc. In this paper, suspension stiffness and unsprung masses have been identified by the Least Square Method. Real-time tests have been carried out on an instrumented static vehicle, excited vertically by hydraulic jacks. The vehicle is equipped with different sensors in order to measure its dynamics. The measurements coming from these sensors have been considered as unknown inputs of the system. However, only the roll angle and the suspension deflection measurements have been used in order to implement the observer. Experimental results are presented and discussed to show the quality of the proposed approach.
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with results from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
Probabilistic estimation of the constitutive parameters of polymers
NASA Astrophysics Data System (ADS)
Foley, J. R.; Jordan, J. L.; Siviour, C. R.
2012-08-01
The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.
Estimation of Geodetic and Geodynamical Parameters with VieVS
NASA Technical Reports Server (NTRS)
Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, Tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald
2010-01-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the connection of the single sessions is done by stacking at the normal equation level. We can determine time independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency dependent values in the diurnal band together with the resonance frequency of Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
On Using Exponential Parameter Estimators with an Adaptive Controller
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from referenced values of aerodynamic coefficients, inertia, center-of-gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments consist almost entirely of the linearization of these equations as required by the estimation algorithms.
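Augmenting the state vector with unknown parameters and linearizing, as described above, is the core of extended Kalman filtering. A toy illustration (not the Shuttle formulation): scalar dynamics x_{k+1} = a*x_k + u with unknown parameter a, estimated jointly via the augmented state z = [x, a]:

```python
import numpy as np

def ekf_step(z, P, u, y, Q, R):
    """One EKF predict/update step for the augmented state z = [x, a],
    with dynamics x_{k+1} = a*x_k + u and measurement y = x + noise."""
    x, a = z
    # predict
    z_pred = np.array([a * x + u, a])
    F = np.array([[a, x], [0.0, 1.0]])          # Jacobian of the dynamics
    P = F @ P @ F.T + Q
    # update
    H = np.array([1.0, 0.0])                    # we measure only x
    S = H @ P @ H + R
    K = P @ H / S
    z_new = z_pred + K * (y - z_pred[0])
    P = P - np.outer(K, H @ P)
    return z_new, P
```

The off-diagonal entry x in the Jacobian F is what couples measurement residuals to the parameter estimate; the same mechanism, at much higher dimension, drives the propulsion parameter estimation described in the abstract.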
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system whose governing equations cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of the equations. Multiple concurrent and independent recursive least squares identifications of each group are run, treating the other unknown parameters appearing in their regression equations as if they were known perfectly, with those values provided by the recursive least squares estimates from the other groups. This enables the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. The invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
Estimation of multidimensional precipitation parameters by areal estimates of oceanic rainfall
NASA Technical Reports Server (NTRS)
Valdes, J. B.; Nakamoto, S.; Shen, S. S. P.; North, G. R.
1990-01-01
The parameters of the multidimensional precipitation model proposed by Waymire et al. (1984) are estimated using the areal-averaged radar measurements of precipitation of the Global Atlantic Tropical Experiment (GATE) data set. The procedure followed was the fitting of the first- and second-order moments at different aggregation scales by nonlinear regression techniques. The numerical estimates of the parameters using different subsets of GATE information were reasonably stable, i.e., they were not affected by changes of the area-averaging size, temporal length of the records, and percentage of areal coverage of rainfall. This suggests that the estimation procedure is relatively robust and suitable to estimate the parameters of the multidimensional model in areas of sparse density of rain gages. The use of the space-time spectrum of rainfall to help in the determination of sampling errors due to intermittent visits of future space-borne low-altitude sensors of precipitation is also discussed.
Estimation of Two-Parameter Logistic Item Response Curves.
1983-12-01
[Fragmentary abstract; report-form residue removed.] Extends estimators developed for the one-parameter logistic (Rasch) model by Rigdon and Tsutakawa (1983), considering in particular the MLF estimator for a response model for n dichotomously scored items. Keywords: item responses, logistic model, EM algorithm, maximum likelihood.
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.
An Integrated Tool for Estimation of Material Model Parameters (PREPRINT)
2010-04-01
[Fragmentary abstract; distribution-statement residue removed.] The filtered v profiles are shown in Figure 4. For the plastic deformation data, the filtering could not correct the wf data near the top right corner, so the vf data are used for the parameter estimation. The geometry and loading are symmetric in the FEM model.
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1) considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit but simpler and has smaller bias than the "multiple shooting" previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need of using a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has been proved so far theoretically (not only numerically).
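The core estimation problem (fit the logistic-map parameter r together with the initial value x(1) from a short noisy segment) can be sketched with a brute-force grid search in place of the paper's segmentation-fitting ML. The short segment length directly reflects the trade-off the abstract notes: chaos makes long orbits forget x(1) exponentially fast.

```python
import numpy as np

def logistic_orbit(r, x1, n):
    x = np.empty(n)
    x[0] = x1
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def fit_logistic(obs, r_grid, x1_grid):
    """Grid search over (r, x1) minimizing squared error between the
    observed noisy segment and the orbit it would generate."""
    best = (np.inf, None, None)
    for r in r_grid:
        for x1 in x1_grid:
            err = np.sum((logistic_orbit(r, x1, obs.size) - obs) ** 2)
            if err < best[0]:
                best = (err, r, x1)
    return best[1], best[2]
```

With Gaussian observational noise this least-squares objective coincides with the ML objective for a single segment; the piece-wise ML of the paper stitches several such short fits together.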
Estimation of discontinuous coefficients and boundary parameters for hyperbolic systems
NASA Technical Reports Server (NTRS)
Lamm, P. K.; Murphy, K. A.
1986-01-01
The problem of estimating discontinuous coefficients, including locations of discontinuities, that occur in second order hyperbolic systems typical of those arising in I-D surface seismic problems is discussed. In addition, the problem of identifying unknown parameters that appear in boundary conditions for the system is treated. A spline-based approximation theory is presented, together with related convergence findings and representative numerical examples.
Hybrid fault diagnosis of nonlinear systems using neural parameter estimators.
Sobhani-Tehrani, E; Talebi, H A; Khorasani, K
2014-02-01
This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems that takes advantage of both the system's mathematical model and the adaptive nonlinear approximation capability of computational intelligence techniques. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPEs) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FPs) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed, each with its own set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. In contrast, the series-parallel scheme displays short FD delays and is robust to closed-loop system transients due to changes in control commands. Finally, a fault-tolerant observer (FTO) is designed to extend the two NPEs, which originally assume full state measurements, to systems with only partial state measurements. The proposed FTO is a neural state estimator that can estimate unmeasured states even in the presence of faults. The estimated and the measured states then comprise the inputs to the two proposed FDII schemes. Simulation results for FDII of the reaction wheels of a three-axis stabilized satellite in the presence of disturbances and noise demonstrate the effectiveness of the proposed FDII solutions under partial state measurements. Copyright © 2013 Elsevier Ltd. All rights reserved.
Estimating Hydraulic Parameters When Poroelastic Effects Are Significant
Berg, S.J.; Hsieh, P.A.; Illman, W.A.
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
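The traditional groundwater theory against which these poroelastic effects are compared commonly predicts drawdown with the Theis solution, s = Q/(4πT)·W(u). A minimal sketch of that baseline computation (the pumping-test values below are our own illustrative choices, not the study's data):

```python
import math

def well_function(u, terms=40):
    # Theis well function W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^{k+1} u^k / (k * k!)
    w = -0.5772156649 - math.log(u)
    for k in range(1, terms):
        w += (-1) ** (k + 1) * u ** k / (k * math.factorial(k))
    return w

def theis_drawdown(Q, T, S, r, t):
    # drawdown s = Q / (4*pi*T) * W(u), with u = r^2 * S / (4*T*t)
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# hypothetical pumping test: Q = 500 m3/d, T = 250 m2/d, S = 1e-4, r = 30 m, t = 0.5 d
s = theis_drawdown(500.0, 250.0, 1e-4, 30.0, 0.5)
```

Fitting T and S so that such predicted curves match observed drawdown is the type-curve/parameter-estimation step whose robustness to poroelastic distortions the study examines.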
Hydraulic parameters estimation from well logging resistivity and geoelectrical measurements
NASA Astrophysics Data System (ADS)
Perdomo, S.; Ainchil, J. E.; Kruse, E.
2014-06-01
In this paper, a methodology is suggested for deriving hydraulic parameters, such as hydraulic conductivity or transmissivity, by combining classical hydrogeological data with geophysical measurements. Values of transmissivity and conductivity estimated with this approach can reduce uncertainties in numerical model calibration and improve data coverage, reducing the time and cost of a hydrogeological investigation at a regional scale. The conventional estimation of hydrogeological parameters requires the analysis of well data or laboratory measurements. Furthermore, a regional survey must consider many wells, and the location of each one plays an important role in the interpretation stage. For this reason, the use of geoelectrical methods arises as an effective complementary technique, especially in developing countries where it is necessary to optimize resources. By combining hydraulic parameters from pumping tests and electrical resistivity from well logging profiles, it was possible to fit three empirical laws in a semi-confined alluvial aquifer in the northeast of the province of Buenos Aires (Argentina). These relations were also tested for use with surficial geoelectrical data. The hydraulic conductivity and transmissivity estimated in porous material (20 m/day; 457 m2/day) agreed with the expected values for the region and are very consistent with previous results from other authors (25 m/day and 500 m2/day). The methodology described could be used with similar data sets and applied to other areas with similar hydrogeological conditions.
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
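The gradient approach described above can be sketched for a hypothetical decay model y = a·exp(-b·t): gradient descent on the SSE recovers the parameters from synthetic data (the model, starting point, and learning rate are our own illustrative choices, not from the article):

```python
import math

# synthetic data from a hypothetical decay model y = a * exp(-b * t) with a=2.0, b=0.5
ts = [0.1 * i for i in range(50)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]

def gradient(a, b):
    # analytic gradient of SSE(a, b) = sum_t (a*exp(-b*t) - y_t)^2
    ga = gb = 0.0
    for t, y in zip(ts, ys):
        pred = a * math.exp(-b * t)
        resid = pred - y
        ga += 2.0 * resid * math.exp(-b * t)            # d SSE / d a
        gb += 2.0 * resid * (-a * t * math.exp(-b * t)) # d SSE / d b
    return ga, gb

# plain gradient descent from an initial guess (a, b) = (1, 1)
a, b, lr = 1.0, 1.0, 0.005
for _ in range(20000):
    ga, gb = gradient(a, b)
    a -= lr * ga
    b -= lr * gb
```

Stochastic perturbations or sampling, as discussed in the article, would be layered on top of exactly this kind of local descent to escape local minima.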
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
Rapid estimation of high-parameter auditory-filter shapes.
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M
2014-10-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials.
Recursive principal components analysis.
Voegtlin, Thomas
2005-10-01
A recurrent linear network can be trained with Oja's constrained Hebbian learning rule. As a result, the network learns to represent the temporal context associated with its input sequence. The operation performed by the network is a generalization of Principal Components Analysis (PCA) to time series, called Recursive PCA. The representations learned by the network are adapted to the temporal statistics of the input. Moreover, sequences stored in the network may be retrieved explicitly, in the reverse order of presentation, thus providing a straightforward neural implementation of a logical stack.
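Oja's constrained Hebbian rule, on which the network is built, can be sketched for plain (non-recursive) PCA on synthetic 2-D data: the weight vector converges to the leading principal direction. The data model and learning rate below are illustrative assumptions, not taken from the paper:

```python
import random

rng = random.Random(1)

# synthetic 2-D data whose dominant variance lies along the (1, 1) direction
def sample():
    s = rng.gauss(0.0, 3.0)   # strong component along (1, 1)
    n = rng.gauss(0.0, 0.3)   # weak component along (1, -1)
    return (s + n, s - n)

w = [0.5, -0.2]               # arbitrary initial weight vector
eta = 0.01
for _ in range(20000):
    x = sample()
    y = w[0] * x[0] + w[1] * x[1]          # neuron output (projection)
    # Oja's rule: Hebbian term minus a decay that keeps ||w|| near 1
    w[0] += eta * y * (x[0] - y * w[0])
    w[1] += eta * y * (x[1] - y * w[1])
```

After training, w points (up to sign) along the first principal component (1, 1)/√2 ≈ (0.707, 0.707); the recursive variant in the paper feeds the network's own delayed output back in as part of the input, extending this rule to temporal context.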
Orientational order parameter estimated from molecular polarizabilities - an optical study
NASA Astrophysics Data System (ADS)
Lalitha Kumari, J.; Datta Prasad, P. V.; Madhavi Latha, D.; Pisipati, V. G. K. M.
2012-01-01
An optical study of N-(p-n-alkyloxybenzylidene)-p-n-butyloxyanilines, nO.O4 compounds with the alkoxy chain number n = 1, 3, 6, 7, and 10, has been carried out by measuring the refractive indices using a modified spectrometer and by direct measurement of birefringence employing Newton's rings method. The molecular polarizability anisotropies are evaluated using the Lippincott δ-function model, the molecular vibration method, Haller's extrapolation method, and the scaling factor method. The molecular polarizabilities αe and α0 are calculated using Vuks' isotropic and Neugebauer's anisotropic local field models. The order parameter S is estimated by employing the molecular polarizability values determined from experimental refractive index and density data together with the polarizability anisotropy values. The order parameter S is also obtained directly from the birefringence data. A comparison has been carried out among the order parameters obtained in these different ways, and the results are compared with the body of data available in the literature.
Estimates of Running Ground Reaction Force Parameters from Motion Analysis.
Pavei, Gaspare; Seminati, Elena; Storniolo, Jorge L L; Peyré-Tartaruga, Leonardo A
2017-02-01
We compared running mechanics parameters determined from ground reaction force (GRF) measurements with estimated forces obtained by double differentiation of kinematic (K) data from motion analysis over a broad spectrum of running speeds (1.94-5.56 m·s⁻¹). Data were collected with a force-instrumented treadmill and compared at different sampling frequencies (900 and 300 Hz for GRF, 300 and 100 Hz for K). Vertical force peak, shape, and impulse were similar between the K methods and GRF. Contact time, flight time, and vertical stiffness (kvert) obtained from K showed the same trend as GRF with differences < 5%, whereas leg stiffness (kleg) was not correctly computed from kinematics. The results revealed that the main vertical GRF parameters can be computed by double differentiation of the body center of mass properly calculated by motion analysis. The present model provides an alternative, accessible method for determining temporal and kinetic parameters of running without an instrumented treadmill.
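The core computation, vertical GRF from double differentiation of the center-of-mass trajectory via F = m(g + a_z), can be sketched on a synthetic sinusoidal CoM dip. Mass, contact time, and amplitude below are hypothetical, not the study's data:

```python
import math

m = 70.0           # body mass in kg (hypothetical)
g = 9.81
dt = 1.0 / 300.0   # 300 Hz kinematic sampling

# synthetic CoM height during ground contact: a 3 cm sinusoidal dip (illustrative only)
T = 0.25           # contact time in s
z = [-0.03 * math.sin(math.pi * i * dt / T) for i in range(int(T / dt) + 1)]

# central-difference double differentiation of the CoM position
acc = [(z[i - 1] - 2 * z[i] + z[i + 1]) / dt ** 2 for i in range(1, len(z) - 1)]

# vertical GRF from Newton's second law: F = m * (g + a_z)
grf = [m * (g + a) for a in acc]
peak = max(grf)
```

For this analytic trajectory the peak acceleration is A(π/T)², so the numerical peak can be checked against m·(g + A(π/T)²), illustrating why sampling rate and smoothing matter when the input is real, noisy marker data.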
Accelerated Gravitational Wave Parameter Estimation with Reduced Order Modeling
NASA Astrophysics Data System (ADS)
Canizares, Priscilla; Field, Scott E.; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-01
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ˜30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ˜70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
Effects of parameter estimation on maximum-likelihood bootstrap analysis.
Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack
2010-08-01
Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data. Copyright 2010 Elsevier Inc. All rights reserved.
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
CosmoSIS: A system for MC parameter estimation
Bridle, S.; Dodelson, S.; Jennings, E.; ...
2015-12-23
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
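The Markov Chain Monte Carlo technique at the heart of such samplers can be sketched with a minimal Metropolis sampler for the mean of Gaussian data. This is a toy target of our own, not CosmoSIS code:

```python
import math, random

rng = random.Random(42)
data = [rng.gauss(5.0, 2.0) for _ in range(200)]   # synthetic observations

def log_like(mu):
    # Gaussian log-likelihood with known sigma = 2 (additive constants dropped)
    return -sum((x - mu) ** 2 for x in data) / (2.0 * 2.0 ** 2)

def metropolis(n_steps, step=0.5, start=0.0):
    mu, ll = start, log_like(start)
    chain = []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0.0, step)            # symmetric random-walk proposal
        ll_prop = log_like(prop)
        # flat prior, so accept on the likelihood ratio alone
        if math.log(rng.random()) < ll_prop - ll:
            mu, ll = prop, ll_prop
        chain.append(mu)
    return chain

chain = metropolis(5000)
post_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

A framework like CosmoSIS separates exactly these two pieces: the sampler (the loop above) and the pluggable module that returns the log-likelihood.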
CosmoSIS: A system for MC parameter estimation
Bridle, S.; Dodelson, S.; Jennings, E.; Kowalkowski, J.; Manzotti, A.; Paterno, M.; Rudd, D.; Sehrish, S.; Zuntz, J.
2015-12-23
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
Automatic estimation of elasticity parameters in breast tissue
NASA Astrophysics Data System (ADS)
Skerl, Katrin; Cochran, Sandy; Evans, Andrew
2014-03-01
Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity, and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, and the elasticity values were extracted. The ROI was then automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet that also contains the patient's study ID. This spreadsheet is readily available to physicians and clinical staff for further evaluation, which increases efficiency. The algorithm simplifies handling, especially for the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies the evaluation of data in clinical trials. Furthermore, reproducibility will be improved.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models describing hundreds or thousands of biochemical species and reactions are still missing. While individual simulations are feasible, the inference of model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
Estimation of longitudinal aircraft characteristics using parameter identification techniques
NASA Technical Reports Server (NTRS)
Wingrove, R. C.
1978-01-01
This study compares the results from different parameter identification methods used to determine longitudinal aircraft characteristics from flight data. In general, these comparisons found that the estimated short-period dynamics (natural frequency, damping, transfer functions) are only weakly affected by the type of identification method; however, the estimated aerodynamic coefficients may be strongly affected by it. The estimated values of the aerodynamic coefficients were found to depend upon the type of math model and the type of test data used with each of the identification methods. The use of fairly complete math models and of long data lengths, combining both steady and nonsteady motion, is shown to provide aerodynamic coefficient values that compare favorably with results from other testing methods such as steady-state flight and full-scale wind-tunnel experiments.
Framework for estimating tumour parameters using thermal imaging.
Umadevi, V; Raghavan, S V; Jaipurkar, Sandeep
2011-11-01
Non-invasive and non-ionizing medical imaging techniques are safe, as they can be used repeatedly on an individual and are applicable across all age groups. Breast thermography is a non-invasive and non-ionizing medical imaging technique that can potentially be used in breast cancer detection and diagnosis. In this study, we used breast thermography to estimate the tumour contour from the breast skin surface temperature. We proposed a framework called infrared thermography based image construction (ITBIC) to estimate tumour parameters such as size and depth from cancerous breast skin surface temperature data. A Markov Chain Monte Carlo method was used to enhance the accuracy of estimation so as to better reflect realistic situations. We validated our method experimentally using Watermelon and Agar models. For the Watermelon experiment, the errors in the estimated size and depth parameters were 1.5 and 3.8 per cent, respectively; for the Agar model they were 0 and 8 per cent. Further, thermal breast screening was performed on female volunteers and compared with magnetic resonance imaging. The results were positive and encouraging. ITBIC is a computationally fast and potentially affordable thermal imaging system. Such a system will be useful to doctors and radiologists for breast cancer diagnosis.
Framework for estimating tumour parameters using thermal imaging
Umadevi, V.; Raghavan, S.V.; Jaipurkar, Sandeep
2011-01-01
Background & objectives: Non-invasive and non-ionizing medical imaging techniques are safe, as they can be used repeatedly on an individual and are applicable across all age groups. Breast thermography is a non-invasive and non-ionizing medical imaging technique that can potentially be used in breast cancer detection and diagnosis. In this study, we used breast thermography to estimate the tumour contour from the breast skin surface temperature. Methods: We proposed a framework called infrared thermography based image construction (ITBIC) to estimate tumour parameters such as size and depth from cancerous breast skin surface temperature data. A Markov Chain Monte Carlo method was used to enhance the accuracy of estimation so as to better reflect realistic situations. Results: We validated our method experimentally using Watermelon and Agar models. For the Watermelon experiment, the errors in the estimated size and depth parameters were 1.5 and 3.8 per cent, respectively; for the Agar model they were 0 and 8 per cent. Further, thermal breast screening was performed on female volunteers and compared with magnetic resonance imaging. The results were positive and encouraging. Interpretation & conclusions: ITBIC is a computationally fast and potentially affordable thermal imaging system. Such a system will be useful to doctors and radiologists for breast cancer diagnosis. PMID:22199114
Estimating demographic parameters using hidden process dynamic models.
Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger
2012-12-01
Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. Copyright © 2012 Elsevier Inc. All rights reserved.
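The first ingredient of such hidden process models, the hidden Markov model likelihood, can be sketched with the forward algorithm on a toy capture-recapture-style example. All probabilities below are hypothetical:

```python
import math

# two hidden states (e.g. 0 = alive, 1 = dead, with "dead" absorbing); hypothetical values
states = 2
init = [0.9, 0.1]                    # initial state distribution
trans = [[0.8, 0.2], [0.0, 1.0]]     # transition probabilities, row = current state
emit = [[0.7, 0.3], [0.05, 0.95]]    # P(observation | state): 0 = detected, 1 = not detected

def forward_loglike(obs):
    # forward algorithm: alpha[s] = P(obs so far, hidden state = s)
    alpha = [init[s] * emit[s][obs[0]] for s in range(states)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(states)) * emit[s][o]
                 for s in range(states)]
    return math.log(sum(alpha))

obs = [0, 0, 1, 1]                   # detected twice, then missed twice
ll = forward_loglike(obs)
```

Maximizing this likelihood over the transition and emission probabilities (by Frequentist optimization or MCMC, as in the two case studies) yields demographic parameters that account simultaneously for state uncertainty and imperfect detection.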
Recursive Objects--An Object Oriented Presentation of Recursion
ERIC Educational Resources Information Center
Sher, David B.
2004-01-01
Generally, when recursion is introduced to students the concept is illustrated with a toy (Towers of Hanoi) and some abstract mathematical functions (factorial, power, Fibonacci). These illustrate recursion in the same sense that counting to 10 can be used to illustrate a for loop. These are all good illustrations, but do not represent serious…
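An object-oriented presentation of recursion in the spirit described, where the recursion mirrors the structure of the object itself rather than an abstract formula, might look like a linked list whose methods recurse on the tail. This minimal example is our own, not taken from the article:

```python
class Node:
    """A list is either empty (None) or a Node whose tail is itself a list."""

    def __init__(self, value, tail=None):
        self.value = value
        self.tail = tail

    def length(self):
        # the recursive call follows the object's own recursive structure
        return 1 + (self.tail.length() if self.tail else 0)

    def total(self):
        return self.value + (self.tail.total() if self.tail else 0)

nums = Node(3, Node(1, Node(4)))
```

Here the base case (an empty tail) and the recursive case are properties of the data, which arguably motivates recursion better than counting down an integer.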
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
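Model averaging with data-driven model probabilities, as used here, can be sketched via information-criterion weights: each model's criterion value is converted to a relative probability, models with negligible weight drop out, and predictions are combined. All criterion values and predictions below are hypothetical:

```python
import math

# hypothetical information-criterion values for four alternative variogram models
criteria = {"exponential": 102.3, "spherical": 104.1, "gaussian": 110.8, "power": 121.5}

# convert criterion differences to weights: w_m proportional to exp(-delta_m / 2)
best = min(criteria.values())
raw = {m: math.exp(-0.5 * (c - best)) for m, c in criteria.items()}
z = sum(raw.values())
weights = {m: r / z for m, r in raw.items()}   # posterior-like model probabilities

# model-averaged prediction from each model's kriged estimate (hypothetical values)
preds = {"exponential": -13.2, "spherical": -13.6, "gaussian": -12.9, "power": -14.0}
avg = sum(weights[m] * preds[m] for m in criteria)
```

Models such as "power" above receive negligible weight and can be eliminated, while the rest contribute to the averaged projection in proportion to their support from the data.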
ERIC Educational Resources Information Center
Karkee, Thakur B.; Wright, Karen R.
2004-01-01
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark
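The scatter-search idea described in this abstract (a diversified set of candidate parameter vectors, the best of which are locally refined) can be illustrated with a minimal sketch. The toy logistic model, the box bounds, and the pattern-search refinement below are all invented for illustration; they are not the authors' actual method or benchmark problems.

```python
import random

def simulate(k, cap, x0=0.1, dt=0.1, n=50):
    # Forward-Euler integration of logistic growth dx/dt = k*x*(1 - x/cap)
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x += dt * k * x * (1 - x / cap)
    return xs

TRUE_K, TRUE_CAP = 0.8, 2.0          # "unknown" parameters to recover
DATA = simulate(TRUE_K, TRUE_CAP)    # noise-free synthetic observations

def cost(theta):
    k, cap = theta
    if k <= 0 or cap <= 0:
        return float("inf")
    return sum((a - b) ** 2 for a, b in zip(simulate(k, cap), DATA))

def local_refine(theta, step=0.2, iters=100):
    # Derivative-free pattern search: probe +/- step per coordinate,
    # halve the step when a full sweep brings no improvement
    theta, best = list(theta), cost(theta)
    for _ in range(iters):
        improved = False
        for i in range(2):
            for d in (step, -step):
                trial = theta[:]
                trial[i] += d
                c = cost(trial)
                if c < best:
                    theta, best, improved = trial, c, True
        if not improved:
            step *= 0.5
    return theta, best

def scatter_search(n_init=20, n_refine=5, seed=1):
    rng = random.Random(seed)
    # Diversification: spread candidate points over a broad box
    pool = [(rng.uniform(0.1, 3.0), rng.uniform(0.5, 5.0)) for _ in range(n_init)]
    pool.sort(key=cost)
    # Intensification: locally improve the best members of the reference set
    return min((local_refine(t) for t in pool[:n_refine]), key=lambda r: r[1])

theta_hat, sse = scatter_search()
```

Real scatter search also recombines reference-set members; this sketch keeps only the diversification/intensification skeleton.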
Basin structure of optimization based state and parameter estimation.
Schumann-Bischoff, Jan; Parlitz, Ulrich; Abarbanel, Henry D I; Kostuk, Mark; Rey, Daniel; Eldridge, Michael; Luther, Stefan
2015-05-01
Most data based state and parameter estimation methods require suitable initial values or guesses to achieve convergence to the desired solution, which typically is a global minimum of some cost function. Unfortunately, however, other stable solutions (e.g., local minima) may exist and provide suboptimal or even wrong estimates. Here, we demonstrate for a 9-dimensional Lorenz-96 model how to characterize the basin size of the global minimum when applying some particular optimization based estimation algorithm. We compare three different strategies for generating suitable initial guesses, and we investigate the dependence of the solution on the given trajectory segment (underlying the measured time series). To address the question of how many state variables have to be measured for optimal performance, different types of multivariate time series are considered consisting of 1, 2, or 3 variables. Based on these time series, the local observability of state variables and parameters of the Lorenz-96 model is investigated and confirmed using delay coordinates. This result is in good agreement with the observation that correct state and parameter estimation results are obtained if the optimization algorithm is initialized with initial guesses close to the true solution. In contrast, initialization with other exact solutions of the model equations (different from the true solution used to generate the time series) typically fails, i.e., the optimization procedure ends up in local minima different from the true solution. Initialization using random values in a box around the attractor exhibits success rates depending on the number of observables and the available time series (trajectory segment).
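The basin-size idea in this abstract (initialize a local estimation procedure from random points in a box and count how often it reaches the global minimum) can be sketched on a much smaller problem than Lorenz-96. The one-parameter sinusoid model and the descent routine below are illustrative stand-ins, not the paper's 9-dimensional setup.

```python
import math
import random

T = [0.05 * i for i in range(150)]
OMEGA_TRUE = 3.0
DATA = [math.sin(OMEGA_TRUE * t) for t in T]

def cost(omega):
    # Least-squares misfit; strongly multimodal in omega away from 3.0
    return sum((math.sin(omega * t) - y) ** 2 for t, y in zip(T, DATA))

def refine(omega, step=0.1, iters=80):
    # Derivative-free local descent from an initial guess
    best = cost(omega)
    for _ in range(iters):
        moved = False
        for d in (step, -step):
            c = cost(omega + d)
            if c < best:
                omega, best, moved = omega + d, c, True
                break
        if not moved:
            step *= 0.5
    return omega

def basin_success_rate(n=100, box=(0.5, 6.0), tol=0.05, seed=0):
    # Fraction of random initial guesses that descend to the global minimum:
    # a Monte Carlo estimate of the relative basin size, as in the abstract
    rng = random.Random(seed)
    hits = sum(abs(refine(rng.uniform(*box)) - OMEGA_TRUE) < tol
               for _ in range(n))
    return hits / n

rate = basin_success_rate()
```

Most random starts get trapped in local minima, so the success rate is strictly between 0 and 1, which is exactly the basin-size fraction the paper characterizes.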
Estimating unknown parameters in haemophilia using expert judgement elicitation.
Fischer, K; Lewandowski, D; Janssen, M P
2013-09-01
The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life-expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients aged ≤18 years, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design.
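The abstract combines each expert's (P10, median, P90) triplet with a graphical method. One simple numerical pooling rule with the same inputs is Vincent (quantile) averaging, sketched below; the expert triplets are hypothetical and not taken from the study.

```python
import statistics

# Hypothetical elicited (P10, median, P90) annual joint-bleed counts,
# one triplet per expert (numbers invented for illustration)
experts = [
    (2.0, 12.0, 30.0),
    (1.0, 10.0, 40.0),
    (3.0, 14.0, 36.0),
    (0.9, 11.0, 61.0),
]

def vincent_pool(quantile_sets):
    # Vincent averaging: pool experts by averaging each quantile separately,
    # yielding a combined (P10, median, P90) summary
    return tuple(statistics.mean(q[i] for q in quantile_sets)
                 for i in range(3))

p10, med, p90 = vincent_pool(experts)
```

Quantile averaging preserves the ordering P10 ≤ median ≤ P90 of the pooled summary whenever each expert's triplet is ordered.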
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo simulation, Delft3D, uncertainty analysis, Bayesian techniques
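The Monte Carlo propagation step described above (sample uncertain inputs from their distributions, run the deterministic model for each draw, summarize the output spread) can be sketched with a toy stand-in for Delft3D. The attenuation formula and all input distributions below are invented for illustration and carry no Delft3D physics.

```python
import math
import random
import statistics

def wave_model(offshore_height, depth, friction):
    # Toy nearshore model: shoaling-like depth factor times a simple
    # frictional loss term (illustrative only, not Delft3D physics)
    return offshore_height * (depth / (depth + 1.0)) ** 0.5 * math.exp(-friction)

def monte_carlo(n=5000, seed=42):
    # Sample the uncertain inputs, run the deterministic model per draw,
    # and summarize the resulting output distribution
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        h0 = rng.gauss(2.0, 0.2)     # offshore wave height [m], uncertain
        d = rng.uniform(4.0, 6.0)    # local depth [m], uncertain bathymetry
        cf = rng.gauss(0.1, 0.02)    # friction factor, uncertain
        outputs.append(wave_model(h0, d, cf))
    return statistics.mean(outputs), statistics.stdev(outputs)

mean_h, std_h = monte_carlo()
```

In the Bayesian workflow of the abstract, the input distributions would be posteriors calibrated to the Duck94 observations rather than the assumed priors used here.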
Estimation of genetic parameters for reproductive traits in Shall sheep.
Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein
2013-06-01
The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on reproductive performances of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in northwestern Iran. Studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Test of significance to include fixed effects in the statistical model was performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P<0.05). Genetic parameters were estimated using the restricted maximum likelihood procedure, under repeatability animal models. Direct heritability estimates were 0.02, 0.01, 0.47, 0.40, 0.15, and 0.03 for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, respectively, and corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW, and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. Results showed that the highest heritability estimates were for LMWLB and LMWLW, suggesting that direct selection based on these traits could be effective. Also, strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep.
Goodin, Douglas S.; Jones, Jason; Li, David; Traboulsee, Anthony; Reder, Anthony T.; Beckmann, Karola; Konieczny, Andreas; Knappertz, Volker
2011-01-01
Context Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and, thus, are often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome-measures in randomized trials, its impact on long-term function remains uncertain. Objective Data from the 16-year Long-Term Follow-up study of interferon-beta-1b is used to assess the relationship between drug-exposure and long-term disability in MS patients. Design/Setting To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug-exposure was measured as the medication-possession-ratio, adjusted up or down according to multiple different weighting-schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (under any weighting-scheme) affected long-term outcome. The optimal cut-point that was used to define “high” or “low” exposure-groups was chosen by the algorithm. Subsequent to verification of an exposure-impact that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis in order to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity-analyses were undertaken using different definitions of long-term outcome and different assumptions about the data. Main Outcome Measure Long-Term Disability. Results In these analyses, the same weighting-scheme was consistently selected by the recursive-partitioning algorithm. This scheme reduced (down-weighted) the effectiveness of drug exposure as either disease duration or disability at treatment-onset increased. Applying this scheme and using propensity-stratification to further mitigate bias, high-exposure had a consistently better clinical outcome compared to low-exposure (Cox proportional hazard ratio = 0.30
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Recursive Feature Extraction in Graphs
2014-08-14
ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
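The recursion described above can be sketched as follows: start from a base feature (here just the degree) and repeatedly append, for each node, the mean of its neighbors' current feature vectors. This is a simplified ReFeX-style scheme; the real tool also uses egonet counts, sum aggregation, and feature pruning.

```python
from collections import defaultdict

def refex_features(edges, rounds=2):
    # Base feature: node degree. Recursive features: per round, append the
    # mean of neighbors' feature vectors from the previous round.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    feats = {n: [len(adj[n])] for n in adj}
    for _ in range(rounds):
        new = {}
        for n in adj:
            k = len(feats[n])
            agg = [sum(feats[m][i] for m in adj[n]) / len(adj[n])
                   for i in range(k)]
            new[n] = feats[n] + agg    # feature vector doubles each round
        feats = new
    return feats

# Tiny example graph: a triangle a-b-c with a pendant node d on c
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
f = refex_features(edges)
```

After two rounds each node carries 4 features: its degree, its mean neighbor degree, and the neighbor means of both.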
Plasma parameter estimation from multistatic, multibeam incoherent scatter data
NASA Astrophysics Data System (ADS)
Virtanen, I. I.; McKay-Bukowski, D.; Vierinen, J.; Aikio, A.; Fallows, R.; Roininen, L.
2014-12-01
Multistatic incoherent scatter radars are superior to monostatic facilities in the sense that multistatic systems can measure plasma parameters from multiple directions in volumes limited by beam dimensions and measurement range resolution. We propose a new incoherent scatter analysis technique that uses data from all receiver beams of a multistatic, multibeam radar system and produces, in addition to the plasma parameters typically measured with monostatic radars, estimates of ion velocity vectors and ion temperature anisotropies. Because the total scattered energy collected with remote receivers of a modern multistatic, multibeam radar system may even exceed the energy collected with the core transmit-and-receive site, the remote data improve the accuracy of all plasma parameter estimates, including those that could be measured with the core site alone. We apply the new multistatic analysis method for data measured by the tristatic European Incoherent Scatter VHF radar and the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA) multibeam receiver and show that a significant improvement in accuracy is obtained by adding KAIRA data in the multistatic analysis. We also demonstrate the development of a pronounced ion temperature anisotropy during high-speed ionospheric plasma flows in substorm conditions.
Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
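The scaling idea in this abstract (inflation pressure set by dynamic pressure, gas mass following from gas properties) can be illustrated with a minimal sketch. The ideal-gas relation m = pVM/(RT) and all numbers below are assumed for illustration; they are not the paper's actual correlations.

```python
R_UNIVERSAL = 8.314  # universal gas constant [J/(mol K)]

def scaled_inflation_pressure(pressure_ratio, q_dyn):
    # Dimensionless scaling: inflation pressure proportional to the
    # freestream dynamic pressure (pressure_ratio is the assumed constant)
    return pressure_ratio * q_dyn

def inflation_gas_mass(p_inflation, volume, molar_mass, temperature):
    # Ideal-gas estimate of the inflation gas mass: m = p V M / (R T),
    # showing how gas molecular mass enters the scaling
    return p_inflation * volume * molar_mass / (R_UNIVERSAL * temperature)

# Illustrative tension-cone-like case (all values assumed, not from the paper)
q = 2000.0                                       # dynamic pressure [Pa]
p_inf = scaled_inflation_pressure(5.0, q)        # assumed pressure ratio of 5
m = inflation_gas_mass(p_inf, 3.0, 0.028, 250.0) # N2 gas, 3 m^3, 250 K
```

Because the relation is linear in p, V, and M and inverse in T, the dimensionless form lets the same estimate be rescaled across diameters and entry conditions.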
Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors
Ojeda, Manoela; Ding, Dan
2014-01-01
Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, on the wrist, and under the wheelchair seat, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion. PMID:25105133
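One simple way stroke number might be extracted from an acceleration trace is threshold-crossing detection with a refractory gap, sketched below on a synthetic signal. The abstract does not specify the authors' actual detection algorithm; this is an assumed stand-in.

```python
import math

def count_strokes(signal, threshold=1.0, min_gap=10):
    # Count pushes as upward threshold crossings separated by at least
    # min_gap samples (the refractory gap suppresses double counts)
    strokes, last = 0, -min_gap
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i] and i - last >= min_gap:
            strokes += 1
            last = i
    return strokes

# Synthetic "upper-arm acceleration": 10 push cycles plus a small ripple
sig = [1.5 * math.sin(2 * math.pi * i / 20) + 0.1 * math.sin(2.7 * i)
       for i in range(200)]
n_strokes = count_strokes(sig)
```

Push frequency would then follow as stroke count divided by the trial duration.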
ERIC Educational Resources Information Center
Gao, Furong; Chen, Lisue
2005-01-01
Through a large-scale simulation study, this article compares item parameter estimates obtained by the marginal maximum likelihood estimation (MMLE) and marginal Bayes modal estimation (MBME) procedures in the 3-parameter logistic model. The impact of different prior specifications on the MBME estimates is also investigated using carefully…
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
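The core of the equation-error approach is ordinary least squares: regress a measured state derivative on the measured states and controls to recover the stability and control derivatives. Below is a minimal sketch on an assumed linear short-period pitch model with invented coefficient values, not the F-16 simulation of the paper.

```python
import random

def lstsq(X, y):
    # Solve the normal equations (X^T X) beta = X^T y by Gaussian
    # elimination with partial pivoting (p = number of regressors)
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

# Assumed pitch model: qdot = M_alpha*alpha + M_q*q + M_de*de (+ noise)
rng = random.Random(7)
M_ALPHA, M_Q, M_DE = -4.0, -1.5, -6.0
rows, qdots = [], []
for _ in range(300):
    alpha = rng.uniform(-0.2, 0.2)   # angle of attack [rad]
    q = rng.uniform(-0.5, 0.5)       # pitch rate [rad/s]
    de = rng.uniform(-0.1, 0.1)      # elevator deflection [rad]
    rows.append([alpha, q, de])
    qdots.append(M_ALPHA * alpha + M_Q * q + M_DE * de + rng.gauss(0.0, 0.01))

est = lstsq(rows, qdots)
```

In practice the left-hand side qdot comes from differentiating a noisy measured time series, which is exactly the bias source the paper studies.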
Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control
NASA Astrophysics Data System (ADS)
Eshak, Peter B.
Research efforts have increased in recent years toward the development of intelligent fault tolerant control laws, which are capable of helping the pilot to safely maintain aircraft control at post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks (NNs) to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online parameter identification (PID) to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws; this controller is known as the hybrid adaptive controller. This last design outperformed the two earlier ones in terms of reduced NN effort and better tracking quality. The performance of online PID plays an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, PID is not perfect, and the online estimation process has some inherent issues: the online PID estimates are primarily affected by delays and biases. To ensure that reliable estimates are passed to the controller, the estimator consumes some time to converge. Moreover, the estimator will often converge to a biased value. This thesis conducts a sensitivity analysis for the estimation issues, delay and bias, and their effect on the tracking quality. In addition, the performance of the hybrid controller as compared to the direct adaptive controller is explored. To serve this purpose, a simulation environment in MATLAB/SIMULINK has been created. The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to
Error estimation and adaptivity for transport problems with uncertain parameters
NASA Astrophysics Data System (ADS)
Sahni, Onkar; Li, Jason; Oberai, Assad
2016-11-01
Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and a generalized polynomial chaos (gPC) spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We will also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).
Earth-moon system: Dynamics and parameter estimation
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1975-01-01
A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.
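The Euler-parameter (unit quaternion) modeling of rotational motion mentioned above can be sketched with a minimal kinematic integrator: for body rate omega, the attitude quaternion obeys qdot = ½ q ⊗ (0, omega). The constant-spin test case below is illustrative and unrelated to the report's lunar data.

```python
import math

def quat_mult(q, r):
    # Hamilton product of two quaternions (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def integrate_euler_parameters(omega, t_end, dt=1e-4):
    # Forward-Euler integration of qdot = 0.5 * q ⊗ (0, omega) for a
    # constant body rate, renormalizing to stay on the unit sphere
    q = (1.0, 0.0, 0.0, 0.0)
    for _ in range(int(round(t_end / dt))):
        dq = quat_mult(q, (0.0, *omega))
        q = tuple(a + 0.5 * dt * b for a, b in zip(q, dq))
        n = math.sqrt(sum(c * c for c in q))
        q = tuple(c / n for c in q)
    return q

# Constant spin about z at 0.2 rad/s for 10 s -> total rotation angle 2 rad,
# so the exact result is q = (cos(1), 0, 0, sin(1))
q = integrate_euler_parameters((0.0, 0.0, 0.2), 10.0)
```

Unlike Euler angles, the four Euler parameters have no kinematic singularity, which is why they suit fully coupled long-arc integrations like those in the report.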
Acoustical estimation of parameters of porous road pavement
NASA Astrophysics Data System (ADS)
Valyaev, V. Yu.; Shanin, A. V.
2012-11-01
In the simplest case, porous road pavement of a known thickness is described by such parameters as porosity, tortuosity, and flow resistance. The problem of estimating these parameters is investigated in this paper. An acoustic signal reflected by the pavement is used for this. It is shown that this problem can be solved by an experiment conducted in the time domain (i.e., the pulse response of the medium is recorded). The incident sound wave is directed at a grazing angle to the interface between the pavement and the air to improve penetration into the porous medium. The procedure for computing the pulse response using the Morse-Ingard model is described in detail.
Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation
NASA Astrophysics Data System (ADS)
Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan
2016-08-01
The aim of this study is to derive temperature, pressure, and humidity observations using spherical harmonics modelling and to interpolate them for the derivation of precipitable water vapor (PWV) at TUSAGA-Active stations in the test area encompassing 38.0°-42.0° northern latitudes and 28.0°-34.0° eastern longitudes of Turkey. In conclusion, the meteorological parameters computed by using GNSS observations for the study area have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure, and ±14.88% in humidity. Considering studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).
Transport parameter estimation from lymph measurements and the Patlak equation.
Watson, P D; Wolf, M B
1992-01-01
Two methods of estimating protein transport parameters from plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares estimates of the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and its speed and convenience are compared with those of a commercially available gradient method. The results from both of these methods differed from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]. It is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role in transmembrane transport at the exit end of a membrane channel under all conditions of lymph flow rate, and that the statement that diffusion becomes zero at high lymph flow rate depends on a mathematical definition of diffusion.
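A matrix (grid) search of the kind described above can be sketched as follows, using the steady-state Patlak relation for the lymph-to-plasma concentration ratio, C_L/C_P = (1-σ)/(1 - σ·exp(-Pe)) with Péclet number Pe = J_v(1-σ)/PS. The grid ranges and the "data" below are synthetic, generated from assumed true parameters rather than taken from the paper.

```python
import math

def patlak_ratio(jv, sigma, ps):
    # Steady-state Patlak lymph-to-plasma ratio; jv is lymph flow per area,
    # sigma the reflection coefficient, ps the permeability-surface product
    pe = jv * (1 - sigma) / ps
    return (1 - sigma) / (1 - sigma * math.exp(-pe))

# Synthetic observations from assumed "true" parameters
SIGMA_TRUE, PS_TRUE = 0.8, 0.05
JV = [0.01 * i for i in range(1, 21)]
OBS = [patlak_ratio(j, SIGMA_TRUE, PS_TRUE) for j in JV]

def matrix_search():
    # Exhaustive least-squares search over a (sigma, ps) grid
    best = (float("inf"), None, None)
    for i in range(1, 99):
        sigma = i / 100
        for j in range(1, 200):
            ps = j / 1000
            sse = sum((patlak_ratio(jv, sigma, ps) - y) ** 2
                      for jv, y in zip(JV, OBS))
            if sse < best[0]:
                best = (sse, sigma, ps)
    return best

sse, sigma_hat, ps_hat = matrix_search()
```

A grid search trades speed for robustness: unlike a gradient method it cannot diverge, which matches the convenience comparison the paper makes.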
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
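A fixed-effects Gompertz fit of the kind underlying the abstract can be sketched by profiling the asymptote: with the mature weight wm fixed, z = ln(-ln(W/wm)) is linear in age, so the rate and inflection parameters follow from simple regression. The parameter values and data below are illustrative (noise-free), and this sketch omits the random bird effects that are the paper's actual contribution.

```python
import math

def gompertz(t, wm, b, ti):
    # Gompertz growth: asymptotic weight wm, rate b, inflection age ti
    return wm * math.exp(-math.exp(-b * (t - ti)))

TRUE = (4.2, 0.06, 30.0)            # illustrative values (kg, 1/day, day)
DAYS = list(range(7, 71, 7))
DATA = [gompertz(t, *TRUE) for t in DAYS]

def fit_given_wm(wm):
    # With the asymptote fixed, z = ln(-ln(W/wm)) = b*ti - b*t is linear in t
    z = [math.log(-math.log(w / wm)) for w in DATA]
    n = len(DAYS)
    tbar, zbar = sum(DAYS) / n, sum(z) / n
    slope = (sum((t - tbar) * (v - zbar) for t, v in zip(DAYS, z))
             / sum((t - tbar) ** 2 for t in DAYS))
    b = -slope
    ti = (zbar - slope * tbar) / b
    return b, ti

def fit():
    # Profile wm over a grid (it must exceed the observed maximum weight)
    best, wmax = None, max(DATA)
    for k in range(1, 400):
        wm = wmax + 0.01 * k
        b, ti = fit_given_wm(wm)
        err = sum((gompertz(t, wm, b, ti) - w) ** 2
                  for t, w in zip(DAYS, DATA))
        if best is None or err < best[0]:
            best = (err, wm, b, ti)
    return best

err, wm_hat, b_hat, ti_hat = fit()
```

A mixed-model fit would additionally let wm (and possibly b, ti) vary by bird around these population values, partitioning between- and within-bird variation as described in the abstract.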
Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation based on indicial functions separated the resulting aerodynamic forces and moments into static terms, purely rotary terms, and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.
Estimating Regression Parameters in an Extended Proportional Odds Model
Chen, Ying Qing; Hu, Nan; Cheng, Su-Chun; Musoke, Philippa; Zhao, Lue Ping
2012-01-01
The proportional odds model may serve as a useful alternative to the Cox proportional hazards model to study association between covariates and their survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates the so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed to estimate the extended model. Our methods are demonstrated by Monte-Carlo simulations, and applied to a landmark randomized clinical trial of a short course Nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type-1 (HIV-1). Additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial. PMID:22904583
Confidence Region Estimation for Groundwater Parameter Identification Problems
NASA Astrophysics Data System (ADS)
Vugrin, K. W.; Swiler, L. P.; Roberts, R. M.
2007-12-01
This presentation focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a log-likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. All three methods produce similar and reasonable confidence regions for the case study using synthetic data. The linear approximation method grossly overestimates the confidence region for the first groundwater parameter identification case study. The F-test and log-likelihood methods result in similar reasonable regions for this test case. For the second groundwater parameter identification case study, the linear approximation method produces a confidence region of reasonable size. In this test case, the F-test and log-likelihood methods generate disjoint confidence regions of reasonable size. The differing results, capabilities, and drawbacks of all three methods are discussed. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.
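The F-test confidence region mentioned above is the set of parameter vectors whose residual sum of squares satisfies SSE(θ) ≤ SSE(θ̂)·(1 + p/(n-p)·F), where F is the critical F-value with (p, n-p) degrees of freedom. A minimal sketch on a synthetic linear problem (the data, noise level, and tabulated F-value are all assumed, not the presentation's groundwater cases):

```python
import random

# Synthetic linear data y = a + b*x + noise
rng = random.Random(3)
A_TRUE, B_TRUE = 1.0, 2.0
xs = [0.1 * i for i in range(20)]
ys = [A_TRUE + B_TRUE * x + rng.gauss(0.0, 0.1) for x in xs]
n, p = len(xs), 2

def sse(a, b):
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

# Ordinary least-squares point estimate
xbar, ybar = sum(xs) / n, sum(ys) / n
b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
a_hat = ybar - b_hat * xbar

# F-test region: { (a, b) : SSE(a, b) <= SSE_min * (1 + p/(n-p) * F_crit) }
F_CRIT = 3.555   # tabulated F(0.95; 2, 18); assumed value
threshold = sse(a_hat, b_hat) * (1 + p / (n - p) * F_CRIT)

def in_region(a, b):
    return sse(a, b) <= threshold
```

Unlike the linear approximation (an ellipse from the local curvature), this region follows the true SSE contour, which is why it can become banana-shaped or even disjoint in nonlinear problems, as the presentation reports.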
Evolution-Free Hamiltonian Parameter Estimation through Zeeman Markers
NASA Astrophysics Data System (ADS)
Burgarth, Daniel; Ajoy, Ashok
2017-07-01
We provide a protocol for Hamiltonian parameter estimation which relies only on the Zeeman effect. No time-dependent quantities need to be measured; it fully suffices to observe spectral shifts induced by fields applied to local "markers." We demonstrate the idea with a simple tight-binding Hamiltonian and numerically show stability with respect to Gaussian noise on the spectral measurements. Then we generalize the result to show applicability to a wide range of systems, including quantum spin chains, networks of qubits, and coupled harmonic oscillators, and suggest potential experimental implementations.
Evolution-Free Hamiltonian Parameter Estimation through Zeeman Markers.
Burgarth, Daniel; Ajoy, Ashok
2017-07-21
We provide a protocol for Hamiltonian parameter estimation which relies only on the Zeeman effect. No time-dependent quantities need to be measured; it fully suffices to observe spectral shifts induced by fields applied to local "markers." We demonstrate the idea with a simple tight-binding Hamiltonian and numerically show stability with respect to Gaussian noise on the spectral measurements. Then we generalize the result to show applicability to a wide range of systems, including quantum spin chains, networks of qubits, and coupled harmonic oscillators, and suggest potential experimental implementations.
Estimation of Modal Parameters Using a Wavelet-Based Approach
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Haley, Sidney M.
1997-01-01
Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.
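The matching-pursuit machinery with Haley and Laplace wavelets is specific to the paper; as a simpler, hedged illustration of extracting modal frequency and damping from response data, one can fit the log envelope of a synthetic single-mode free decay (all values below are hypothetical).

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic free-decay response of a single mode:
# x(t) = exp(-zeta * wn * t) * cos(wd * t)
fs = 1000.0
t = np.arange(0.0, 5.0, 1.0 / fs)
fn, zeta = 8.0, 0.02
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta ** 2)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

# Damped frequency from the FFT peak
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_est = freqs[np.argmax(spec)]

# Damping from a straight-line fit to the log of the analytic-signal envelope
env = np.abs(hilbert(x))
mask = (t > 0.2) & (t < 4.0)   # avoid end effects of the Hilbert transform
slope, _ = np.polyfit(t[mask], np.log(env[mask]), 1)
zeta_est = -slope / wn
```

Time-varying damping, the actual subject of the paper, needs time-localized atoms (wavelets) rather than this single global fit.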
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof
2013-07-15
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions by means of truncations depend not only on the choice of the retained information but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.
Parameter estimation using NOON states over a relativistic quantum channel
NASA Astrophysics Data System (ADS)
Hosler, Dominic; Kok, Pieter
2013-11-01
We study the effect of the acceleration of the observer on a parameter estimation protocol using NOON states. An inertial observer, Alice, prepares a NOON state in Unruh modes of the quantum field and sends it to an accelerated observer, Rob. We calculate the quantum Fisher information of the state received by Rob. We find the counterintuitive result that the single-rail encoding outperforms the dual-rail encoding. The NOON states have an optimal N for the maximum information extractable by Rob, given his acceleration. This optimal N decreases with increasing acceleration.
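The headline quantity here, the quantum Fisher information, is easy to verify numerically for an idealized non-accelerated NOON state, where it equals N squared; the acceleration effects studied in the paper are not modeled in this sketch.

```python
import numpy as np

def noon_qfi(N, phi=0.3, eps=1e-6):
    """QFI of a NOON state under phase rotation, computed numerically.

    The state lives in the 2-D subspace spanned by |N,0> and |0,N>:
        |psi(phi)> = (|N,0> + exp(i*N*phi) |0,N>) / sqrt(2)
    QFI = 4 * (<dpsi|dpsi> - |<psi|dpsi>|^2), derivative by central differences.
    """
    def psi(p):
        return np.array([1.0, np.exp(1j * N * p)]) / np.sqrt(2.0)

    dpsi = (psi(phi + eps) - psi(phi - eps)) / (2.0 * eps)
    return 4.0 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi(phi), dpsi)) ** 2)
```

noon_qfi(N) returns N**2 up to finite-difference error, i.e. Heisenberg scaling; the paper's point is how Rob's acceleration degrades this and creates an optimal finite N.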
Hybrid optimization method with general switching strategy for parameter estimation.
Balsa-Canto, Eva; Peifer, Martin; Banga, Julio R; Timmer, Jens; Fleck, Christian
2008-03-24
Modeling and simulation of cellular signaling and metabolic pathways as networks of biochemical reactions yields sets of non-linear ordinary differential equations. These models usually depend on several parameters and initial conditions. If these parameters are unknown, results from simulation studies can be misleading. Such a scenario can be avoided by fitting the model to experimental data before analyzing the system. This involves parameter estimation, which is usually performed by minimizing a cost function that quantifies the difference between model predictions and measurements. Mathematically, this is formulated as a non-linear optimization problem which often turns out to be multi-modal (non-convex), rendering local optimization methods unreliable. In this work we propose a new hybrid global method, based on the combination of an evolutionary search strategy with a local multiple-shooting approach, which offers a reliable and efficient alternative for the solution of large-scale parameter estimation problems. The presented hybrid strategy offers two main advantages over previous approaches. First, it is equipped with a switching strategy that allows the systematic determination of the transition from local to global search, avoiding computationally expensive tests in advance. Second, using multiple shooting as the local search procedure significantly reduces the multi-modality of the non-linear optimization problem. Because multiple shooting avoids possible spurious solutions in the vicinity of the global optimum, it often outperforms the frequently used initial value approach (single shooting) and yields an enhanced robustness of the hybrid approach.
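The multiple-shooting idea, independent of the paper's evolutionary layer, can be sketched on a toy problem: each segment carries its own initial state, and continuity defects between segments join the data-misfit residuals. The model and values below are hypothetical; the decay ODE is integrated analytically so the sketch stays self-contained.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from dx/dt = -k*x, x(0) = 2, true k = 1.3
t = np.linspace(0.0, 3.0, 31)
x_data = 2.0 * np.exp(-1.3 * t)

def residuals(z):
    """Multiple shooting with two segments: unknowns (k, s0, s1),
    where s_i is the initial state of segment i."""
    k, s0, s1 = z
    # data misfit on each segment (segment solution: s * exp(-k*(t - t0)))
    r0 = s0 * np.exp(-k * (t[:16] - t[0])) - x_data[:16]
    r1 = s1 * np.exp(-k * (t[15:] - t[15])) - x_data[15:]
    # continuity defect: segment 0 propagated to t[15] must match s1
    defect = s0 * np.exp(-k * (t[15] - t[0])) - s1
    return np.concatenate([r0, r1, [defect]])

sol = least_squares(residuals, x0=[0.5, 1.0, 1.0])
k_est, s0_est, s1_est = sol.x
```

Because each segment only integrates over a short horizon, the residual surface is much better behaved than the single-shooting one, which is the robustness point made in the abstract.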
NASA Astrophysics Data System (ADS)
Yong, Kilyuk; Jo, Sujang; Bang, Hyochoong
This paper presents a modified Rodrigues parameter (MRP)-based nonlinear observer design to estimate bias, scale factor and misalignment of gyroscope measurements. A Lyapunov stability analysis is carried out for the nonlinear observer. Simulation is performed and results are presented illustrating the performance of the proposed nonlinear observer under the condition of persistent excitation maneuver. In addition, a comparison between the nonlinear observer and alignment Kalman filter (AKF) is made to highlight favorable features of the nonlinear observer.
NASA Astrophysics Data System (ADS)
Ollongren, Alexander
2011-02-01
In a sequence of papers on the topic of message construction for interstellar communication by means of a cosmic language, the present author has discussed various significant requirements such a lingua should satisfy. The author's Lingua Cosmica is a (meta)system for annotating the contents of possibly large-scale messages for ETI. LINCOS, based on formal constructive logic, was primarily designed for dealing with the logical contents of messages but is also applicable for denoting structural properties of more general abstractions embedded in such messages. The present paper explains ways and means of achieving this for a special case: recursive entities. As usual two stages are involved: first the domain of discourse is enriched with suitable representations of the entities concerned, after which properties over them can be dealt with within the system itself. As a representative example the case of Russian dolls (matryoshkas) is discussed in some detail, and relations with linguistic structures in natural languages are briefly explored.
Reduced order parameter estimation using quasilinearization and quadratic programming
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Putti, Mario; Yeh, William W.-G.
2012-06-01
The ability of a particular model to accurately predict how a system responds to forcing is predicated on various model parameters that must be appropriately identified. There are many algorithms whose purpose is to solve this inverse problem, which is often computationally intensive. In this study, we propose a new algorithm that significantly reduces the computational burden associated with parameter identification. The algorithm is an extension of the quasilinearization approach where the governing system of differential equations is linearized with respect to the parameters. The resulting inverse problem therefore becomes a linear regression or quadratic programming problem (QP) for minimizing the sum of squared residuals; the solution becomes an update on the parameter set. This process of linearization and regression is repeated until convergence takes place. This algorithm has not received much attention, as the QPs can become quite large, often infeasible for real-world systems. To alleviate this drawback, proper orthogonal decomposition is applied to reduce the size of the linearized model, thereby reducing the computational burden of solving each QP. In fact, this study shows that the snapshots need only be calculated once at the very beginning of the algorithm, after which no further calculations of the reduced-model subspace are required. The proposed algorithm therefore only requires one linearized full-model run per parameter at the first iteration followed by a series of reduced-order QPs. The method is applied to a groundwater model with about 30,000 computation nodes where as many as 15 zones of hydraulic conductivity are estimated.
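The core quasilinearization step, minus the proper-orthogonal-decomposition reduction and the QP bound constraints, amounts to iterating a linear least-squares regression on the parameter update. A minimal sketch on a hypothetical two-parameter model:

```python
import numpy as np

# Hypothetical stand-in for the linearized system response: y = a*(1 - exp(-b*t))
t = np.linspace(0.1, 5.0, 25)
a_true, b_true = 3.0, 0.8
y = a_true * (1.0 - np.exp(-b_true * t))

def model(p):
    a, b = p
    return a * (1.0 - np.exp(-b * t))

p = np.array([1.0, 1.0])  # initial guess
for _ in range(50):
    # Jacobian of the model w.r.t. the parameters (central finite differences)
    h, J = 1e-7, np.empty((t.size, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = h
        J[:, j] = (model(p + dp) - model(p - dp)) / (2.0 * h)
    # Linearized inverse problem: J @ delta ~ y - model(p) -> least squares
    delta, *_ = np.linalg.lstsq(J, y - model(p), rcond=None)
    p = p + delta
    if np.linalg.norm(delta) < 1e-10:
        break
a_est, b_est = p
```

In the paper the regression becomes a QP (to honor parameter bounds) and the Jacobian comes from a reduced-order model, but the iterate-linearize-regress loop is the same.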
A robust methodology for modal parameters estimation applied to SHM
NASA Astrophysics Data System (ADS)
Cardoso, Rharã; Cury, Alexandre; Barbosa, Flávio
2017-10-01
The subject of structural health monitoring has drawn more and more attention over recent years. Many vibration-based techniques aiming at detecting small structural changes or even damage have been developed or enhanced in successive studies. Lately, several studies have focused on the use of raw dynamic data to assess information about structural condition. Despite this trend and much skepticism, many methods still rely on the use of modal parameters as fundamental data for damage detection. Therefore, it is of utmost importance that modal identification procedures are performed with a sufficient level of precision and automation. To fulfill these requirements, this paper presents a novel automated time-domain methodology to identify modal parameters based on a two-step clustering analysis. The first step consists in clustering mode estimates from parametric models of different orders, usually presented in stabilization diagrams. In an automated manner, the first clustering analysis indicates which estimates correspond to physical modes. To circumvent the detection of spurious modes or the loss of physical ones, a second clustering step is then performed, consisting in the data mining of information gathered from the first step. To attest the robustness and efficiency of the proposed methodology, numerically generated signals as well as experimental data obtained from a simply supported beam tested in the laboratory and from a railway bridge are utilized. The results appear more robust and accurate compared with those obtained from methods based on one-step clustering analysis.
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
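The simplest point of contrast among these estimators shows up already in the spherical GLM: ML and ReML share the OLS parameter estimate, but ML divides the residual sum of squares by n while ReML divides by n - p, removing the downward bias incurred by estimating beta. A hedged numerical sketch (synthetic design matrix, not fMRI data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true, sigma2_true = np.array([2.0, -1.0, 0.5]), 4.0
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2_true), n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # OLS = ML = ReML point estimate
rss = float(np.sum((y - X @ beta_hat) ** 2))

sigma2_ml = rss / n          # ML: biased low by a factor (n - p)/n
sigma2_reml = rss / (n - p)  # ReML: unbiased under the spherical model
```

The non-spherical case treated in the paper replaces the closed forms with iterative free-energy maximization, but this n versus n - p contrast is the intuition behind "restricted" maximum likelihood.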
Parameter estimation in slow fast models: a palaeoclimate application
NASA Astrophysics Data System (ADS)
Almeida, Carlos; Crucifix, Michel
2013-04-01
Ice ages have paced climate for about 3 million years. They are characterised by a succession of glacial and interglacial eras, the latest interglacial era having begun approximately 11,000 years ago. There is debate about the timing of the next glacial era, although this is only one of many possible questions about the dynamics of ice ages. Our focus is on how to express these questions in a statistically coherent framework. Even leaving aside chronological uncertainties, the problem is challenging, and we show why. The ice volume oscillations have been modelled by using a non-linear stochastic differential equation with a drift function involving astronomical forcing and a Wiener process as a noise term. Additionally, the observation measure through a proxy is considered as contaminated by another independent noise. The deterministic version of this model has potentially complex dynamics, which can be connected to the theory of strange non-chaotic attractors. The challenge we are facing with the calibration of this model is partly related to the complexity of its dynamics. For estimating the parameters of the model based on observations, two strategies are considered, one by extending the space of the unobserved variables for including the parameters, and the other by approximating the integral over the unobserved variables in order to obtain the marginal likelihood. Both for the estimation in the extended model and for the numerical integration, an unscented Kalman filter and a particle filter are used and compared.
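The state-augmentation strategy with a particle filter can be sketched on a toy linear-Gaussian model (not the palaeoclimate model): the unknown parameter is appended to the state and given small artificial dynamics so the particle cloud stays diverse.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: x_{t+1} = a*x_t + process noise, y_t = x_t + obs noise
a_true, q, r, T = 0.8, 0.3, 0.5, 300
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0.0, q)
    ys.append(x + rng.normal(0.0, r))

# Bootstrap particle filter on the augmented state (x, a)
Np = 2000
px = rng.normal(0.0, 1.0, Np)    # state particles
pa = rng.uniform(-1.0, 1.0, Np)  # parameter particles
for y in ys:
    px = pa * px + rng.normal(0.0, q, Np)
    pa = pa + rng.normal(0.0, 0.01, Np)  # artificial parameter dynamics
    w = np.exp(-0.5 * ((y - px) / r) ** 2) + 1e-300  # Gaussian obs likelihood
    w /= w.sum()
    idx = rng.choice(Np, size=Np, p=w)   # multinomial resampling
    px, pa = px[idx], pa[idx]

a_est = float(pa.mean())
```

The same augmented-state construction carries over to the unscented Kalman filter mentioned in the abstract; the particle filter simply avoids the Gaussian approximation at the cost of more computation.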
Periodic orbits of hybrid systems and parameter estimation via AD.
Guckenheimer, John; Phipps, Eric Todd; Casey, Richard
2004-07-01
Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential-algebraic equations is discretized via multiple shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance
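The automatic differentiation underpinning these sensitivity computations can be illustrated with a minimal forward-mode (dual-number) sketch, here propagating dx/dk through an explicit Euler integrator for dx/dt = -k*x. This is a stand-in, far simpler than the high-degree Taylor-series integrator the paper uses.

```python
import math

class Dual:
    """Minimal forward-mode AD: a value plus one derivative channel."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.dot)

def euler_decay(k, x0=2.0, dt=0.001, steps=1000):
    """Explicit Euler for dx/dt = -k*x; works on floats or Duals."""
    x = Dual(x0) if isinstance(k, Dual) else x0
    for _ in range(steps):
        x = x + dt * (-k * x)
    return x

# Seed the derivative channel of k to get dx/dk alongside x(T), T = 1
k = Dual(1.3, 1.0)
xT = euler_decay(k)
# Analytic check: x(T) = x0*exp(-k*T), dx/dk = -T*x0*exp(-k*T)
```

The derivative is exact for the discretized trajectory (no finite-difference noise), which is why AD-based sensitivities behave well inside Newton iterations on the shooting equations.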
Parameter Estimation for a Hybrid Adaptive Flight Controller
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje
2009-01-01
This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper focuses almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally, this paper presents results that, using a full nonlinear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique is provided, along with results from seven well-known modifications to the popular RLS algorithm.
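A textbook RLS update with exponential forgetting, shown below as a generic sketch (not the NASA implementation or any of its seven modifications):

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS with exponential forgetting."""

    def __init__(self, n_params, lam=0.99, p0=1e4):
        self.theta = np.zeros(n_params)  # parameter estimate
        self.P = np.eye(n_params) * p0   # inverse-correlation matrix
        self.lam = lam                   # forgetting factor (<1 tracks drift)

    def update(self, phi, y):
        """One measurement y with regressor vector phi."""
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)              # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Identify y = 2*u1 - 0.5*u2 from noiseless data
rng = np.random.default_rng(3)
rls = RecursiveLeastSquares(2)
for _ in range(200):
    u = rng.normal(size=2)
    rls.update(u, 2.0 * u[0] - 0.5 * u[1])
```

A forgetting factor below one is what lets the estimator track the "drastic changes in dynamics" the abstract refers to, at the cost of noisier estimates; the paper's modifications address exactly this trade-off.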
Estimates of genetic parameters for growth traits in Kermani sheep.
Bahreini Behzadi, M R; Shahroudi, F E; Van Vleck, L D
2007-10-01
Birth weight (BW), weaning weight (WW), 6-month weight (W6), 9-month weight (W9) and yearling weight (YW) of Kermani lambs were used to estimate genetic parameters. The data were collected from the Shahrbabak Sheep Breeding Research Station in Iran during the period 1993-1998. The fixed effects in the model were lambing year, sex, type of birth and age of dam. The number of days between the birth date and the date of measurement of each record was used as a covariate. Estimates of (co)variance components and genetic parameters were obtained by restricted maximum likelihood, using single- and two-trait animal models. Based on the most appropriate fitted model, direct and maternal heritabilities of BW, WW, W6, W9 and YW were estimated to be 0.10 +/- 0.06 and 0.27 +/- 0.04, 0.22 +/- 0.09 and 0.19 +/- 0.05, 0.09 +/- 0.06 and 0.25 +/- 0.04, 0.13 +/- 0.08 and 0.18 +/- 0.05, and 0.14 +/- 0.08 and 0.14 +/- 0.06, respectively. Direct and maternal genetic correlations between the lamb weights ranged from 0.66 to 0.99 and from 0.11 to 0.99, respectively. The results showed that the maternal influence on lamb weights decreased with age at measurement. Ignoring maternal effects in the model caused overestimation of direct heritability. Maternal effects are significant sources of variation for growth traits, and ignoring them in the model would cause inaccurate genetic evaluation of lambs.
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
NASA Astrophysics Data System (ADS)
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
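One family of informal-likelihood methods of the kind alluded to here is GLUE-style screening: sample parameters, score simulations with an informal measure such as the Nash-Sutcliffe efficiency, and retain "behavioural" sets. A hedged sketch on a hypothetical logistic outbreak curve (not the authors' cholera model):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "epidemic": cumulative cases c(t) = K / (1 + exp(-r*(t - 30)))
t = np.arange(0.0, 60.0)
def model(K, r):
    return K / (1.0 + np.exp(-r * (t - 30.0)))

obs = model(1000.0, 0.25) + rng.normal(0.0, 20.0, t.size)  # synthetic data

def nse(sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, < 0 = worse than the mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# GLUE-style screening: Monte Carlo sample, keep behavioural parameter sets
samples = np.column_stack([rng.uniform(500, 1500, 5000),
                           rng.uniform(0.05, 0.5, 5000)])
scores = np.array([nse(model(K, r)) for K, r in samples])
behavioural = samples[scores > 0.9]  # informal threshold (an assumption)
```

The spread of the behavioural set then serves as the uncertainty estimate, sidestepping the formal likelihood whose assumptions the abstract argues are violated.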
Automatic parameter estimation for atmospheric turbulence mitigation techniques
NASA Astrophysics Data System (ADS)
Kozacik, Stephen; Paolini, Aaron; Kelmelis, Eric
2015-05-01
Several image processing techniques for turbulence mitigation have been shown to be effective under a wide range of long-range capture conditions; however, complex, dynamic scenes have often required manual interaction with the algorithm's underlying parameters to achieve optimal results. While this level of interaction is sustainable in some workflows, in-field determination of ideal processing parameters greatly diminishes usefulness for many operators. Additionally, some use cases, such as those that rely on unmanned collection, lack human-in-the-loop usage. To address this shortcoming, we have extended a well-known turbulence mitigation algorithm based on bispectral averaging with a number of techniques to greatly reduce (and often eliminate) the need for operator interaction. Automations were made in the areas of turbulence strength estimation (Fried's parameter), as well as the determination of optimal local averaging windows to balance turbulence mitigation and the preservation of dynamic scene content (non-turbulent motions). These modifications deliver a level of enhancement quality that approaches that of manual interaction, without the need for operator interaction. As a consequence, the range of operational scenarios where this technology is of benefit has been significantly expanded.
Parameter Estimation of Nonlinear Systems by Dynamic Cuckoo Search.
Liao, Qixiang; Zhou, Shudao; Shi, Hanqing; Shi, Weilai
2017-04-01
To address the problems of the traditional and improved cuckoo search (CS) algorithms, we propose a dynamic adaptive cuckoo search with crossover operator (DACS-CO) algorithm. Normally, the parameters of the CS algorithm are kept constant or adapted by an empirical equation, which may reduce the algorithm's efficiency. To solve this problem, a feedback control scheme for the algorithm parameters is adopted in cuckoo search: Rechenberg's 1/5 criterion, combined with a learning strategy, is used to evaluate the evolution process. In addition, the standard CS algorithm exchanges no information between individuals. To promote search progress and overcome premature convergence, a multiple-point random crossover operator is merged into the CS algorithm to exchange information between individuals and improve the diversification and intensification of the population. The performance of the proposed hybrid algorithm is investigated on different nonlinear systems, with the numerical results demonstrating that the method can estimate parameters accurately and efficiently. Finally, we compare the results with the standard CS algorithm, an orthogonal learning cuckoo search algorithm (OLCS), an adaptive and simulated annealing operation with the cuckoo search algorithm (ACS-SA), a genetic algorithm (GA), a particle swarm optimization algorithm (PSO), and a genetic simulated annealing algorithm (GA-SA). Our simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
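A minimal standard cuckoo search (the Yang-Deb form with Lévy flights and nest abandonment, not the DACS-CO variant proposed in the paper) can be sketched as follows; the step scale and test function are arbitrary choices for illustration.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for approximately Levy-distributed steps
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=25, pa=0.25, iters=300, lo=-5.0, hi=5.0):
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        # Levy-flight phase: propose a move for every nest, keep improvements
        for i in range(n_nests):
            trial = np.clip(nests[i] + 0.05 * levy_step(dim), lo, hi)
            ft = f(trial)
            if ft < fit[i]:
                nests[i], fit[i] = trial, ft
        # Abandonment phase: a fraction pa of nests take a biased random walk
        mask = rng.random(n_nests) < pa
        step = rng.random((n_nests, 1)) * (nests[rng.permutation(n_nests)] -
                                           nests[rng.permutation(n_nests)])
        new = np.clip(nests + mask[:, None] * step, lo, hi)
        fnew = np.array([f(x) for x in new])
        improved = fnew < fit
        nests[improved], fit[improved] = new[improved], fnew[improved]
    return nests[fit.argmin()], float(fit.min())

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)))
```

The fixed step scale and constant pa here are exactly the "kept constant" parameters the abstract criticizes; DACS-CO adapts them with Rechenberg's 1/5 rule and adds crossover between nests.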
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
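For each elementary step, the thermodynamic-consistency criterion reduces to requiring kf/kr = Keq = exp(dS/R) * exp(-dH/(R*T)); a sketch with hypothetical values:

```python
import numpy as np

R = 8.314  # J/(mol*K)

def equilibrium_constant(dS, dH, T):
    """K_eq from standard reaction entropy (J/mol/K) and enthalpy (J/mol)."""
    return np.exp(dS / R) * np.exp(-dH / (R * T))

def is_consistent(kf, kr, dS, dH, T, rtol=1e-3):
    """Thermodynamic consistency of an elementary step: kf/kr must equal K_eq."""
    return bool(np.isclose(kf / kr, equilibrium_constant(dS, dH, T), rtol=rtol))

# Hypothetical elementary step at 600 K: dS = -50 J/mol/K, dH = -40 kJ/mol
T, dS, dH = 600.0, -50.0, -40e3
K = equilibrium_constant(dS, dH, T)
kf = 1.0e3
kr_consistent = kf / K  # reverse rate constant implied by thermodynamics
```

In the paper this check (plus the entropy-production criterion) is folded into the fitting objective as a penalty, so candidate parameter sets are rejected not only for poor conversion fits but also for violating these constraints.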
Estimating negative binomial parameters from occurrence data with detection times.
Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub
2016-11-01
The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available, the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples.
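The identifiability problem described above can be seen numerically: under a negative binomial with mean mu and aggregation index k, the occupancy probability is P(count > 0) = 1 - (1 + mu/k)^(-k), and distinct (mu, k) pairs can produce the same occupancy, so occurrence data alone cannot separate the two parameters.

```python
import numpy as np
from scipy.optimize import brentq

def occupancy(mu, k):
    """P(count > 0) under a negative binomial with mean mu and size k."""
    return 1.0 - (1.0 + mu / k) ** (-k)

# Two different parameter pairs matched to the same occupancy probability:
target = occupancy(2.0, 0.5)   # mu = 2, k = 0.5
mu2 = 5.0
k2 = brentq(lambda k: occupancy(mu2, k) - target, 1e-6, 100.0)
```

Both (2.0, 0.5) and (5.0, k2) are equally compatible with any occupancy dataset; the paper's detection times supply the extra information that breaks this tie.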
Estimation of the poroelastic parameters of cortical bone.
Smit, Theo H; Huyghe, Jacques M; Cowin, Stephen C
2002-06-01
Cortical bone has two systems of interconnected channels. The larger of these is the vascular porosity, consisting of Haversian and Volkmann's canals with a diameter of about 50 μm, which contains, among other things, blood vessels and nerves. The smaller is the system consisting of the canaliculi and lacunae: the canaliculi are at the submicron level and house the protrusions of the osteocytes. When bone is differentially loaded, fluids within the solid matrix sustain a pressure gradient that drives a flow. It is generally assumed that the flow of extracellular fluid around osteocytes plays an important role not only in the nutrition of these cells, but also in the bone's mechanosensory system. The interaction between the deformation of the bone matrix and the flow of fluid can be modelled using Biot's theory of poroelasticity. However, due to the inhomogeneity of the bone matrix and the scale of the porosities, it is not possible to experimentally determine all the parameters that are needed for numerical implementation. The purpose of this paper is to derive these parameters using composite modelling and experimental data from the literature. A full set of constants is estimated for a linear isotropic description of cortical bone as a two-level porous medium. Bone, however, has a wide variety of mechanical and structural properties; with the theoretical relationships described in this note, poroelastic parameters can be derived for other bone types using their specific experimental data sets.
Learn-as-you-go acceleration of cosmological parameter estimates
Aslanyan, Grigor; Easther, Richard; Price, Layne C. E-mail: r.easther@auckland.ac.nz
2015-09-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
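The learn-as-you-go pattern can be sketched with a deliberately crude nearest-neighbour emulator; the Cosmo++ implementation builds a proper error model and propagates it into the posteriors, so this shows only the control flow.

```python
import numpy as np

def expensive_loglike(theta):
    # Stand-in for a slow likelihood (e.g. a full Boltzmann-code call)
    return -0.5 * float(np.sum(theta ** 2))

class LearnAsYouGo:
    """Nearest-neighbour emulator with exact-evaluation fallback (a sketch)."""

    def __init__(self, exact_fn, trust_radius=0.3):
        self.exact_fn = exact_fn
        self.trust_radius = trust_radius  # crude stand-in for an error model
        self.X, self.y = [], []
        self.n_exact = 0

    def __call__(self, theta):
        theta = np.asarray(theta, dtype=float)
        if self.X:
            d = np.linalg.norm(np.array(self.X) - theta, axis=1)
            i = int(d.argmin())
            if d[i] < self.trust_radius:
                return self.y[i]        # trusted emulated value
        val = self.exact_fn(theta)      # fallback: exact call, then learn it
        self.X.append(theta); self.y.append(val)
        self.n_exact += 1
        return val

rng = np.random.default_rng(5)
ll = LearnAsYouGo(expensive_loglike)
pts = rng.normal(0.0, 1.0, (500, 2))
vals = [ll(p) for p in pts]
```

As the sampler revisits well-explored regions, ever fewer calls fall through to the exact likelihood, which is where the quoted speedup factors come from.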
Estimation of Wheat Agronomic Parameters using New Spectral Indices
Jin, Xiu-liang; Diao, Wan-ying; Xiao, Chun-hua; Wang, Fang-yong; Chen, Bing; Wang, Ke-ru; Li, Shao-kun
2013-01-01
Crop agronomic parameters (leaf area index (LAI), nitrogen (N) uptake, and total chlorophyll (Chl) content) are very important for the prediction of crop growth. The objective of this experiment was to investigate whether wheat LAI, N uptake, and total Chl content could be accurately predicted using spectral indices collected at different stages of wheat growth. Firstly, the product of the optimized soil-adjusted vegetation index and wheat biomass dry weight (OSAVI×BDW) was used to estimate LAI, N uptake, and total Chl content; secondly, BDW was replaced by spectral indices to establish new spectral indices (OSAVI×OSAVI, OSAVI×SIPI, OSAVI×CIred edge, OSAVI×CIgreen, and OSAVI×EVI2); finally, we used the new spectral indices to estimate LAI, N uptake, and total Chl content. The results showed that the new spectral indices could be used to accurately estimate LAI, N uptake, and total Chl content. The highest R2 and lowest RMSEs were 0.711 and 0.78 (OSAVI×EVI2), 0.785 and 3.98 g/m2 (OSAVI×CIred edge), and 0.846 and 0.65 g/m2 (OSAVI×CIred edge) for LAI, nitrogen uptake, and total Chl content, respectively. The new spectral indices performed better than the OSAVI alone, and the problems of a lack of sensitivity at earlier growth stages and saturation at later growth stages, which are typically associated with the OSAVI, were alleviated. The overall results indicated that these new spectral indices provided the best approximation for the estimation of agronomic indices across all growth stages of wheat. PMID:24023639
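The building-block indices are simple band-ratio formulas; the sketch below uses common published forms of OSAVI, EVI2 and the red-edge chlorophyll index (the reflectance values are hypothetical), with the paper's product indices obtained by multiplication.

```python
import numpy as np

def osavi(nir, red):
    # Optimized soil-adjusted vegetation index (Rondeaux et al. form)
    return 1.16 * (nir - red) / (nir + red + 0.16)

def evi2(nir, red):
    # Two-band enhanced vegetation index (Jiang et al. form)
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def ci_red_edge(nir, red_edge):
    # Red-edge chlorophyll index
    return nir / red_edge - 1.0

# Hypothetical canopy reflectances
nir, red, red_edge = 0.45, 0.05, 0.20

# Product indices of the kind used in the paper
osavi_evi2 = osavi(nir, red) * evi2(nir, red)
osavi_ci = osavi(nir, red) * ci_red_edge(nir, red_edge)
```

The products are then regressed against measured LAI, N uptake, or Chl content; the multiplication is what extends the sensitive range beyond that of OSAVI alone.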
US-Based Drug Cost Parameter Estimation for Economic Evaluations.
Levy, Joseph F; Meek, Patrick D; Rosenberg, Marjorie A
2015-07-01
In the United States, more than 10% of national health expenditures are for prescription drugs. Assessing drug costs in US economic evaluation studies is not consistent, as the true acquisition cost of a drug is not known by decision modelers. Current US practice focuses on identifying one reasonable drug cost and imposing some distributional assumption to assess uncertainty. We propose a set of Rules based on current pharmacy practice that account for the heterogeneity of drug product costs. The set of products derived from our Rules, and their associated costs, form an empirical distribution that can be used for more realistic sensitivity analyses and create transparency in drug cost parameter computation. The Rules specify an algorithmic process to select clinically equivalent drug products that reduce pill burden, use an appropriate package size, and assume uniform weighting of substitutable products. Three diverse examples show derived empirical distributions and are compared with previously reported cost estimates. The shapes of the empirical distributions among the 3 drugs differ dramatically, including multiple modes and different variation. Previously published estimates differed from the means of the empirical distributions. Published ranges for sensitivity analyses did not cover the ranges of the empirical distributions. In one example using lisinopril, the empirical mean cost of substitutable products was $444 (range = $23-$953) as compared with a published estimate of $305 (range = $51-$523). Our Rules create a simple and transparent approach to creating cost estimates of drug products and assessing their variability. The approach is easily modified to include a subset of, or different weighting for, substitutable products. The derived empirical distribution is easily incorporated into 1-way or probabilistic sensitivity analyses. © The Author(s) 2014.
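A sketch of the empirical-distribution idea with uniform weighting over substitutable products; the unit costs below are hypothetical, not the lisinopril figures reported in the abstract:

```python
import random
import statistics

# Hypothetical unit costs (USD) for clinically substitutable products of
# one drug, each weighted uniformly as in the Rules described above.
costs = [23, 105, 240, 410, 515, 640, 810, 953]

mean_cost = statistics.mean(costs)
low, high = min(costs), max(costs)
print(f"mean ${mean_cost:.0f}, range ${low}-${high}")

# Probabilistic sensitivity analysis: resample the empirical distribution
# directly instead of imposing a parametric (e.g. gamma) assumption.
rng = random.Random(42)
draws = [rng.choice(costs) for _ in range(10_000)]
print(round(statistics.mean(draws)))
```

Restricting `costs` to a subset, or replacing the uniform weights with market-share weights, is the modification the authors describe as straightforward.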
Excitations for Rapidly Estimating Flight-Control Parameters
NASA Technical Reports Server (NTRS)
Moes, Tim; Smith, Mark; Morelli, Gene
2006-01-01
A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
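The brute-force enumeration used above as a baseline can be illustrated on a simplified problem: the sketch below fits pure AR models by least squares and scores them with AIC. This is an assumption-laden stand-in, not the paper's MINLP treatment of full ARMA models with Kalman-filter likelihoods:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and RSS."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return coef, rss

def select_order_aic(x, p_max=5):
    """Brute-force enumeration over the integer order, scored by AIC."""
    best = None
    for p in range(1, p_max + 1):
        _, rss = fit_ar(x, p)
        n = len(x) - p
        aic = n * np.log(rss / n) + 2 * p   # Gaussian AIC up to a constant
        if best is None or aic < best[1]:
            best = (p, aic)
    return best[0]

rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(2, 600):                     # simulate a stationary AR(2)
    x[t] = 0.6 * x[t-1] - 0.3 * x[t-2] + rng.standard_normal()
print(select_order_aic(x))
```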
NASA Astrophysics Data System (ADS)
Gharamti, M. E.; Valstar, J.; Hoteit, I.
2014-09-01
Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% relative to the standard EnKF scheme.
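The dual state-parameter idea can be sketched with a toy scalar model standing in for the reactive-transport simulator. This is a plain EnKF with an augmented state (concentration plus an uncertain sorption-like coefficient), not the hybrid EnKF-OI scheme, and every number is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: c_new = c * exp(-k * dt), a first-order decay stand-in
# for transport with sorption. We observe c only and infer k jointly.
N, dt, obs_err = 200, 1.0, 0.05
k_true, c0 = 0.3, 1.0

ens = np.column_stack([
    rng.normal(c0, 0.1, N),       # state ensemble (concentration)
    rng.normal(0.5, 0.2, N),      # parameter ensemble (biased prior for k)
])

for step in range(5):
    ens[:, 0] *= np.exp(-np.clip(ens[:, 1], 0, None) * dt)   # forecast
    y = c0 * np.exp(-k_true * dt * (step + 1)) + rng.normal(0, obs_err)
    A = ens - ens.mean(axis=0)
    P = A.T @ A / (N - 1)                  # 2x2 ensemble covariance
    H = np.array([1.0, 0.0])               # observation operator: c only
    K = P @ H / (H @ P @ H + obs_err**2)   # Kalman gain (2-vector)
    perturbed = y + rng.normal(0, obs_err, N)
    ens += np.outer(perturbed - ens[:, 0], K)

print(ens[:, 1].mean())   # parameter estimate, pulled toward k_true
```

The cross-covariance between c and k, built up by the forecast step, is what lets an observation of concentration alone correct the parameter.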
NASA Astrophysics Data System (ADS)
Coskun, Orhan
For ≥10-Gbit/s bit rates transmitted over ≥100 km, it is essential that chromatic dispersion be compensated. The traditional method of sending a training signal to identify a channel, followed by data, may be viewed as a simple code for the unknown channel. Results in blind sequence detection suggest that performance similar to this traditional approach can be obtained without training. However, for short packets and/or time-recursive algorithms, significant error floors exist due to the existence of sequences that are indistinguishable without knowledge of the channel. In this work, we first reconsider training signal design in light of recent results in blind sequence detection. We design training codes which combine modulation and training. In order to design these codes, we find an expression for the pairwise error probability of the joint maximum likelihood (JML) channel and sequence estimator. This expression motivates a pairwise distance for the JML receiver based on principal angles between the range spaces of data matrices. The general code design problem (generalized sphere packing) is formulated as the clique problem associated with an unweighted, undirected graph. We provide optimal and heuristic algorithms for this clique problem. For short packets, we demonstrate that significant improvements are possible by jointly considering the design of the training, modulation, and receiver processing. As a practical blind data detection example, data reception in a fiber-optic channel is investigated. To get the most out of the data detection methods, auxiliary algorithms such as sampling-phase adjustment and decision-threshold estimation are suggested. For the parallel implementation of detectors, a semiring structure is introduced both for the decision feedback equalizer (DFE) and for maximum likelihood sequence detection (MLSD). Timing jitter is another parameter that affects the BER performance of the system. A data-aided clock recovery algorithm reduces the jitter of
Gilliom, R.J.; Helsel, D.R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores.
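The log-probability regression method described above can be sketched directly: regress the logarithms of the uncensored concentrations on their normal scores and read the lognormal parameters off the fitted line. The Blom plotting positions and the detection-limit data below are illustrative assumptions:

```python
from statistics import NormalDist, fmean
import math

def log_probability_regression(uncensored, n_total):
    """Sketch: fit log(concentration) vs. normal score by least squares;
    the intercept and slope estimate the log-scale mean and std deviation.
    Censored values are assumed to occupy the lower ranks of the sample."""
    m = len(uncensored)
    xs = sorted(uncensored)
    nd = NormalDist()
    # Blom plotting positions for the upper m ranks of a sample of n_total.
    z = [nd.inv_cdf((n_total - m + i + 1 - 0.375) / (n_total + 0.25))
         for i in range(m)]
    y = [math.log(v) for v in xs]
    zbar, ybar = fmean(z), fmean(y)
    num = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
    den = sum((zi - zbar) ** 2 for zi in z)
    slope = num / den
    mu = ybar - slope * zbar      # log-scale mean
    sigma = slope                 # log-scale standard deviation
    # Mean of the implied lognormal distribution for the full sample:
    return math.exp(mu + sigma**2 / 2)

# 20 samples, 8 below a detection limit of 1.0; only the 12 detects are used.
detects = [1.1, 1.3, 1.4, 1.8, 2.0, 2.4, 2.9, 3.3, 4.0, 5.1, 6.6, 9.0]
print(round(log_probability_regression(detects, n_total=20), 2))
```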
NASA Astrophysics Data System (ADS)
Newling, J.; Bassett, B.; Hlozek, R.; Kunz, M.; Smith, M.; Varughese, M.
2012-04-01
The original formulation of Bayesian estimation applied to multiple species (BEAMS) showed how to use a data set contaminated by points of multiple underlying types to perform unbiased parameter estimation. An example is cosmological parameter estimation from a photometric supernova sample contaminated by unknown Type Ibc and II supernovae. Where other methods require data cuts to increase purity, BEAMS uses all of the data points in conjunction with their probabilities of being each type. Here we extend the BEAMS formalism to allow for correlations between the data and the type probabilities of the objects as can occur in realistic cases. We show with simple simulations that this extension can be crucial, providing a 50 per cent reduction in parameter estimation variance when such correlations do exist. We then go on to perform tests to quantify the importance of the type probabilities, one of which illustrates the effect of biasing the probabilities in various ways. Finally, a general presentation of the selection bias problem is given, and discussed in the context of future photometric supernova surveys and BEAMS, which lead to specific recommendations for future supernova surveys.
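The core of the BEAMS likelihood is a per-point mixture weighted by the type probability, so no purity cut is needed. The Gaussian residual model and all values below are illustrative assumptions:

```python
import math

def beams_loglike(residuals_ia, residuals_other, p_ia, sigma=0.15):
    """Sketch of the BEAMS mixture: each data point contributes the Ia and
    non-Ia likelihoods weighted by its probability of being each type.
    A common Gaussian residual width is assumed for illustration."""
    def gauss(r):
        return math.exp(-0.5 * (r / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    total = 0.0
    for ria, rother, p in zip(residuals_ia, residuals_other, p_ia):
        total += math.log(p * gauss(ria) + (1 - p) * gauss(rother))
    return total

# Two supernovae: residuals under the Ia model and under the non-Ia model,
# with type probabilities from (hypothetical) photometric classification.
print(beams_loglike([0.05, -0.1], [1.2, 0.9], [0.9, 0.7]))
```

Maximising this quantity over cosmological parameters (which enter through the residuals) uses every point, downweighting probable contaminants instead of discarding them.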
Modal parameters estimation in the Z-domain
NASA Astrophysics Data System (ADS)
Fasana, Alessandro
2009-01-01
This paper aims to explain, in a clear, plain and detailed way, a modal parameter estimation method in the frequency domain, or similarly in the Z-domain, valid for multi-degree-of-freedom systems. The technique is based on the rational fraction polynomials (RFP) representation of the frequency-response function (FRF) of a single-input single-output (SISO) system but is readily extended to multi-input multi-output (MIMO) and output-only problems. A least-squares approach is adopted to take into account the information of all the FRFs but, when large data sets are used, the solution of the resulting system of algebraic linear equations can be a long and difficult task. A procedure to drastically reduce the problem dimensions is then adopted and fully explained; some practical hints are also given in order to achieve well-conditioned matrices. The method is validated through numerical and experimental examples.
Estimating Phenomenological Parameters in Multi-Assets Markets
NASA Astrophysics Data System (ADS)
Raffaelli, Giacomo; Marsili, Matteo
Financial correlations exhibit a non-trivial dynamic behavior. This is reproduced by a simple phenomenological model of a multi-asset financial market, which takes into account the impact of portfolio investment on price dynamics. This captures the fact that correlations determine the optimal portfolio but are affected by investment based on it. Such a feedback on correlations gives rise to an instability when the volume of investment exceeds a critical value. Close to the critical point the model exhibits dynamical correlations very similar to those observed in real markets. We discuss how the model's parameters can be estimated from real market data with a maximum likelihood principle. This confirms the main conclusion that real markets operate close to a dynamically unstable point.
Cosmological parameter estimation with large scale structure observations
NASA Astrophysics Data System (ADS)
Di Dio, Enea; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien
2014-01-01
We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, Cl(z1,z2), calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard P(k) analysis with the new Cl(z1,z2) method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the P(k) analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information, is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, C0(z1,z2).
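A Fisher matrix forecast of the kind mentioned above can be sketched for any parametrised observable. The linear toy model below stands in for the angular power spectra, with derivatives taken numerically; all values are illustrative:

```python
import numpy as np

def fisher_matrix(model, theta0, sigma, eps=1e-5):
    """Gaussian Fisher matrix F_ij = sum_k (dmu_k/dtheta_i)(dmu_k/dtheta_j)
    / sigma_k^2, with central finite-difference derivatives."""
    theta0 = np.asarray(theta0, float)
    derivs = []
    for i in range(len(theta0)):
        tp, tm = theta0.copy(), theta0.copy()
        tp[i] += eps
        tm[i] -= eps
        derivs.append((model(tp) - model(tm)) / (2 * eps))
    D = np.array(derivs)            # shape (n_params, n_data)
    return (D / sigma**2) @ D.T

# Toy "observable": a straight line mu_k = a + b * x_k with uniform errors.
x = np.linspace(0, 1, 50)
model = lambda th: th[0] + th[1] * x
F = fisher_matrix(model, [1.0, 2.0], sigma=np.full(50, 0.1))
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalised 1-sigma forecasts
print(errors)
```

In a survey forecast `model` would return the binned Cl(z1,z2) or P(k) predictions and `sigma` their expected noise, but the linear algebra is unchanged.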
Enhancing parameter precision of optimal quantum estimation by quantum screening
NASA Astrophysics Data System (ADS)
Jiang, Huang; You-Neng, Guo; Qin, Xie
2016-02-01
We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system to inhibit the decoherence processes and erase the excited state to the ground state. Compared with the case without quantum screening, the results show that the quantum Fisher information with screening maintains a larger value during the evolution. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).
Virtual parameter-estimation experiments in Bioprocess-Engineering education.
Sessink, Olivier D T; Beeftink, Hendrik H; Hartog, Rob J M; Tramper, Johannes
2006-05-01
Cell growth kinetics and reactor concepts constitute essential knowledge for Bioprocess-Engineering students. Traditional learning of these concepts is supported by lectures, tutorials, and practicals: ICT offers opportunities for improvement. A virtual-experiment environment was developed that supports both model-related and experimenting-related learning objectives. Students have to design experiments to estimate model parameters: they choose initial conditions and 'measure' output variables. The results contain experimental error, which is an important constraint for experimental design. Students learn from these results and use the new knowledge to re-design their experiment. Within a couple of hours, students design and run many experiments that would take weeks in reality. Usage was evaluated in two courses with questionnaires and in the final exam. The faculties involved in the two courses are convinced that the experiment environment supports essential learning objectives well.
Simplified horn antenna parameter estimation using selective criteria
Ewing, P.D.
1991-01-01
An approximation can be used to avoid the complex mathematics and computation methods typically required for calculating the gain and radiation pattern of electromagnetic horn antenna. Because of the curvature of the antenna wave front, calculations using conventional techniques involve solving the Fresnel integrals and using computer-aided numerical integration. With this model, linear approximations give a reasonable estimate of the gain and radiation pattern using simple trigonometric functions, thereby allowing a hand calculator to replace the computer. Applying selected criteria, the case of the E-plane horn antenna was used to evaluate this technique. Results showed that the gain approximation holds for an antenna flare angle of less than 10° for typical antenna dimensions, and the E field radiation pattern approximation holds until the antenna's phase error approaches 60°, both within typical design parameters. This technique is a useful engineering tool. 4 refs., 11 figs.
Optimal segmentation of pupillometric images for estimating pupil shape parameters.
De Santis, A; Iacoviello, D
2006-12-01
The problem of determining the pupil morphological parameters from pupillometric data is considered. These characteristics are of great interest for non-invasive early diagnosis of the central nervous system response to environmental stimuli of different nature, in subjects suffering some typical diseases such as diabetes, Alzheimer disease, schizophrenia, drug and alcohol addiction. Pupil geometrical features such as diameter, area, centroid coordinates, are estimated by a procedure based on an image segmentation algorithm. It exploits the level set formulation of the variational problem related to the segmentation. A discrete set up of this problem that admits a unique optimal solution is proposed: an arbitrary initial curve is evolved towards the optimal segmentation boundary by a difference equation; therefore no numerical approximation schemes are needed, as required in the equivalent continuum formulation usually adopted in the relevant literature.
Recursive Algorithm For Linear Regression
NASA Technical Reports Server (NTRS)
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for the minimum order of a linear-regression model that fits a set of data satisfactorily.
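One way to read the idea: fit models of increasing order and stop when an added term no longer improves the fit materially. The stopping rule and tolerance below are illustrative assumptions, not the NTRS algorithm's recursive coefficient update:

```python
import numpy as np

def min_order_fit(x, y, tol=1e-3, max_order=8):
    """Increase polynomial order until the residual sum of squares stops
    improving by more than tol; return the minimal satisfactory order."""
    prev_rss = np.inf
    for order in range(max_order + 1):
        X = np.vander(x, order + 1, increasing=True)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ coef) ** 2))
        if prev_rss - rss < tol:      # no meaningful improvement: stop
            return order - 1, prev_coef
        prev_rss, prev_coef = rss, coef
    return max_order, coef

x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2        # exact quadratic, no noise
order, coef = min_order_fit(x, y)
print(order)
```

A production version would update the coefficients recursively when the order increases instead of refitting from scratch, which is the duplication the abstract says the algorithm eliminates.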
Recursive adaptive frame integration limited
NASA Astrophysics Data System (ADS)
Rafailov, Michael K.
2006-05-01
Recursive Frame Integration Limited was proposed as a way to improve frame integration performance and mitigate issues related to the high data rate needed for conventional frame integration. The technique applies two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - and allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when single-frame SNR is very low. Recursive Adaptive Frame Integration Limited is proposed as a means to improve limited-integration performance at very low single-frame SNR. It combines the benefits of nonlinear recursive limited frame integration and adaptive thresholds with a form of conventional frame integration.
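A toy rendering of the two-threshold recursive scheme: a low per-frame threshold feeds binary exceedances into a recursive accumulator, and a second threshold on the accumulated image controls false alarms. All thresholds, weights, and scene parameters here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

n_pix, n_frames = 100, 200
snr1 = 0.8                                   # weak target, SNR < 1 per frame
target = 37                                  # hypothetical target pixel
t_low, t_high, alpha = 0.5, 0.45, 0.98       # thresholds, recursion weight

acc = np.zeros(n_pix)
for _ in range(n_frames):
    frame = rng.standard_normal(n_pix)
    frame[target] += snr1
    exceed = (frame > t_low).astype(float)   # nonlinear single-frame decision
    acc = alpha * acc + (1 - alpha) * exceed # recursive integration

detections = np.flatnonzero(acc > t_high)
print(detections)
```

Only one bit per pixel per frame crosses the recursion, which is the data-rate benefit the abstract describes; the accumulator converges toward each pixel's exceedance probability, separating target from background.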
Smoothing of, and Parameter Estimation from, Noisy Biophysical Recordings
Huys, Quentin J. M.; Paninski, Liam
2009-01-01
Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo (“particle filtering”) methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximisation. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing of noisy biophysical data and for inference of biophysical parameters in the face of recording noise. PMID:19424506
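In its simplest bootstrap form, the sequential Monte Carlo machinery reduces to propagate, weight, resample. The scalar AR(1) toy model below stands in for the multicompartment biophysical dynamics; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy state-space model: x_t = a*x_{t-1} + process noise, y_t = x_t + noise.
T, N = 100, 500
a, q, r = 0.95, 0.1, 0.3      # dynamics, process noise, observation noise

truth = np.zeros(T)
for t in range(1, T):
    truth[t] = a * truth[t-1] + q * rng.standard_normal()
obs = truth + r * rng.standard_normal(T)

particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.standard_normal(N)   # propagate
    w = np.exp(-0.5 * ((obs[t] - particles) / r) ** 2)       # weight
    w /= w.sum()
    est[t] = np.sum(w * particles)                           # posterior mean
    idx = rng.choice(N, N, p=w)                              # resample
    particles = particles[idx]

rmse_filter = np.sqrt(np.mean((est - truth) ** 2))
rmse_obs = np.sqrt(np.mean((obs - truth) ** 2))
print(rmse_filter, rmse_obs)
```

Wrapping such a filter inside an expectation-maximisation loop over the model parameters is the inference strategy the abstract describes.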
Forage quantity estimation from MERIS using band depth parameters
NASA Astrophysics Data System (ADS)
Ullah, Saleem; Yali, Si; Schlerf, Martin
Saleem Ullah1 , Si Yali1 , Martin Schlerf1 Forage quantity is an important factor influencing feeding pattern and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for estimation of green biomass using MERIS data. Green biomass was best predicted by NBDI (normalized band depth index) and yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47 % of the measured mean) compared to a much lower accuracy obtained by soil adjusted vegetation index SAVI (444.6 g/m2, 154 % of the mean) and by other vegetation indices. This study will contribute to map and monitor foliar biomass over the year at regional scale which intern can aid the understanding of bird migration pattern. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters 1 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, The Netherlands
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu
2012-01-01
While soil enzymes have been explicitly included in the soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normal distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
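One plausible reading of the rate law with the factors analysed above combines Michaelis-Menten saturation, Arrhenius temperature scaling, and an exponential-quadratic (Gaussian) pH response; the functional form and all parameter values below are illustrative assumptions, not fitted values from the study:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def enzyme_rate(S, vmax_ref, km, ea, temp, ph, ph_opt, ph_sen,
                temp_ref=293.15):
    """Michaelis-Menten rate with Arrhenius temperature scaling and a
    Gaussian pH response; vmax_ref is referenced to temp_ref and ph_opt."""
    arrhenius = math.exp(-ea / R * (1 / temp - 1 / temp_ref))
    ph_factor = math.exp(-((ph - ph_opt) / ph_sen) ** 2)
    return vmax_ref * arrhenius * ph_factor * S / (km + S)

v_opt = enzyme_rate(S=50.0, vmax_ref=10.0, km=5.0, ea=40e3,
                    temp=293.15, ph=5.0, ph_opt=5.0, ph_sen=1.6)
# Moving sqrt(ln 2) * pHsen units away from pHopt halves the rate, matching
# the abstract's 1.1-1.7 pH unit half-reduction for pHsen around 1.3-2.0.
v_off = enzyme_rate(S=50.0, vmax_ref=10.0, km=5.0, ea=40e3,
                    temp=293.15, ph=5.0 + 1.6 * math.sqrt(math.log(2)),
                    ph_opt=5.0, ph_sen=1.6)
print(v_opt, v_off)
```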
Tradeoffs among watershed model calibration targets for parameter estimation
NASA Astrophysics Data System (ADS)
Price, Katie; Purucker, S. Thomas; Kraemer, Stephen R.; Babendreier, Justin E.
2012-10-01
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation fit, while modified Nash-Sutcliffe efficiency (MNS) emphasizes lower flows, and the ratio of the simulated to observed standard deviations (RSD) prioritizes flow variability. We investigated tradeoffs of calibrating streamflow on three standard objective functions (NSE, MNS, and RSD), as well as a multiobjective function aggregating these three targets to simultaneously address a range of flow conditions, for calibration of the Soil and Water Assessment Tool (SWAT) daily streamflow simulations in two watersheds. A suite of objective functions was explored to select a minimally redundant set of metrics addressing a range of flow characteristics. After each pass of 2001 simulations, an iterative informal likelihood procedure was used to subset parameter ranges. The ranges from each best-fit simulation set were used for model validation. Values for optimized parameters vary among calibrations using different objective functions, which underscores the importance of linking modeling objectives to calibration target selection. The simulation set approach yielded validated models of similar quality as seen with a single best-fit parameter set, with the added benefit of uncertainty estimations. Our approach represents a novel compromise between equifinality-based approaches and Pareto optimization. Combining the simulation set approach with the multiobjective function was demonstrated to be a practicable and flexible approach for model calibration, which can be readily modified to suit modeling goals, and is not model or location specific.
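The three calibration targets have compact definitions. The MNS variant below uses absolute deviations (exponent 1), one common form of the modified NSE; the flow series is invented for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; emphasises flood peaks."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mns(obs, sim):
    """Modified NSE with absolute deviations (one common variant), which
    down-weights peaks relative to the squared form."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum(np.abs(obs - sim)) / np.sum(np.abs(obs - obs.mean()))

def rsd(obs, sim):
    """Ratio of simulated to observed standard deviations (1 is ideal)."""
    return np.std(np.asarray(sim, float)) / np.std(np.asarray(obs, float))

obs = [1.0, 2.0, 8.0, 3.0, 1.5, 1.2]   # illustrative daily flows
sim = [1.1, 1.8, 7.0, 3.3, 1.4, 1.3]
print(nse(obs, sim), mns(obs, sim), rsd(obs, sim))
```

A multiobjective aggregate of the three, as in the study, can then be formed by penalising each metric's distance from its ideal value.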
Hopf algebras and topological recursion
NASA Astrophysics Data System (ADS)
Esteves, João N.
2015-11-01
We consider a model for topological recursion based on the Hopf algebra of planar binary trees defined by Loday and Ronco (1998 Adv. Math. 139 293-309). We show that by extending this Hopf algebra, identifying pairs of nearest-neighbor leaves and thus producing graphs with loops, we obtain the full recursion formula discovered by Eynard and Orantin (2007 Commun. Number Theory Phys. 1 347-452).
Simultaneous Position, Velocity, Attitude, Angular Rates, and Surface Parameter Estimation Using Astrometric and Photometric Observations
2013-07-01
…estimation is extended to include the various surface parameters associated with the bidirectional reflectance distribution function (BRDF)… parameters are estimated simultaneously. Keywords: estimation; data fusion; BRDF. Wetterer and Jah [1] first demonstrated how brightness…