Science.gov

Sample records for recursive parameter estimation

  1. Recursive stochastic subspace identification for structural parameter estimation

    NASA Astrophysics Data System (ADS)

    Chang, C. C.; Li, Z.

    2009-03-01

    Identification of structural parameters under ambient conditions is an important research topic for structural health monitoring and damage identification. This problem is especially challenging in practice as these structural parameters could vary with time under severe excitation. Among the techniques developed for this problem, stochastic subspace identification (SSI) is a popular time-domain method. The SSI can perform parametric identification for systems with multiple outputs, which cannot be easily done using other time-domain methods. The SSI uses the orthogonal-triangular (RQ) decomposition and the singular value decomposition (SVD) to process measured data, which makes the algorithm efficient and reliable. The SSI, however, processes data in one batch and hence cannot be used in an on-line fashion. In this paper, a recursive SSI method is proposed for on-line tracking of time-varying modal parameters for a structure under ambient excitation. The Givens rotation technique, which can annihilate designated matrix elements, is used to update the RQ decomposition. Instead of updating the SVD, the projection approximation subspace tracking technique, which uses an unconstrained optimization technique to track the signal subspace, is employed. The proposed technique is demonstrated on the Phase I ASCE benchmark structure. Results show that the technique can identify and track the time-varying modal properties of the building under ambient conditions.
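
    The Givens rotation at the heart of such a recursive RQ update can be illustrated with a minimal numpy sketch (an illustration of the standard technique, not the authors' implementation): a single rotation is chosen to annihilate one designated matrix element.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) so that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

# Annihilate element (1, 0) of a 2x2 block by a left rotation.
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s], [-s, c]])
B = G @ A
# B[1, 0] is now (numerically) zero; B[0, 0] equals hypot(3, 4) = 5.
```

    Chaining such rotations over the rows of the data matrix is what makes the RQ factor updatable one sample at a time.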

  2. Recursive estimation of 3D motion and surface structure from local affine flow parameters.

    PubMed

    Calway, Andrew

    2005-04-01

    A recursive structure from motion algorithm based on optical flow measurements taken from an image sequence is described. It provides estimates of surface normals in addition to 3D motion and depth. The measurements are affine motion parameters which approximate the local flow fields associated with near-planar surface patches in the scene. These are integrated over time to give estimates of the 3D parameters using an extended Kalman filter. This also estimates the camera focal length, and so the 3D estimates are metric. The use of parametric measurements means that the algorithm is computationally less demanding than previous optical flow approaches, and the recursive filter builds in a degree of noise robustness. Results of experiments on synthetic and real image sequences demonstrate that the algorithm performs well.

  3. Auto-SOM: recursive parameter estimation for guidance of self-organizing feature maps.

    PubMed

    Haese, K; Goodhill, G J

    2001-03-01

    An important technique for exploratory data analysis is to form a mapping from the high-dimensional data space to a low-dimensional representation space such that neighborhoods are preserved. A popular method for achieving this is Kohonen's self-organizing map (SOM) algorithm. However, in its original form, this requires the user to choose the values of several parameters heuristically to achieve good performance. Here we present the Auto-SOM, an algorithm that estimates the learning parameters during the training of SOMs automatically. The application of Auto-SOM provides the facility to avoid neighborhood violations up to a user-defined degree in either mapping direction. Auto-SOM consists of a Kalman filter implementation of the SOM coupled with a recursive parameter estimation method. The Kalman filter trains the neurons' weights with estimated learning coefficients so as to minimize the variance of the estimation error. The recursive parameter estimation method estimates the width of the neighborhood function by minimizing the prediction error variance of the Kalman filter. In addition, the "topographic function" is incorporated to measure neighborhood violations and prevent the map's converging to configurations with neighborhood violations. It is demonstrated that neighborhoods can be preserved in both mapping directions as desired for dimension-reducing applications. The development of neighborhood-preserving maps and their convergence behavior is demonstrated by three examples accounting for the basic applications of self-organizing feature maps.
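
    For context, a plain SOM weight update with hand-picked learning rate and neighborhood width, i.e. exactly the parameters that Auto-SOM estimates automatically, can be sketched as follows (illustrative values, not from the paper):

```python
import numpy as np

def som_step(W, x, eta=0.1, sigma=1.0):
    """One SOM update on a 1-D map: move the winning unit and its
    neighbors toward input x, weighted by a Gaussian neighborhood."""
    dists = np.linalg.norm(W - x, axis=1)
    win = np.argmin(dists)                       # best-matching unit
    idx = np.arange(len(W))
    h = np.exp(-0.5 * ((idx - win) / sigma) ** 2)
    return W + eta * h[:, None] * (x - W)

rng = np.random.default_rng(4)
W = rng.random((10, 1))            # 10 units mapping 1-D data in [0, 1)
for _ in range(2000):
    x = rng.random(1)
    W = som_step(W, x)
# after training, neighboring units tend to have nearby weights
```

    In the Auto-SOM framework the fixed eta and sigma above are replaced by values estimated on-line from the Kalman filter's prediction errors.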

  4. Bayesian recursive image estimation.

    NASA Technical Reports Server (NTRS)

    Nahi, N. E.; Assefi, T.

    1972-01-01

    Discussion of a statistical procedure for treatment of noise-affected images to recover unaffected images by recursive processing with noise background elimination. The feasibility of the application of a recursive linear Kalman filtering technique to image processing is demonstrated. The procedure is applicable to images which are characterized statistically by mean and correlation functions. A time invariant dynamic model is proposed to provide stationary statistics for the scanner output.
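
    The recursive linear Kalman filtering idea can be illustrated in its simplest scalar form (a generic sketch with made-up model constants, not the authors' image model):

```python
import numpy as np

def kalman_step(x, p, z, a=0.9, q=0.01, r=0.5):
    """Scalar Kalman filter: predict x' = a*x, then correct with measurement z."""
    x_pred = a * x
    p_pred = a * a * p + q
    k = p_pred / (p_pred + r)        # Kalman gain
    x = x_pred + k * (z - x_pred)    # measurement update
    p = (1.0 - k) * p_pred
    return x, p

rng = np.random.default_rng(5)
truth, x, p = 5.0, 0.0, 10.0
for _ in range(200):
    truth = 0.9 * truth + rng.normal(0.0, 0.1)   # true state evolves
    z = truth + rng.normal(0.0, 0.5 ** 0.5)      # noisy observation
    x, p = kalman_step(x, p, z)
# x now tracks truth to within the steady-state filter error
```

    In the image setting the scalar state is replaced by a state vector driven by the scanner output, but the predict/correct recursion is the same.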

  5. Chandrasekhar-type algorithms for fast recursive estimation in linear systems with constant parameters

    NASA Technical Reports Server (NTRS)

    Choudhury, A. K.; Djalali, M.

    1975-01-01

    In the recursive method proposed, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation, but from certain other differential equations of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Initial experience showed that the method offers some computational savings and that the covariance matrix is less vulnerable to loss of positive definiteness.

  6. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
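
    The underlying recursive least squares update (without the residual-autocorrelation correction described above) can be sketched as follows; the simulated regression and noise levels are illustrative only:

```python
import numpy as np

def rls_step(theta, P, x, y):
    """One recursive least-squares update for the model y ~ x @ theta."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # parameter update
    P = P - np.outer(k, Px)              # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta = np.zeros(2)
P = 1e3 * np.eye(2)                      # large initial uncertainty
for _ in range(200):
    x = rng.standard_normal(2)
    y = x @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
# theta is now close to [2.0, -1.0]
```

    The paper's contribution is the recursive bookkeeping of the residual autocorrelation so that the uncertainty implied by P is corrected for colored noise; the recursion above assumes white residuals.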

  7. Online state of charge and model parameters estimation of the LiFePO4 battery in electric vehicles using multiple adaptive forgetting factors recursive least-squares

    NASA Astrophysics Data System (ADS)

    Duong, Van-Huan; Bastawrous, Hany Ayad; Lim, KaiChin; See, Khay Wai; Zhang, Peng; Dou, Shi Xue

    2015-11-01

    This paper deals with the contradiction between simplicity and accuracy of LiFePO4 battery state estimation in the electric vehicle (EV) battery management system (BMS). State of charge (SOC) and state of health (SOH) are normally obtained by estimating the open circuit voltage (OCV) and the internal resistance of the equivalent electrical circuit model of the battery, respectively. The difficulties of parameter estimation arise from their complicated variations and different dynamics, which require sophisticated algorithms to simultaneously estimate multiple parameters; this, however, demands heavy computational resources. In this paper, we propose a novel technique which employs a simplified model and multiple adaptive forgetting factors recursive least-squares (MAFF-RLS) estimation to accurately capture the real-time variations and the different dynamics of the parameters while retaining computational simplicity. The validity of the proposed method is verified through two standard driving cycles, namely the Urban Dynamometer Driving Schedule and the New European Driving Cycle. The proposed method not only estimated the SOC with an absolute error of less than 2.8% but also characterized the battery model parameters accurately.
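
    A single-forgetting-factor RLS update, a simplified cousin of the multiple adaptive forgetting factors scheme proposed here, can be sketched as follows (illustrative scalar example, not the battery model):

```python
import numpy as np

def rls_forget(theta, P, x, y, lam=0.98):
    """RLS update with exponential forgetting factor lam < 1, which
    discounts old data so the estimate can track drifting parameters."""
    Px = P @ x
    k = Px / (lam + x @ Px)
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam
    return theta, P

# Track a parameter that jumps halfway through the data stream.
rng = np.random.default_rng(1)
theta = np.zeros(1)
P = 100.0 * np.eye(1)
for t in range(400):
    true_theta = 1.0 if t < 200 else 3.0
    x = np.array([1.0])
    y = true_theta * x[0] + 0.01 * rng.standard_normal()
    theta, P = rls_forget(theta, P, x, y)
# theta has re-converged to roughly 3.0 after the jump
```

    Using one adaptive forgetting factor per parameter, as in MAFF-RLS, lets fast dynamics (e.g. polarization voltage) and slow dynamics (e.g. OCV) be discounted at different rates.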

  8. Recursive Bayesian electromagnetic refractivity estimation from radar sea clutter

    NASA Astrophysics Data System (ADS)

    Vasudevan, Sathyanarayanan; Anderson, Richard H.; Kraut, Shawn; Gerstoft, Peter; Rogers, L. Ted; Krolik, Jeffrey L.

    2007-04-01

    Estimation of the range- and height-dependent index of refraction over the sea surface facilitates prediction of ducted microwave propagation loss. In this paper, refractivity estimation from radar clutter returns is performed using a Markov state space model for microwave propagation. Specifically, the parabolic approximation for numerical solution of the wave equation is used to formulate the refractivity from clutter (RFC) problem within a nonlinear recursive Bayesian state estimation framework. RFC under this nonlinear state space formulation is more efficient than global fitting of refractivity parameters when the total number of range-varying parameters exceeds the number of basis functions required to represent the height-dependent field at a given range. Moreover, the range-recursive nature of the estimator can be easily adapted to situations where the refractivity modeling changes at discrete ranges, such as at a shoreline. A fast range-recursive solution for obtaining range-varying refractivity is achieved by using sequential importance sampling extensions to state estimation techniques, namely, the forward and Viterbi algorithms. Simulation and real data results from radar clutter collected off Wallops Island, Virginia, are presented which demonstrate the ability of this method to produce propagation loss estimates that compare favorably with ground truth refractivity measurements.

  9. Recursive estimation of prior probabilities using the mixture approach

    NASA Technical Reports Server (NTRS)

    Kazakos, D.

    1974-01-01

    The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that, for very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, it is possible for the recursive algorithm to achieve the same performance as the maximum likelihood one. For M > 2, slightly inferior performance is the price for having a recursive algorithm. However, the loss is computable and tolerable.
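
    A minimal stochastic-approximation version of such a recursive prior update, assuming two known component densities, might look like this (an illustrative sketch, not the paper's optimized algorithm):

```python
import numpy as np

def update_prior(q, x, n, f0, f1):
    """Stochastic-approximation update of the prior q = P(component 1)."""
    # Posterior responsibility of component 1 for observation x.
    p1 = q * f1(x)
    r = p1 / (p1 + (1.0 - q) * f0(x))
    # Decreasing gain 1/(n+1) averages the responsibilities over time.
    return q + (r - q) / (n + 1)

# Two known Gaussian-shaped densities with unknown mixing proportion 0.7.
f0 = lambda x: np.exp(-0.5 * (x + 2.0) ** 2)
f1 = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)
rng = np.random.default_rng(2)
q = 0.5
for n in range(5000):
    comp = rng.random() < 0.7
    x = rng.normal(2.0 if comp else -2.0, 1.0)
    q = update_prior(q, x, n, f0, f1)
# q is now close to the true mixing proportion 0.7
```

    The shared normalization constant of f0 and f1 cancels in the responsibility, so unnormalized densities suffice here.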

  10. Recursive bias estimation for high dimensional smoothers

    SciTech Connect

    Hengartner, Nicolas W; Matzner-lober, Eric; Cornillon, Pierre - Andre

    2008-01-01

    In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the latter obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. We apply our method to simulated and real data and show that our method compares favorably with existing procedures.
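
    The iterative bias correction can be sketched with a simple kernel smoother: over-smooth first, then repeatedly smooth the residuals and add the result back (illustrative bandwidth and test function, not the paper's experiments):

```python
import numpy as np

def smooth(y, h):
    """Gaussian-kernel (Nadaraya-Watson) smoother on an equispaced grid."""
    t = np.linspace(0.0, 1.0, len(y))
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return (K / K.sum(axis=1, keepdims=True)) @ y

t = np.linspace(0.0, 1.0, 200)
y = np.sin(4 * np.pi * t)            # target signal
fit0 = smooth(y, h=0.05)             # deliberately over-smoothed: biased
fit = fit0.copy()
for _ in range(20):                  # recursive bias correction
    fit = fit + smooth(y - fit, h=0.05)
# the corrected fit recovers structure the initial smoother flattened
```

    For a linear smoother S, the iterate after m corrections is (I - (I - S)^(m+1)) y, which is exactly the L2-Boosting connection discussed above.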

  11. Vision-based recursive estimation of rotorcraft obstacle locations

    NASA Technical Reports Server (NTRS)

    Leblanc, D. J.; Mcclamroch, N. H.

    1992-01-01

    The authors address vision-based passive ranging during nap-of-the-earth (NOE) rotorcraft flight. They consider the problem of estimating the relative location of identifiable features on nearby obstacles, assuming a sequence of noisy camera images and imperfect measurements of the camera's translation and rotation. An iterated extended Kalman filter is used to provide recursive range estimation. The correspondence problem is simplified by predicting and tracking each feature's image within the Kalman filter framework. Simulation results are presented which show convergent estimates and generally successful feature point tracking. Estimation performance degrades for features near the optical axis and for accelerating motions. Image tracking is also sensitive to angular rate.

  12. A Precision Recursive Estimate for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B.

    1980-01-01

    A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite to satellite tracking types as well as satellite altimetry. It was tested on simulated data which contained significant modeling errors and the results clearly demonstrate the superiority of the program compared to batch estimation.

  13. Recursive least squares approach to calculate motion parameters for a moving camera

    NASA Astrophysics Data System (ADS)

    Chang, Samuel H.; Fuller, Joseph; Farsaie, Ali; Elkins, Les

    2003-10-01

    The increase in quality and the decrease in price of digital camera equipment have led to growing interest in reconstructing 3-dimensional objects from sequences of 2-dimensional images. The accuracy of the models obtained depends on two sets of parameter estimates. The first is the set of lens parameters - focal length, principal point, and distortion parameters. The second is the set of motion parameters that allows the comparison of a moving camera's desired location to a theoretical location. In this paper, we address the latter problem, i.e. the estimation of the set of 3-D motion parameters from data obtained with a moving camera. We propose a method that uses Recursive Least Squares for camera motion parameter estimation with observation noise. We accomplish this by calculation of hidden information through camera projection and minimization of the estimation error. We then show how a filter based on the motion parameter estimates may be designed to correct for the errors in the camera motion. The validity of the approach is illustrated by the presentation of experimental results obtained using the methods described in the paper.

  14. Round-off error propagation in four generally applicable, recursive, least-squares-estimation schemes

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practical, interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.

  15. Recursive starlight and bias estimation for high-contrast imaging with an extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.

    2016-01-01

    For imaging faint exoplanets and disks, a coronagraph-equipped observatory needs focal plane wavefront correction to recover high contrast. The most efficient correction methods iteratively estimate the stellar electric field and suppress it with active optics. The estimation requires several images from the science camera per iteration. To maximize the science yield, it is desirable both to have fast wavefront correction and to utilize all the correction images for science target detection. Exoplanets and disks are incoherent with their stars, so a nonlinear estimator is required to estimate both the incoherent intensity and the stellar electric field. Such techniques assume a high level of stability found only on space-based observatories and possibly ground-based telescopes with extreme adaptive optics. In this paper, we implement a nonlinear estimator, the iterated extended Kalman filter (IEKF), to enable fast wavefront correction and a recursive, nearly-optimal estimate of the incoherent light. In Princeton's High Contrast Imaging Laboratory, we demonstrate that the IEKF allows wavefront correction at least as fast as with a Kalman filter and provides the most accurate detection of a faint companion. The nonlinear IEKF formalism allows us to pursue other strategies such as parameter estimation to improve wavefront correction.

  16. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2009-12-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. Torque control parameters (KP: proportional gain, KD: derivative gain) and pole placements of the postural control system are estimated over time from inclination angle variation using the fixed trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times over 10 s stationary intervals with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.

  18. Recursive phase estimation with a spatial radial carrier

    NASA Astrophysics Data System (ADS)

    Garcia-Marquez, Jorge; Servin Guirado, Manuel; Paez, Gonzalo; Malacara-Hernandez, Daniel

    1999-08-01

    An interferogram can be demodulated to find the wavefront shape if a radial carrier is introduced. The phase determination is made in the space domain, but the low-pass filter characteristics must be properly chosen. One disadvantage of this method is the possible removal of some frequencies from the central lobe, resulting in a misinterpretation of the true phase. Nevertheless, it is possible to isolate the central order by using a recursive method when a radial carrier reference is used. An example of a phase recovered from a simulated interferogram is shown.

  19. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.

  20. Recursive bias estimation for high dimensional regression smoothers

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

    In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the latter obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L2-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to H(ν) (a Sobolev space, where ν should be bigger than d/2). We apply our method to simulated and real data and show that our method compares favorably with existing procedures.

  1. Parameter estimating state reconstruction

    NASA Technical Reports Server (NTRS)

    George, E. B.

    1976-01-01

    Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.

  2. Attitude estimation of earth orbiting satellites by decomposed linear recursive filters

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1975-01-01

    Attitude estimation of earth orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noises was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time and discrete-time systems are derived. Using this accurate estimate of spacecraft attitude, a state variable feedback controller may be designed to satisfy high system performance requirements.

  3. The recursive maximum likelihood proportion estimator: User's guide and test results

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Implementation of the recursive maximum likelihood proportion estimator is described. A user's guide to programs as they currently exist on the IBM 360/67 at LARS, Purdue is included, and test results on LANDSAT data are described. On Hill County data, the algorithm yields results comparable to the standard maximum likelihood proportion estimator.

  4. Recursive camera-motion estimation with the trifocal tensor.

    PubMed

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen; Or, Siu Hang

    2006-10-01

    In this paper, an innovative extended Kalman filter (EKF) algorithm for pose tracking using the trifocal tensor is proposed. In the EKF, a constant-velocity motion model is used as the dynamic system, and the trifocal-tensor constraint is incorporated into the measurement model. The proposed method has the advantages of those structure-and-motion-based approaches in that the pose sequence can be computed with no prior information on the scene structure. It also has the strengths of those model-based algorithms in which no updating of the three-dimensional (3-D) structure is necessary in the computation. This results in a stable, accurate, and efficient algorithm. Experimental results show that the proposed approach outperformed other existing EKFs that tackle the same problem. An extension to the pose-tracking algorithm has been made to demonstrate the application of the trifocal constraint to fast recursive 3-D structure recovery.

  5. A recursive delayed output-feedback control to stabilize chaotic systems using linear-in-parameter neural networks

    NASA Astrophysics Data System (ADS)

    Yadmellat, Peyman; Nikravesh, S. Kamaleddin Yadavar

    2011-01-01

    In this paper, a recursive delayed output-feedback control strategy is considered for stabilizing unstable periodic orbit of unknown nonlinear chaotic systems. An unknown nonlinearity is directly estimated by a linear-in-parameter neural network which is then used in an observer structure. An on-line modified back propagation algorithm with e-modification is used to update the weights of the network. The globally uniformly ultimately boundedness of overall closed-loop system response is analytically ensured using Razumikhin lemma. To verify the effectiveness of the proposed observer-based controller, a set of simulations is performed on a Rossler system in comparison with several previous methods.

  6. Prior estimation of motion using recursive perceptron with sEMG: a case of wrist angle.

    PubMed

    Kuroda, Yoshihiro; Tanaka, Takeshi; Imura, Masataka; Oshiro, Osamu

    2012-01-01

    Muscle activity is followed by myoelectric potentials. Prior estimation of motion by surface electromyography can be utilized to assist physically impaired people as well as surgeons. In this paper, we propose a real-time method for the prior estimation of motion from surface electromyography, in the case of wrist angle. The method is based on the recursive processing of a multi-layer perceptron, which is trained quickly. A single-layer perceptron calculates quasi-tensional force of muscles from surface electromyography. A three-layer perceptron calculates the wrist's change in angle. In order to estimate a variety of motions properly, the perceptron was designed to estimate motion over a short time period, e.g. 1 ms. Recursive processing enables the method to estimate motion over the target time period, e.g. 50 ms. The results of the experiments showed statistical significance for the precedence of the estimated angle over the measured one.

  7. Recursive identification and tracking of parameters for linear and nonlinear multivariable systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1975-01-01

    The problem of identifying constant and variable parameters in multi-input, multi-output, linear and nonlinear systems is considered, using the maximum likelihood approach. An iterative algorithm, leading to recursive identification and tracking of the unknown parameters and the noise covariance matrix, is developed. Agile tracking, and accurate and unbiased identified parameters are obtained. Necessary conditions for a globally, asymptotically stable identification process are provided; the conditions proved to be useful and efficient. Among different cases studied, the stability derivatives of an aircraft were identified and some of the results are shown as examples.

  8. Phenological Parameters Estimation Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.

  9. Fault detection in an air-handling unit using residual and recursive parameter identification methods

    SciTech Connect

    Lee, W.Y.; Park, C.; Kelly, G.E.

    1996-11-01

    A scheme for detecting faults in an air-handling unit using residual and parameter identification methods is presented. Faults can be detected by comparing the normal or expected operating condition data with the abnormal, measured data using residuals. Faults can also be detected by examining unmeasurable parameter changes in a model of a controlled system using a system parameter identification technique. In this study, autoregressive moving average with exogenous input (ARMAX) and autoregressive with exogenous input (ARX) models with both single-input/single-output (SISO) and multi-input/single-output (MISO) structures are examined. Model parameters are determined using the Kalman filter recursive identification method. This approach is tested using experimental data from a laboratory's variable-air-volume (VAV) air-handling unit operated with and without faults.
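As a rough illustration of the residual approach (not the paper's VAV test data or its Kalman-filter identification), the sketch below fits an ARX(1,1) model to a healthy simulated SISO plant by ordinary least squares and flags a fault when one-step-ahead residuals grow; all plant values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated SISO plant y[t] = a*y[t-1] + b*u[t-1] + noise, with a "fault"
# (20% loss of gain b) introduced halfway through the record.
n, a, b = 400, 0.9, 0.5
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    b_t = b if t < n // 2 else 0.8 * b
    y[t] = a * y[t - 1] + b_t * u[t - 1] + 0.01 * rng.standard_normal()

# Fit an ARX(1,1) model by least squares on the healthy first half.
Phi = np.column_stack([y[:n // 2 - 1], u[:n // 2 - 1]])
coef, *_ = np.linalg.lstsq(Phi, y[1:n // 2], rcond=None)

# Residual test: flag samples whose one-step prediction error is large.
resid = y[1:] - (coef[0] * y[:-1] + coef[1] * u[:-1])
sigma = resid[:n // 2 - 1].std()           # noise level on healthy data
fault = np.abs(resid) > 4 * sigma
print(fault[:n // 2 - 1].mean(), fault[n // 2:].mean())  # few vs many alarms
```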

  10. Recursive Estimation of the Stein Center of SPD Matrices & its Applications.

    PubMed

    Salehian, Hesamoddin; Cheng, Guang; Vemuri, Baba C; Ho, Jeffrey

    2013-12-01

    Symmetric positive-definite (SPD) matrices are ubiquitous in Computer Vision, Machine Learning and Medical Image Analysis. Finding the center/average of a population of such matrices is a common theme in many algorithms such as clustering, segmentation, principal geodesic analysis, etc. The center of a population of such matrices can be defined using a variety of distance/divergence measures as the minimizer of the sum of squared distances/divergences from the unknown center to the members of the population. It is well known that the computation of the Karcher mean for the space of SPD matrices, a negatively curved Riemannian manifold, is computationally expensive. Recently, the LogDet divergence-based center was shown to be a computationally attractive alternative. However, the LogDet-based mean of more than two matrices cannot be computed in closed form, which makes it computationally less attractive for large populations. In this paper we present a novel recursive estimator for the center based on the Stein distance (the square root of the LogDet divergence) that is significantly faster than the batch mode computation of this center. The key theoretical contribution is a closed-form solution for the weighted Stein center of two SPD matrices, which is used in the recursive computation of the Stein center for a population of SPD matrices. Additionally, we show experimental evidence of the convergence of our recursive Stein center estimator to the batch mode Stein center. We present applications of our recursive estimator to K-means clustering and image indexing, showing significant time gains over corresponding algorithms that use batch mode computations. For the latter application, we develop novel hashing functions using the Stein distance and apply them to publicly available data sets; experimental results show favorable comparisons to other competing methods.

  11. Recursive state estimation for discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong

    2016-07-01

    This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity considered is described by statistical means and covers several classes of well-studied nonlinearities as special cases. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The aim of the addressed state estimation problem is to design an estimator that minimizes an upper bound on the estimation error covariance at each sampling instant; this bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example demonstrates the effectiveness of the proposed scheme.

  12. Parameter estimation of hydrologic models using data assimilation

    NASA Astrophysics Data System (ADS)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper describes a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation on two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set in an iterative fashion, by narrowing the sampling space through uncertainty bounds imposed on the posterior best parameter set and/or by updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimal training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with that of the previously used Bayesian recursive estimation (BaRE) algorithm.
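The recursive Bayesian idea underlying BaRE-style methods can be conveyed with a simple grid-based update for a single parameter. This is a toy sketch, not the LoBaRE algorithm or the SAC-SMA/SVM models; the parameter, grid, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown parameter: the mean of noisy observations (noise sd known).
true_mu, noise_sd = 1.3, 0.5
grid = np.linspace(-3.0, 3.0, 601)               # candidate parameter values
posterior = np.full(grid.size, 1.0 / grid.size)  # flat prior

for _ in range(200):
    y = true_mu + noise_sd * rng.standard_normal()
    likelihood = np.exp(-0.5 * ((y - grid) / noise_sd) ** 2)
    posterior *= likelihood                      # recursive Bayes update
    posterior /= posterior.sum()                 # renormalize each step

estimate = grid[posterior.argmax()]
print(estimate)  # near 1.3
```

Each observation is assimilated as it arrives, so the posterior sharpens on-line rather than in one batch fit.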

  13. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.

  14. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  15. Recursive Parameter Identification for Estimating and Displaying Maneuvering Vessel Path

    DTIC Science & Technology

    2003-12-01

    display system (ECDIS) capabilities on naval vessels in an effort to eliminate paper charts and reduce bridge team manpower requirements. Due to unique...Sensor System Interface (NAVSSI) Diagram 2 Although a major improvement over paper charting, NAVSSI also serves as a technology inroad for implementing...identify the dynamic model based on control inputs and observed response. The NAVSSI system is well suited for this function because it already serves

  16. A recursive regularization algorithm for estimating the particle size distribution from multiangle dynamic light scattering measurements

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yang, Kecheng; Li, Wei; Wang, Wanyan; Guo, Wenping; Xia, Min

    2016-07-01

    Conventional regularization methods have been widely used for estimating the particle size distribution (PSD) in single-angle dynamic light scattering, but they cannot be used directly in multiangle dynamic light scattering (MDLS) measurements for lack of accurate angular weighting coefficients, which greatly affect the PSD determination; moreover, none of the regularization methods performs well for both unimodal and multimodal distributions. In this paper, we propose a recursive regularization method, the Recursion Nonnegative Tikhonov-Phillips-Twomey (RNNT-PT) algorithm, for estimating the weighting coefficients and PSD from MDLS data. This is a self-adaptive algorithm that distinguishes the characteristics of PSDs and chooses the optimal inversion method from the Nonnegative Tikhonov (NNT) and Nonnegative Phillips-Twomey (NNPT) regularization algorithms efficiently and automatically. In simulations, the proposed algorithm estimated the PSDs more accurately than the classical regularization methods, performed stably against random noise, and proved adaptable to both unimodal and multimodal distributions. Furthermore, we found that a six-angle analysis in the 30-130° range is an optimal angle set for both unimodal and multimodal PSDs.
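For background, plain (unconstrained) Tikhonov regularization, an ingredient of the NNT variant named above, can be sketched as follows. This is a generic illustration on synthetic data, not the RNNT-PT algorithm, and it omits the nonnegativity constraint and the recursion.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: argmin ||A x - b||^2 + lam * ||x||^2,
    solved via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(3)
# Ill-conditioned inverse problem: column scales decay rapidly.
A = rng.standard_normal((50, 20)) * (1.0 / np.arange(1, 21) ** 3)
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(50)

x_reg = tikhonov(A, b, 1e-4)
print(np.linalg.norm(A @ x_reg - b))  # residual stays near the noise level
```

The regularization weight `lam` trades data fit against solution norm; choosing it well is the hard part that methods like RNNT-PT automate.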

  17. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights

    PubMed Central

    Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal

    2015-01-01

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
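A basic scale-up point estimate, the ratio-of-sums form that the paper's improved estimator builds on, can be sketched as follows; this omits the sample weights and recursive trimming the paper proposes, and all numbers are invented.

```python
# Basic network scale-up point estimate: each respondent reports how many
# people they know in total (c_i) and how many of those belong to the
# hidden population (m_i); scaling the aggregate ratio by the total
# population size N gives the estimate.
def nsum_estimate(m, c, population_size):
    return population_size * sum(m) / sum(c)

m = [2, 0, 1, 3, 0]            # reported hidden-population contacts
c = [300, 250, 400, 500, 350]  # reported personal network sizes
print(nsum_estimate(m, c, 1_900_000))  # about 6,333
```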

  18. Improved Estimates of Thermodynamic Parameters

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  19. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
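The recursive Fourier transform idea, updating transform values at a few fixed frequencies one sample at a time instead of recomputing a full transform over the growing record, can be sketched as below. This is a generic illustration, not the flight implementation; the frequencies and test signal are invented.

```python
import numpy as np

# Running (recursive) Fourier transform at fixed analysis frequencies.
freqs = np.array([0.5, 1.0, 2.0])        # analysis frequencies, Hz
dt = 0.01                                # sample interval, s
X = np.zeros(len(freqs), dtype=complex)  # running transform values

t = np.arange(0.0, 10.0, dt)
x = np.sin(2 * np.pi * 1.0 * t)          # test signal: 1 Hz sinusoid
for ti, xi in zip(t, x):
    X += xi * np.exp(-2j * np.pi * freqs * ti) * dt   # one-sample update

print(np.abs(X))  # energy concentrated in the 1 Hz bin
```

Because each new sample only adds one term per frequency, the cost per time step is constant, which is what makes the frequency-domain equation-error method practical in real time.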

  20. Target parameter estimation

    NASA Technical Reports Server (NTRS)

    Hocking, W. K.

    1989-01-01

    The objective of any radar experiment is to determine as much as possible about the entities which scatter the radiation. This review discusses many of the various parameters which can be deduced in a radar experiment, and also critically examines the procedures used to deduce them. Methods for determining the mean wind velocity, the RMS fluctuating velocities, turbulence parameters, and the shapes of the scatterers are considered. Complications with these determinations are discussed. It is seen throughout that a detailed understanding of the shape and cause of the scatterers is important in order to make better determinations of these various quantities. Finally, some other parameters, which are less easily acquired, are considered. For example, it is noted that momentum fluxes due to buoyancy waves and turbulence can be determined, and on occasions radars can be used to determine stratospheric diffusion coefficients and even temperature profiles in the atmosphere.

  1. Recursive Bayesian filtering framework for lithium-ion cell state estimation

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Gambhire, Priya; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-02-01

    A robust battery management system is critical for safe and reliable electric vehicle operation. One of the most important functions of the battery management system is to accurately estimate the battery state using minimal on-board instrumentation. This paper presents a recursive Bayesian filtering framework for on-board battery state estimation by assimilating measurables like cell voltage, current and temperature with physics-based reduced order model (ROM) predictions. The paper proposes an improved particle filtering algorithm for implementation of the framework, and compares its performance against the unscented Kalman filter. Functionality of the proposed framework is demonstrated for a commercial NCA/C cell state estimation at different operating conditions, including constant current discharge at room and low temperatures, hybrid power pulse characterization (HPPC) and urban driving schedule (UDDS) protocols. In addition to accurate voltage prediction, the electrochemical nature of the ROM enables drawing physical insights into the cell behavior. Advantages of using electrode concentrations over conventional Coulomb counting for accessible capacity estimation are discussed. In addition to the mean state estimate, the framework also provides the associated confidence bounds, which are used to establish the predictive capability of the proposed framework.
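A minimal bootstrap particle filter on a toy scalar state-space model conveys the recursive Bayesian filtering framework; this is a generic sketch, not the paper's improved particle filter or its electrochemical ROM, and all model values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy scalar state-space model: x_t = 0.95 x_{t-1} + w_t,  y_t = x_t + v_t.
n_steps, n_part = 100, 1000
q_sd, r_sd = 0.1, 0.5

x_true = 0.0
particles = rng.standard_normal(n_part)     # initial particle cloud
truths, estimates = [], []
for _ in range(n_steps):
    x_true = 0.95 * x_true + q_sd * rng.standard_normal()
    y = x_true + r_sd * rng.standard_normal()

    particles = 0.95 * particles + q_sd * rng.standard_normal(n_part)  # propagate
    w = np.exp(-0.5 * ((y - particles) / r_sd) ** 2)                   # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(n_part, size=n_part, p=w)]        # resample
    truths.append(x_true)
    estimates.append(particles.mean())                                 # posterior mean

rmse = np.sqrt(np.mean((np.array(truths) - np.array(estimates)) ** 2))
print(rmse)  # well below the raw measurement noise of 0.5
```

The spread of the resampled particle cloud at each step also yields the confidence bounds the abstract mentions.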

  2. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.

  3. Scale-recursive estimation for merging precipitation data from radar and microwave cross-track scanners

    NASA Astrophysics Data System (ADS)

    van de Vyver, H.; Roulin, E.

    2009-04-01

    This paper presents an application of scale-recursive estimation (SRE) used to assimilate rainfall rates within a storm, estimated from the data of two remote sensing devices. These are a ground-based weather radar and a spaceborne microwave cross-track scanner. The rain rate products corresponding to the latter were provided by the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management. In our approach, we operate directly on the data so that it is not necessary to consider a predefined multiscale model structure. We introduce a simple and computationally efficient procedure to model the variability of the rain rate process in scales. The measurement noise of the radar is estimated by comparing a large number of data sets with rain gauge data. The noise in the microwave measurements is roughly estimated by using upscaled radar data as a reference. Special emphasis is placed on the specification of the multiscale structure of precipitation under sparse or noisy data. The new methodology is compared with the latest SRE method for data fusion of multisensor precipitation estimates. Applications to the Belgian region show the relevance of the new methodology.

  4. Parameter estimation in food science.

    PubMed

    Dolan, Kirk D; Mishra, Dharmendra K

    2013-01-01

    Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem, computing y(t) given known parameters, has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem, estimation of parameters given measured y(t), is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
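Scaled sensitivity coefficients, X_j = p_j * dy/dp_j, can be approximated by central finite differences. The sketch below uses a hypothetical first-order inactivation model y = y0 * exp(-k t) with illustrative parameter values; it is a generic demonstration of the concept, not the article's procedure.

```python
import numpy as np

def model(t, params):
    """Hypothetical first-order microbial inactivation: y = y0 * exp(-k t)."""
    k, y0 = params
    return y0 * np.exp(-k * t)

def scaled_sensitivities(t, params, rel_step=1e-6):
    """Scaled sensitivity coefficients X_j = p_j * dy/dp_j via central differences."""
    base = np.asarray(params, dtype=float)
    out = []
    for j, p in enumerate(base):
        hi, lo = base.copy(), base.copy()
        h = rel_step * p
        hi[j] += h
        lo[j] -= h
        out.append(p * (model(t, hi) - model(t, lo)) / (2 * h))
    return np.array(out)

t = np.linspace(0, 10, 6)
X = scaled_sensitivities(t, [0.3, 1e5])  # k = 0.3 per min, y0 = 1e5 CFU
print(X.round(1))
```

Because both rows of X carry the units of y, plotting them together reveals whether two parameters are correlated (near-proportional curves) and hence poorly identifiable.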

  5. Quantum estimation of unknown parameters

    NASA Astrophysics Data System (ADS)

    Martínez-Vargas, Esteban; Pineda, Carlos; Leyvraz, François; Barberis-Blostein, Pablo

    2017-01-01

    We discuss the problem of finding the best measurement strategy for estimating the value of a quantum system parameter. In general the optimum quantum measurement, in the sense that it maximizes the quantum Fisher information and hence allows one to minimize the estimation error, can only be determined if the value of the parameter is already known. A modification of the quantum Van Trees inequality, which gives a lower bound on the error in the estimation of a random parameter, is proposed. The suggested inequality allows us to assert if a particular quantum measurement, together with an appropriate estimator, is optimal. An adaptive strategy to estimate the value of a parameter, based on our modified inequality, is proposed.

  6. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    PubMed

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, applied recursively and preferring replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We test this novel recursive combination method, with median replacement, to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over a single application of the impulse rejection filter and removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We find the 22 ms value of SDNN and the 36 ms value of the SD2 Poincaré descriptor to be clinical indicators discriminating normal cases from CHF cases. The pre-processing is also useful in calculating the Lyapunov exponent, a nonlinear index: Lyapunov exponents computed after the proposed pre-processing follow the expected notion of less complex behaviour in diseased states.
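A much-simplified stand-in for such pre-processing (a single pass replacing beats that deviate more than 20% from the record median, rather than the paper's recursive combination filter) shows why replacement of abnormal beats matters for SDNN; the RR values below are synthetic.

```python
import numpy as np

def clean_sdnn(rr, pct=0.2):
    """Replace beats deviating more than `pct` from the record median
    with the median, then return SDNN (std of the cleaned intervals)."""
    rr = np.asarray(rr, dtype=float)
    med = np.median(rr)
    cleaned = np.where(np.abs(rr - med) > pct * med, med, rr)
    return cleaned.std()

rng = np.random.default_rng(6)
rr = 800 + 30 * rng.standard_normal(300)   # normal RR intervals, ms
rr[::50] = 1600                            # inject ectopic-like outliers
print(clean_sdnn(rr))   # near the true 30 ms despite the outliers
print(np.std(rr))       # raw SDNN, badly inflated by the ectopic beats
```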

  7. On the structural limitations of recursive digital filters for base flow estimation

    NASA Astrophysics Data System (ADS)

    Su, Chun-Hsu; Costelloe, Justin F.; Peterson, Tim J.; Western, Andrew W.

    2016-06-01

    Recursive digital filters (RDFs) are widely used for estimating base flow from streamflow hydrographs, and various forms of RDFs have been developed based on different physical models. Numerical experiments have been used to objectively evaluate their performance, but they have not been sufficiently comprehensive to assess a wide range of RDFs. This paper extends these studies to understand the limitations of a generalized RDF method as a pathway for future field calibration. Two formalisms are presented to generalize most existing RDFs, allowing systematic tuning of their complexity. The RDFs with variable complexity are evaluated collectively in a synthetic setting, using modeled daily base flow produced by Li et al. (2014) from a range of synthetic catchments simulated with HydroGeoSphere. Our evaluation reveals that there are optimal RDF complexities in reproducing base flow simulations but shows that there is an inherent physical inconsistency within the RDF construction. Even under the idealized setting where true base flow data are available to calibrate the RDFs, there is persistent disagreement between true and estimated base flow over catchments with small base flow components, low saturated hydraulic conductivity of the soil and larger surface runoff. The simplest explanation is that low base flow "signal" in the streamflow data is hard to distinguish, although more complex RDFs can improve upon the simpler Eckhardt filter at these catchments.
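The Eckhardt filter mentioned above is a standard two-parameter RDF. A sketch with illustrative parameter values (a = 0.98, BFImax = 0.8) and an invented streamflow record:

```python
import numpy as np

def eckhardt_baseflow(q, a=0.98, bfi_max=0.8):
    """Two-parameter Eckhardt recursive digital filter for base flow.
    a: recession constant; bfi_max: maximum base flow index."""
    b = np.empty_like(q, dtype=float)
    b[0] = q[0] * bfi_max                 # simple initialization
    for t in range(1, len(q)):
        b[t] = ((1 - bfi_max) * a * b[t - 1]
                + (1 - a) * bfi_max * q[t]) / (1 - a * bfi_max)
        b[t] = min(b[t], q[t])            # base flow cannot exceed streamflow
    return b

q = np.array([5.0, 20.0, 60.0, 35.0, 18.0, 10.0, 7.0, 6.0, 5.5, 5.2])
b = eckhardt_baseflow(q)
print(b)  # smooth, slowly varying component beneath the storm peak
```

As the abstract notes, when the true base flow fraction is small this recursive structure has little "signal" to lock onto, regardless of how a and BFImax are calibrated.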

  8. Use of Scale Recursive Estimation for assimilation of precipitation data from TRMM (PR and TMI) and NEXRAD

    NASA Astrophysics Data System (ADS)

    Bocchiola, D.

    2007-11-01

    The paper shows an application of Scale Recursive Estimation (SRE) used to assimilate rainfall rates estimated during a storm event from three remote sensing devices: the TMI radiometer and the PR radar, carried on board the TRMM satellite, and the KNQA Memphis Weather Surveillance radar, belonging to the NEXRAD network, each providing rain rate estimates at a different spatial scale. The variability of the rain rate process across scales is modeled as a multiplicative random cascade, including spatial intermittence. The observational noise in the estimates is modeled according to a multiplicative error. System estimation, including process and observational noise, is carried out using maximum likelihood estimation implemented by a scale-recursive Expectation Maximization (EM) algorithm. As a result, new rainfall rate estimates are obtained that feature decreased estimation error as compared to those coming from each device alone. The performance of the SRE-EM approach is compared with that of the latest methods proposed for data fusion of multisensor estimates. The proposed approach improves on the current methods adopted for SRE and provides an alternative for data fusion in the field of precipitation.

  9. User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.

    1982-01-01

    PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.

  10. A landscape-based cluster analysis using recursive search instead of a threshold parameter.

    PubMed

    Gladwin, Thomas E; Vink, Matthijs; Mars, Roger B

    2016-01-01

    Cluster-based analysis methods in neuroimaging provide control of whole-brain false positive rates without the need to conservatively correct for the number of voxels and the associated false negative results. The current method defines clusters based purely on shapes in the landscape of activation, instead of requiring the choice of a statistical threshold that may strongly affect results. Statistical significance is determined using permutation testing, combining both size and height of activation. A method is proposed for dealing with relatively small local peaks. Simulations confirm the method controls the false positive rate and correctly identifies regions of activation. The method is also illustrated using real data.
    • A landscape-based method to define clusters in neuroimaging data avoids the need to pre-specify a threshold to define clusters.
    • The implementation of the method works as expected, based on simulated and real data.
    • The recursive method used for defining clusters, the method used for combining clusters, and the definition of the "value" of a cluster may be of interest for future variations.

  11. An Empirical Comparison between Two Recursive Filters for Attitude and Rate Estimation of Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    2006-01-01

    The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.

  12. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  13. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
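Joint state-and-parameter estimation with a Kalman-type recursive estimator can be sketched on a toy linear system with one unknown gain. This is a simplified stand-in for the extended Kalman filter approach described above, not the paper's biological models; the plant, noise levels, and tuning values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n, b_true = 300, 0.7

# Truth: x_{t+1} = 0.9 x_t + b u_t; the gain b is the unknown parameter.
x = 0.0
z = np.array([0.0, 0.0])          # augmented state [x, b]
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-8])         # tiny process noise keeps the filter alive
R = 0.1 ** 2                      # measurement noise variance

for _ in range(n):
    u = rng.standard_normal()
    x = 0.9 * x + b_true * u
    y = x + 0.1 * rng.standard_normal()

    # Predict: f([x, b]) = [0.9 x + b u, b], with Jacobian F.
    F = np.array([[0.9, u], [0.0, 1.0]])
    z = np.array([0.9 * z[0] + z[1] * u, z[1]])
    P = F @ P @ F.T + Q

    # Update with scalar measurement y = x + v (H = [1, 0]).
    H = np.array([1.0, 0.0])
    S = H @ P @ H + R
    K = P @ H / S
    z = z + K * (y - z[0])
    P = P - np.outer(K, H @ P)

print(z[1])  # near the true parameter 0.7
```

Augmenting the state with the parameter and filtering recursively is the same device the paper uses to obtain its first-guess parameter estimates.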

  14. Performance of signal-to-noise ratio estimation for scanning electron microscope using autocorrelation Levinson-Durbin recursion model.

    PubMed

    Sim, K S; Lim, M S; Yeap, Z X

    2016-07-01

    A new technique to quantify the signal-to-noise ratio (SNR) value of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques, which are nearest neighbourhood, first-order linear interpolation and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation.
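    The Levinson-Durbin recursion at the heart of the proposed estimator solves a Toeplitz autocorrelation system in O(p²) operations. A minimal, generic sketch of the recursion (not the authors' SEM-specific code):

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz (Yule-Walker) system for linear-prediction
    coefficients given autocorrelation lags r[0..order].
    Returns (a, e): a[0] = 1 and e = final prediction-error power."""
    a = [1.0] + [0.0] * order
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient from the current prediction error
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / e
        a = [a[j] + k * a[i - j] for j in range(i + 1)] + a[i + 1:]
        e *= 1.0 - k * k
    return a, e

# AR(1) check: autocorrelation 0.8**|lag| implies a = [1, -0.8]
a, e = levinson_durbin([1.0, 0.8, 0.64], order=2)
```

    For a true AR(1) process the order-2 coefficient comes out (numerically) zero, which is how the recursion also signals the correct model order.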

  15. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N is the number of recursions, a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.
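    A minimal scalar sketch of the iterated-update idea that the IKF and RUF share: repeat the measurement update, relinearizing at the current iterate rather than only at the prediction. This illustrates the principle, not Zanetti's adaptable algorithm; the quadratic measurement and all values are assumptions for the example.

```python
def iterated_update(x_pred, P, y, h, h_jac, r, n_iter=10):
    """Scalar iterated Kalman measurement update: relinearize the
    nonlinear measurement h(.) at the current iterate. One pass
    (n_iter = 1) reproduces the plain EKF update."""
    x = x_pred
    for _ in range(n_iter):
        H = h_jac(x)
        K = P * H / (H * H * P + r)
        x = x_pred + K * (y - h(x) - H * (x_pred - x))
    P_upd = (1.0 - K * h_jac(x)) * P
    return x, P_upd

# Measure the square of the state: truth x = 2, prediction x = 1.5
x_one, _ = iterated_update(1.5, 1.0, 4.0, lambda x: x * x,
                           lambda x: 2.0 * x, 0.01, n_iter=1)
x_it, _ = iterated_update(1.5, 1.0, 4.0, lambda x: x * x,
                          lambda x: 2.0 * x, 0.01, n_iter=10)
```

    With several recursions the iterate settles near the maximum a posteriori estimate, closer to the truth than the single EKF pass.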

  16. Method for estimating solubility parameter

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Ingham, J. D.

    1973-01-01

    Semiempirical correlations have been developed between solubility parameters and refractive indices for series of model hydrocarbon compounds and organic polymers. Measurement of intermolecular forces is useful for assessment of material compatibility, glass-transition temperature, and transport properties.

  17. Recursion Mathematics.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1989-01-01

    Discusses the application of recursive methods to permutations of n objects and to the problem of making c cents in change using pennies and nickels when order is important. Presents a LOGO program for the examples. (YP)
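    Both examples can be written as one-line recursions; because "order is important," the change problem counts compositions rather than combinations. A Python sketch standing in for the article's LOGO program:

```python
def orderings(cents, coins=(1, 5)):
    """Count ordered ways (compositions) to make `cents` from the
    given coin values -- order matters, as in the article's example."""
    if cents == 0:
        return 1
    return sum(orderings(cents - c, coins) for c in coins if c <= cents)

def permutations(n):
    """Permutations of n objects by the recursion P(n) = n * P(n-1)."""
    return 1 if n == 0 else n * permutations(n - 1)
```

    For example, 6 cents can be made in three ordered ways: six pennies, penny-then-nickel, and nickel-then-penny.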

  18. Parameter estimation by genetic algorithms

    SciTech Connect

    Reese, G.M.

    1993-11-01

    Test/Analysis correlation, or structural identification, is a process of reconciling differences in the structural dynamic models constructed analytically (using the finite element (FE) method) and experimentally (from modal test). This is a methodology for assessing the reliability of the computational model, and is very important in building models of high integrity, which may be used as predictive tools in design. Both the analytic and experimental models evaluate the same quantities: the natural frequencies (or eigenvalues, ω_i) and the mode shapes (or eigenvectors, φ). In this paper, selected frequencies are reconciled in the two models by modifying physical parameters in the FE model. A variety of parameters may be modified such as the stiffness of a joint member or the thickness of a plate. Engineering judgement is required to identify important frequencies, and to characterize the uncertainty of the model design parameters.
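    A minimal, hypothetical sketch of the genetic-algorithm idea: evolve a single stiffness-like parameter so a one-degree-of-freedom model frequency sqrt(k/m) matches a "measured" frequency. The GA settings, search range, and target value are all assumptions for illustration, not the paper's implementation.

```python
import random

def ga_match_frequency(f_target, k_lo=50.0, k_hi=200.0,
                       pop=40, gens=60, seed=0):
    """Tune a stiffness parameter k so the model frequency sqrt(k/m)
    (with m = 1) matches a measured frequency, via a minimal genetic
    algorithm: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    fitness = lambda k: -abs(k ** 0.5 - f_target)      # negated error
    popn = [rng.uniform(k_lo, k_hi) for _ in range(pop)]
    for g in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 4]                     # keep the best quarter
        sigma = (k_hi - k_lo) * 0.1 * 0.95 ** g        # shrinking mutation
        popn = parents + [
            min(k_hi, max(k_lo, rng.choice(parents) + rng.gauss(0, sigma)))
            for _ in range(pop - len(parents))
        ]
    return max(popn, key=fitness)

k_best = ga_match_frequency(120.0 ** 0.5)   # "measured" frequency from k = 120
```

    Real test/analysis correlation would score many frequencies and parameters at once, but the select-mutate loop is the same.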

  19. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  20. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles

    PubMed Central

    Nam, Kanghyun

    2015-01-01

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least squares algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246
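    A minimal sketch of the recursive least squares (RLS) idea used here, reduced to one scalar parameter with a forgetting factor; the data and the slope value are hypothetical, not the paper's tire-force measurements.

```python
def rls(data, lam=0.98):
    """Scalar recursive least squares with forgetting factor `lam`,
    identifying theta in y = theta * x from streaming (x, y) pairs."""
    theta, P = 0.0, 1e6          # initial estimate and (large) covariance
    for x, y in data:
        K = P * x / (lam + x * x * P)      # gain
        theta += K * (y - x * theta)       # correct with the residual
        P = (P - K * x * P) / lam          # covariance update
    return theta

# Hypothetical example: identify a cornering-stiffness-like slope of 2.5
pairs = [(x / 10.0, 2.5 * (x / 10.0)) for x in range(1, 51)]
theta_hat = rls(pairs)
```

    The forgetting factor lam < 1 discounts old samples, which is what lets such estimators track a slowly varying parameter like cornering stiffness.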

  2. On recursion

    PubMed Central

    Watumull, Jeffrey; Hauser, Marc D.; Roberts, Ian G.; Hornstein, Norbert

    2014-01-01

    It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language—the faculty of language in the narrow sense (FLN)—is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of such output, whether infinite or finite, embedded or unembedded—existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled—and potentially unbounded—expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive. PMID

  3. Estimating random signal parameters from noisy images with nuisance parameters

    PubMed Central

    Whitaker, Meredith Kathryn; Clarkson, Eric; Barrett, Harrison H.

    2008-01-01

    In a pure estimation task, an object of interest is known to be present, and we wish to determine numerical values for parameters that describe the object. This paper compares the theoretical framework, implementation method, and performance of two estimation procedures. We examined the performance of these estimators for tasks such as estimating signal location, signal volume, signal amplitude, or any combination of these parameters. The signal is embedded in a random background to simulate the effect of nuisance parameters. First, we explore the classical Wiener estimator, which operates linearly on the data and minimizes the ensemble mean-squared error. The results of our performance tests indicate that the Wiener estimator can estimate amplitude and shape once a signal has been located, but is fundamentally unable to locate a signal regardless of the quality of the image. Given these new results on the fundamental limitations of Wiener estimation, we extend our methods to include more complex data processing. We introduce and evaluate a scanning-linear estimator that performs impressively for location estimation. The scanning action of the estimator refers to seeking a solution that maximizes a linear metric, thereby requiring a global-extremum search. The linear metric to be optimized can be derived as a special case of maximum a posteriori (MAP) estimation when the likelihood is Gaussian and a slowly varying covariance approximation is made. PMID:18545527

  4. Missing Data and IRT Item Parameter Estimation.

    ERIC Educational Resources Information Center

    DeMars, Christine

    The situation of nonrandomly missing data has theoretically different implications for item parameter estimation depending on whether joint maximum likelihood or marginal maximum likelihood methods are used in the estimation. The objective of this paper is to illustrate what potentially can happen, under these estimation procedures, when there is…

  5. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models that demand a greater number of cosmological parameters than the standard model of cosmology uses and make the problem of parameter estimation challenging. It is a common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
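    A minimal PSO sketch on a toy two-parameter "likelihood surface"; the swarm constants and the quadratic cost are illustrative assumptions, not the authors' CMB setup.

```python
import random

def pso(cost, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal 2-D particle swarm optimizer: each particle tracks its
    personal best, the swarm tracks a global best, and velocities mix
    inertia with pulls toward both (w = 0.7, c1 = c2 = 1.5)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Toy "likelihood surface": quadratic bowl centred on the true parameters
chi2 = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, best_cost = pso(chi2, -5.0, 5.0)
```

    Unlike MCMC, the swarm only seeks the best-fit point; it does not sample the posterior, which is exactly the trade-off the abstract alludes to.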

  6. Parameter Estimation using Numerical Merger Waveforms

    NASA Technical Reports Server (NTRS)

    Thorpe, J. I.; McWilliams, S.; Kelly, B.; Fahey, R.; Arnaud, K.; Baker, J.

    2008-01-01

    Results: Developed a parameter estimation model integrating complete waveforms and improved instrumental models. Initial results for equal-mass non-spinning systems indicate moderate improvement in most parameters and significant improvement in some. Near-term improvements: a) improved statistics; b) T-channel; c) larger parameter-space coverage. Combination with other results: a) higher harmonics; b) spin precession; c) instrumental effects.

  7. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
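    For contrast, the brute-force strategy the article seeks to avoid, repeatedly solving the PDE numerically for each candidate parameter value, can be sketched for a toy 1-D heat equation (all discretization choices and values are assumptions for the example):

```python
def heat_solve(D, n=20, steps=50, dt=1e-3):
    """Explicit finite-difference solution of u_t = D u_xx on [0, 1]
    with fixed ends and a peaked initial condition; dx = 1/(n-1)."""
    dx = 1.0 / (n - 1)
    u = [0.0] * n
    u[n // 2] = 1.0
    for _ in range(steps):
        u = ([0.0]
             + [u[i] + D * dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, n - 1)]
             + [0.0])
    return u

# "Data": a solve with the true diffusivity D = 0.1
data = heat_solve(0.1)

# Brute-force estimation: re-solve the PDE for every candidate D
candidates = [i / 100.0 for i in range(1, 31)]
sse = lambda D: sum((a - b) ** 2 for a, b in zip(heat_solve(D), data))
D_hat = min(candidates, key=sse)
```

    Even this tiny example needs 30 full PDE solves; the basis-expansion methods of the article exist precisely to avoid that cost at realistic scales.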

  8. Quantifying uncertainty in state and parameter estimation.

    PubMed

    Parlitz, Ulrich; Schumann-Bischoff, Jan; Luther, Stefan

    2014-05-01

    Observability of state variables and parameters of a dynamical system from an observed time series is analyzed and quantified by means of the Jacobian matrix of the delay coordinates map. For each state variable and each parameter to be estimated, a measure of uncertainty is introduced depending on the current state and parameter values, which allows us to identify regions in state and parameter space where the specific unknown quantity can(not) be estimated from a given time series. The method is demonstrated using the Ikeda map and the Hindmarsh-Rose model.

  9. Recursive estimators of mean-areal and local bias in precipitation products that account for conditional bias

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Seo, Dong-Jun

    2017-03-01

    This paper presents novel formulations of mean field bias (MFB) and local bias (LB) correction schemes that incorporate a conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating the CB penalty in the cost function of exponential smoothers, we are able to derive augmented versions of recursive estimators of MFB and LB. Two extended versions of MFB algorithms are presented, one incorporating spatial variation of gauge locations only (MFB-L), and the second integrating both gauge locations and CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas, and through a synthetic experiment over the Mid-Atlantic region. The outcome of the former experiment indicates that introducing the CB penalty to the MFB formulation leads to small, but consistent improvements in bias and CB, while its impacts on hourly correlation and Root Mean Square Error (RMSE) are mixed. Incorporating the CB penalty in the LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km²), and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme. This conservativeness arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix which leads to no, or smaller incremental changes to the smoothed rainfall amounts.
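    The base recursive estimator that the extended schemes build on can be sketched as an exponential smoother of gauge and radar accumulations, their ratio giving the current mean field bias (a generic illustration without the CB penalty; the forgetting factor and data are assumptions):

```python
def recursive_mfb(radar, gauge, lam=0.95, eps=1e-6):
    """Recursive mean-field-bias estimate: exponentially smoothed sums
    of gauge and radar amounts, their ratio giving the current bias."""
    num = den = 0.0
    history = []
    for r, g in zip(radar, gauge):
        num = lam * num + g            # smoothed gauge accumulation
        den = lam * den + r            # smoothed radar accumulation
        history.append(num / max(den, eps))
    return history

# Radar underestimating the gauges by a factor of 1.25 everywhere
radar = [2.0, 1.0, 3.0, 0.5, 2.5]
gauge = [2.5, 1.25, 3.75, 0.625, 3.125]
biases = recursive_mfb(radar, gauge)
```

    The paper's contribution is, in effect, to augment the cost function behind such a smoother with a conditional-bias penalty term.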

  10. Fast iterative optimal estimation of turbulence wavefronts with recursive block Toeplitz covariance matrix

    NASA Astrophysics Data System (ADS)

    Conan, Rodolphe

    2014-07-01

    The estimation of a corrugated wavefront after propagation through the atmosphere is usually solved optimally with a Minimum-Mean-Square-Error algorithm. The derivation of the optimal wavefront can be a very computing-intensive task, especially for large Adaptive Optics (AO) systems that operate in real time. For the largest AO systems, efficient optimal wavefront reconstructors have been proposed, either using sparse matrix techniques or relying on the fractal properties of the atmospheric wavefront. We propose a new method that exploits the Toeplitz structure in the covariance matrix of the wavefront gradient. The algorithm is particularly well suited to Shack-Hartmann wavefront sensor based AO systems. Thanks to the Toeplitz structure of the covariance, the matrices are compressed up to a thousand-fold and the matrix-to-vector product is reduced to a simple one-dimensional convolution product. The optimal wavefront is estimated iteratively with the MINRES algorithm, which exhibits better convergence properties for ill-conditioned matrices than the commonly used Conjugate Gradient algorithm. The paper describes, in a first part, the Toeplitz structure of the covariance matrices and shows how to compute the matrix-to-vector product using only the compressed version of the matrices. In a second part, we introduce the MINRES iterative solver and show how it performs compared to the Conjugate Gradient algorithm for different AO systems.
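    The key compression idea, storing only the 2n-1 generating values of a Toeplitz matrix and applying it as a one-dimensional convolution, can be sketched generically (not the paper's AO-scale implementation):

```python
def toeplitz_matvec(c, r, v):
    """Multiply a Toeplitz matrix (first column c, first row r,
    with c[0] == r[0]) by v using only its 2n-1 generating values:
    the product is a slice of a one-dimensional convolution."""
    n = len(c)
    g = r[:0:-1] + c                  # [r[n-1] ... r[1], c[0] ... c[n-1]]
    # entry i of T v is sum_j g[(i - j) + (n - 1)] * v[j]
    return [sum(g[i - j + n - 1] * v[j] for j in range(n)) for i in range(n)]

# Check against the explicit dense product
c, r, v = [1.0, 2.0, 3.0], [1.0, 4.0, 5.0], [1.0, -1.0, 2.0]
dense = [[c[i - j] if i >= j else r[j - i] for j in range(3)] for i in range(3)]
explicit = [sum(dense[i][j] * v[j] for j in range(3)) for i in range(3)]
fast = toeplitz_matvec(c, r, v)
```

    In production one would evaluate the convolution with an FFT for O(n log n) cost; the point here is only that n² matrix entries never need to be stored.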

  11. Stochastic process approximation for recursive estimation with guaranteed bound on the error covariance

    NASA Technical Reports Server (NTRS)

    Menga, G.

    1975-01-01

    An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.

  12. MODFLOW-style parameters in underdetermined parameter estimation

    USGS Publications Warehouse

    D'Oria, Marco D.; Fienen, Michael N.

    2012-01-01

    In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.

  15. Reionization history and CMB parameter estimation

    SciTech Connect

    Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y. E-mail: gnedin@fnal.edu

    2013-05-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.

  16. GEODYN- ORBITAL AND GEODETIC PARAMETER ESTIMATION

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation program, GEODYN, possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fits the entire set of estimation parameters. The estimation problem can be divided into two parts: the orbit prediction problem, and the parameter estimation problem. GEODYN solves these two problems by employing Cowell's method for integrating the orbit and a Bayesian least squares statistical estimation procedure for parameter estimation. GEODYN has found a wide range of applications including determination of definitive orbits, tracking instrumentation calibration, satellite operational predictions, and geodetic parameter estimation, such as the estimations for global networks of tracking stations. The orbit prediction problem may be briefly described as calculating for some later epoch the new conditions of state for the satellite, given a set of initial conditions of state for some epoch, and the disturbing forces affecting the motion of the satellite. The user is required to supply only the initial conditions of state and GEODYN will provide the forcing function and integrate the equations of motion of the satellite. Additionally, GEODYN performs time and coordinate transformations to insure the continuity of operations. Cowell's method of numerical integration is used to solve the satellite equations of motion and the variational partials for force model parameters which are to be adjusted. This method uses predictor-corrector formulas for the equations of motion and corrector formulas only for the variational partials. The parameter estimation problem is divided into three separate parts: 1) instrument measurement modeling and partial derivative computation, 2) data error correction, and 3) statistical estimation of the parameters. 
Since all of the measurements modeled by

  17. Estimating nuisance parameters in inverse problems

    NASA Astrophysics Data System (ADS)

    Aravkin, Aleksandr Y.; van Leeuwen, Tristan

    2012-11-01

    Many inverse problems include nuisance parameters which, while not of direct interest, are required to recover primary parameters. The structure of these problems allows efficient optimization strategies—a well-known example is variable projection, where nonlinear least-squares problems which are linear in some parameters can be very efficiently optimized. In this paper, we extend the idea of projecting out a subset of the variables to a broad class of maximum likelihood and maximum a posteriori likelihood problems with nuisance parameters, such as variance or degrees of freedom (d.o.f.). As a result, we are able to incorporate nuisance parameter estimation into large-scale constrained and unconstrained inverse problem formulations. We apply the approach to a variety of problems, including estimation of unknown variance parameters in the Gaussian model, d.o.f. parameter estimation in the context of robust inverse problems, and automatic calibration. Using numerical examples, we demonstrate improvement in recovery of primary parameters for several large-scale inverse problems. The proposed approach is compatible with a wide variety of algorithms and formulations, and its implementation requires only minor modifications to existing algorithms.
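    Variable projection, the paper's motivating example, can be sketched for a toy separable model y ≈ a·exp(-b t): the linear amplitude a is eliminated in closed form, so only the nonlinear decay rate b is searched (a grid search here for simplicity; all values are assumptions):

```python
import math

def varpro_exp_fit(t, y, b_grid):
    """Variable projection for y ≈ a * exp(-b t): for each candidate
    decay rate b, the optimal linear amplitude a has a closed form,
    so only the single nonlinear parameter b needs to be searched."""
    def projected_sse(b):
        basis = [math.exp(-b * ti) for ti in t]
        a = sum(g * yi for g, yi in zip(basis, y)) / sum(g * g for g in basis)
        return sum((a * g - yi) ** 2 for g, yi in zip(basis, y)), a
    best_b = min(b_grid, key=lambda b: projected_sse(b)[0])
    return best_b, projected_sse(best_b)[1]

t = [float(i) for i in range(10)]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]       # noiseless synthetic data
b_hat, a_hat = varpro_exp_fit(t, y, [i / 1000.0 for i in range(1, 1001)])
```

    The paper generalizes exactly this eliminate-then-optimize structure from linear amplitudes to statistical nuisance parameters such as variances.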

  18. Estimating Respiratory Mechanical Parameters during Mechanical Ventilation

    PubMed Central

    Barbini, Paolo

    1982-01-01

    We propose an algorithm for the estimation of the parameters of the mechanical respiratory system. The algorithm is based on nonlinear regression analysis with a two-compartment respiratory system model. The model used allows us to take account of the nonhomogeneous properties of the lungs, which may cause uneven distribution of ventilation and thus affect the gas exchange in the lungs. The estimation of the parameters of such a model permits the optimization of the type of ventilation to be used in patients undergoing respiratory treatment. This can be done bearing in mind the effects of the mechanical ventilation on venous return as well as the quality of gas exchange. We have evaluated the performance of the proposed estimation algorithm on the basis of the agreement between the data and the model response, the stability of the parameter estimates, and the standard deviations of the parameters. The parameter estimation algorithm described does not have recourse to the examination of impedance spectra and is completely independent of the type of ventilator employed.

  19. Frequency tracking and parameter estimation for robust quantum state estimation

    SciTech Connect

    Ralph, Jason F.; Jacobs, Kurt; Hill, Charles D.

    2011-11-15

    In this paper we consider the problem of tracking the state of a quantum system via a continuous weak measurement. If the system Hamiltonian is known precisely, this merely requires integrating the appropriate stochastic master equation. However, even a small error in the assumed Hamiltonian can render this approach useless. The natural answer to this problem is to include the parameters of the Hamiltonian as part of the estimation problem, and the full Bayesian solution to this task provides a state estimate that is robust against uncertainties. However, this approach requires considerable computational overhead. Here we consider a single qubit in which the Hamiltonian contains a single unknown parameter. We show that classical frequency estimation techniques greatly reduce the computational overhead associated with Bayesian estimation and provide accurate estimates for the qubit frequency.
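    A classical frequency-estimation step of the kind the authors invoke can be sketched as a DFT peak pick (a generic illustration; the signal and sizes are assumptions, not the qubit measurement record):

```python
import math

def dft_peak_frequency(signal):
    """Classical frequency estimation: locate the peak magnitude of
    the discrete Fourier transform over the positive-frequency bins."""
    n = len(signal)
    def mag2(k):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        return re * re + im * im
    k_peak = max(range(1, n // 2), key=mag2)
    return k_peak / n                     # cycles per sample

# A qubit-like oscillation at 10 cycles per 128 samples
sig = [math.sin(2 * math.pi * 10 * i / 128) for i in range(128)]
f_hat = dft_peak_frequency(sig)
```

    Feeding such a cheap spectral estimate of the unknown Hamiltonian parameter into the stochastic master equation is the kind of shortcut the paper contrasts with full Bayesian estimation.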

  20. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and of the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to an approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product exceeds 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates that of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto that function.
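
    Under the Poisson assumption the exceedance probability has a simple closed form: the probability of at least one event with magnitude >= m within a horizon T is 1 - exp(-lambda * T * (1 - F(m))). A minimal sketch of propagating only the activity-rate uncertainty (via its asymptotic normality) into that hazard function, with invented numbers:

```python
import math

def exceedance_interval(n_events, t_obs, p_exceed_mag, t_horizon, z=1.96):
    """Interval estimate of the probability of at least one earthquake with
    magnitude >= m within t_horizon, propagating only the uncertainty of the
    mean activity rate (Poisson counts, asymptotic normality).
    p_exceed_mag = 1 - F(m) comes from the magnitude distribution model."""
    lam_hat = n_events / t_obs              # estimated mean activity rate
    se = math.sqrt(lam_hat / t_obs)         # std. error of a Poisson rate
    lam_lo = max(lam_hat - z * se, 0.0)
    lam_hi = lam_hat + z * se
    hazard = lambda lam: 1.0 - math.exp(-lam * p_exceed_mag * t_horizon)
    return hazard(lam_lo), hazard(lam_hat), hazard(lam_hi)

# Invented catalog: 120 events in 30 years, P(M >= m) = 0.05, 10-year horizon.
lo, mid, hi = exceedance_interval(n_events=120, t_obs=30.0,
                                  p_exceed_mag=0.05, t_horizon=10.0)
```

    Consistent with the abstract's observation, as the product lambda * (1 - F(m)) * t_horizon grows well beyond 5, the three values pinch together because the exponential saturates near 1.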

  1. LISA Parameter Estimation using Numerical Merger Waveforms

    NASA Technical Reports Server (NTRS)

    Thorpe, J. I.; McWilliams, S.; Baker, J.

    2008-01-01

    Coalescing supermassive black holes are expected to provide the strongest sources of gravitational radiation detected by LISA. Recent advances in numerical relativity provide a detailed description of the waveforms of such signals. We present a preliminary study of LISA's sensitivity to waveform parameters using a hybrid numerical/analytic waveform describing the coalescence of two equal-mass, nonspinning black holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the waveform parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with a total redshifted mass of 10^6 solar masses at a redshift of z ≈ 1 were found to decrease by a factor of slightly more than two when the merger was included.
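
    The Fisher information matrix method mentioned above follows a standard recipe: build the Jacobian of the signal model with respect to the parameters, form F = JᵀJ / σ² for white Gaussian noise, and read error bounds off the square roots of the diagonal of F⁻¹. A toy sketch with an invented sinusoidal "waveform" standing in for the LISA waveform model:

```python
import numpy as np

def fisher_errors(model, theta, t, sigma, eps=1e-6):
    """Cramer-Rao error bounds from the Fisher information matrix for a
    deterministic signal model in white Gaussian noise of std. dev. sigma.
    Parameter derivatives are taken numerically (central differences)."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        grads.append((model(theta + step, t) - model(theta - step, t)) / (2 * eps))
    J = np.stack(grads, axis=1)          # n_samples x n_params Jacobian
    F = J.T @ J / sigma**2               # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy "waveform": amplitude and frequency of a sinusoid (illustration only).
model = lambda th, t: th[0] * np.sin(2 * np.pi * th[1] * t)
t = np.linspace(0.0, 1.0, 1000)
errs = fisher_errors(model, [1.0, 3.0], t, sigma=0.1)
```

    Adding a louder, better-modeled portion of the signal (as the merger does for LISA) increases every entry of F, which is why the error bounds shrink when the merger is included.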

  2. Comparison of Dam Breach Parameter Estimators

    DTIC Science & Technology

    2008-01-01

    Comparison of Dam Breach Parameter Estimators. D. Michael Gee, Senior Hydraulic Engineer, Corps of Engineers Hydrologic Engineering Center, 609 2nd St., Davis, CA 95616; email: michael.gee@usace.army.mil. ABSTRACT: Analytical techniques for the estimation of dam breach ... from a large storm in 1975 (CEATI). The dam was constructed of a clay core containing shale; the upstream and downstream fill was homogeneous earth ...

  3. Precision Parameter Estimation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution, or χ²-surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICO; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  4. Attitude Estimation Using Modified Rodrigues Parameters

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Markley, F. Landis

    1996-01-01

    In this paper, a Kalman filter formulation for attitude estimation is derived using the Modified Rodrigues Parameters. The extended Kalman filter uses a gyro-based model for attitude propagation. Two solutions are developed for the sensitivity matrix in the Kalman filter: one based upon an additive error approach, and the other based upon a multiplicative error approach. It is shown that the two solutions are in fact equivalent. The Kalman filter is then used to estimate the attitude of a simulated spacecraft. Results indicate that the new algorithm produces accurate attitude estimates by determining actual gyro biases.

  5. ZASPE: Zonal Atmospheric Stellar Parameters Estimator

    NASA Astrophysics Data System (ADS)

    Brahm, Rafael; Jordan, Andres; Hartman, Joel; Bakos, Gaspar

    2016-07-01

    ZASPE (Zonal Atmospheric Stellar Parameters Estimator) computes the atmospheric stellar parameters (Teff, log(g), [Fe/H], and vsin(i)) from echelle spectra via least-squares minimization against a pre-computed library of synthetic spectra. The minimization is performed only in the spectral zones most sensitive to changes in the atmospheric parameters. The uncertainties and covariances computed by ZASPE assume that the principal source of error is the systematic mismatch between the observed spectrum and the synthetic one that produces the best fit. ZASPE requires a grid of synthetic spectra and can use any pre-computed library with minor modifications.
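
    A minimal sketch of the core idea — least-squares selection from a pre-computed grid, restricted to parameter-sensitive zones — with an invented toy "library" standing in for real synthetic spectra (this is not ZASPE's actual code or grid):

```python
import numpy as np

def best_grid_match(observed, library_spectra, library_params, sensitive_mask):
    """Pick the library spectrum minimizing chi^2 against the observation,
    evaluated only over the spectral zones flagged as parameter-sensitive."""
    obs = observed[sensitive_mask]
    chi2 = [np.sum((obs - synth[sensitive_mask])**2) for synth in library_spectra]
    best = int(np.argmin(chi2))
    return library_params[best], chi2[best]

# Hypothetical 3-spectrum library over a single parameter (placeholder Teff values).
wave = np.linspace(0.0, 1.0, 50)
library = [np.sin(wave * k) for k in (1.0, 2.0, 3.0)]
params = [5000, 5500, 6000]
mask = wave > 0.5                                # pretend only this zone is sensitive
obs = np.sin(wave * 2.0) + 0.01 * np.cos(wave)   # closest to the middle spectrum
teff, chi2 = best_grid_match(obs, library, params, mask)
```

    In the real code the "grid" spans several parameters jointly and the sensitive zones are chosen per parameter, but the selection step is this same masked chi-square comparison.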

  6. Effects of model deficiencies on parameter estimation

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.

    1988-01-01

    Reliable structural dynamic models will be required as a basis for deriving the reduced-order plant models used in control systems for large space structures. Ground vibration testing and model verification will play an important role in the development of these models; however, fundamental differences between the space environment and earth environment, as well as variations in structural properties due to as-built conditions, will make on-orbit identification essential. The efficiency, and perhaps even the success, of on-orbit identification will depend on having a valid model of the structure. It is envisioned that the identification process will primarily involve parametric methods. Given a correct model, a variety of estimation algorithms may be used to estimate parameter values. This paper explores the effects of modeling errors and model deficiencies on parameter estimation by reviewing previous case histories. The effects depend at least to some extent on the estimation algorithm being used. Bayesian estimation was used in the case histories presented here. It is therefore conceivable that the behavior of an estimation algorithm might be useful in detecting and possibly even diagnosing deficiencies. In practice, the task is complicated by the presence of systematic errors in experimental procedures and data processing and in the use of the estimation procedures themselves.

  7. New approaches to estimation of magnetotelluric parameters

    SciTech Connect

    Egbert, G.D.

    1991-01-01

    Fully efficient robust data processing procedures were developed and tested for single-station and remote-reference magnetotelluric (MT) data. Substantial progress was made on the development, testing, and comparison of optimal procedures for single-station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combines the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1-0.0 Hz). The methods, and the results of comparisons of various single-station estimators, are described in detail. Progress was also made on developing methods for estimating static distortion parameters and for testing hypotheses about the underlying dimensionality of the geological section.

  8. Reliability of parameter estimation in respirometric models.

    PubMed

    Checchi, Nicola; Marsili-Libelli, Stefano

    2005-09-01

    When modelling a biochemical system, the fact that model parameters cannot be estimated exactly motivates the definition of tests for detecting unreliable estimates and for designing better experiments. The method applied in this paper is a further development of Marsili-Libelli et al. [2003. Confidence regions of estimated parameters for ecological systems. Ecol. Model. 165, 127-146] and is based on confidence regions computed with the Fisher or the Hessian matrix. It detects the influence of the curvature, which represents the distortion of the model response due to its nonlinear structure. If the test is passed, the estimation can be considered reliable, in the sense that the optimisation search has reached a point on the error surface where the effect of nonlinearities is negligible. The test is used here for an assessment of respirometric model calibration, i.e. checking the experimental design and estimation reliability, with an application to real-life data in the ASM context. Only dissolved oxygen measurements have been considered, because this is a very popular experimental set-up in wastewater modelling. The estimation of a two-step nitrification model using batch respirometric data is considered, showing that the initial amount of ammonium-N and the number of data points play a crucial role in obtaining reliable estimates. From this basic application other results are derived, such as the estimation of the combined yield factor and of the second-step parameters, based on a modified kinetics and a specific nitrite experiment. Finally, guidelines for designing reliable experiments are provided.

  9. Estimating physiological skin parameters from hyperspectral signatures.

    PubMed

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.

  10. Estimating physiological skin parameters from hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.

  11. Aquifer parameter estimation from surface resistivity data.

    PubMed

    Niwas, Sri; de Lima, Olivar A L

    2003-01-01

    This paper is devoted to the use of surface geoelectrical sounding data, beyond ground water exploration, for aquifer hydraulic parameter estimation. In a mesoscopic framework, approximate analytical equations are developed separately for saline and for fresh water saturations. A few existing useful aquifer models, both for clean and shaley sandstones, are discussed in terms of their electrical and hydraulic effects, along with the linkage between the two. These equations are derived for insight and physical understanding of the phenomenon. At a macroscopic scale, a general aquifer model is proposed and analytical relations are derived for meaningful estimation, with a higher level of confidence, of hydraulic parameters from electrical parameters. The physical reasons for the two different equations at the macroscopic level are explicitly explained to avoid confusion. Numerical examples from the existing literature are reproduced to buttress our viewpoint.

  12. Discriminative parameter estimation for random walks segmentation.

    PubMed

    Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan

    2013-01-01

    The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.

  13. Adaptive approximation method for joint parameter estimation and identical synchronization of chaotic systems.

    PubMed

    Mariño, Inés P; Míguez, Joaquín

    2005-11-01

    We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods.
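
    The paper's scheme minimizes a cost built from a scalar observation; a closely related, textbook-style sketch is adaptive synchronization with a Lyapunov-based update law, shown here for a Lorenz system with a single unknown parameter rho. Unlike the paper, it assumes the full primary state is observed; the gain and initial values are invented:

```python
# Primary Lorenz system; sigma and beta are known, rho is to be estimated.
sigma, rho_true, beta = 10.0, 28.0, 8.0 / 3.0
dt, n_steps = 0.001, 100_000
gamma = 1.0                       # adaptation gain (tuning assumption)

x, y, z = 1.0, 1.0, 1.0           # primary state (fully observed here)
y_hat, rho_hat = 0.0, 20.0        # secondary state and parameter estimate

for _ in range(n_steps):
    # Primary dynamics (Euler step).
    dx = sigma * (y - x)
    dy = x * (rho_true - z) - y
    dz = x * y - beta * z
    # Secondary copy of the y-equation, driven by the observed x and z.
    e = y - y_hat                 # synchronization error
    dy_hat = x * (rho_hat - z) - y_hat
    drho = gamma * x * e          # Lyapunov-based adaptation law
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    y_hat += dt * dy_hat
    rho_hat += dt * drho
```

    The update rho_hat' = gamma·x·(y - y_hat) makes V = e²/2 + (rho_true - rho_hat)²/(2·gamma) nonincreasing, and the chaotic drive supplies the persistent excitation needed for rho_hat to converge to rho_true.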

  14. Computational approaches for RNA energy parameter estimation

    PubMed Central

    Andronescu, Mirela; Condon, Anne; Hoos, Holger H.; Mathews, David H.; Murphy, Kevin P.

    2010-01-01

    Methods for efficient and accurate prediction of RNA structure are increasingly valuable, given the current rapid advances in understanding the diverse functions of RNA molecules in the cell. To enhance the accuracy of secondary structure predictions, we developed and refined optimization techniques for the estimation of energy parameters. We build on two previous approaches to RNA free-energy parameter estimation: (1) the Constraint Generation (CG) method, which iteratively generates constraints that enforce known structures to have energies lower than other structures for the same molecule; and (2) the Boltzmann Likelihood (BL) method, which infers a set of RNA free-energy parameters that maximize the conditional likelihood of a set of reference RNA structures. Here, we extend these approaches in two main ways: We propose (1) a max-margin extension of CG, and (2) a novel linear Gaussian Bayesian network that models feature relationships, which effectively makes use of sparse data by sharing statistical strength between parameters. We obtain significant improvements in the accuracy of RNA minimum free-energy pseudoknot-free secondary structure prediction when measured on a comprehensive set of 2518 RNA molecules with reference structures. Our parameters can be used in conjunction with software that predicts RNA secondary structures, RNA hybridization, or ensembles of structures. Our data, software, results, and parameter sets in various formats are freely available at http://www.cs.ubc.ca/labs/beta/Projects/RNA-Params. PMID:20940338

  15. Online in-situ estimation of network parameters under intermittent excitation conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Jason Ashley

    2008-10-01

    Online in-situ estimation of network parameters is a potential tool for evaluating electrical network and conductor health. The integration of physics-based models with stochastic models can provide important diagnostic and prognostic information. Correct diagnoses and prognoses using model-based techniques therefore depend on accurate estimates of the physical parameters. As artificial excitation of the modeled dynamics is not always possible for in-situ applications, the information necessary to make accurate estimates can be intermittent over time. Continuous online estimation and tracking of physics-based parameters using recursive least squares with directional forgetting is proposed to account for this intermittency in the excitation. The method makes optimal use of the available information while still allowing the solution to follow time-varying parameter changes. Computationally efficient statistical inference measures are also provided to gauge the confidence of each parameter estimate. Additionally, identification requirements of the methods and multiple network and conductor models are determined. Finally, the method is shown to be effective in estimating and tracking parameter changes in both DC and AC networks, using both time- and frequency-domain models.
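
    As a baseline for the approach described above, here is a standard recursive least-squares update with a uniform exponential forgetting factor; directional forgetting, the refinement the record proposes, would instead discount past information only along currently excited directions. The two-parameter model and noise level are invented:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step with exponential forgetting factor lam.
    (Directional forgetting would apply lam only in the subspace excited by
    phi; uniform forgetting is shown here as the simpler baseline.)"""
    err = y - phi @ theta                 # innovation
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)         # gain vector
    theta = theta + k * err
    P = (P - np.outer(k, Pphi)) / lam     # covariance update
    return theta, P

# Track a hypothetical two-parameter model y = a*u + b from noisy samples.
rng = np.random.default_rng(1)
theta = np.zeros(2)
P = np.eye(2) * 1000.0                    # large initial uncertainty
for _ in range(500):
    u = rng.uniform(-1.0, 1.0)
    phi = np.array([u, 1.0])
    y = 2.0 * u - 1.0 + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
```

    With uniform forgetting, P grows without bound along unexcited directions during quiet periods ("covariance wind-up"), which is precisely the failure mode under intermittent excitation that directional forgetting is designed to avoid.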

  16. Target parameter and error estimation using magnetometry

    NASA Astrophysics Data System (ADS)

    Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.

    The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramer-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramer-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramer-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters: the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.

  17. Cosmological parameter estimation: impact of CMB aberration

    SciTech Connect

    Catena, Riccardo; Notari, Alessio E-mail: notari@ffn.ub.es

    2013-04-01

    The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm via a mixing matrix at any l. The common lore when performing a CMB-based cosmological parameter estimation is to consider that Doppler affects only the l = 1 multipole, and to neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contamination. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed the maps with and without aberration and Doppler effects with an MCMC in a Bayesian framework, in order to assess the ability to reconstruct the parameters of the fiducial model. We find that, depending on the specific realization of the simulated data, the parameters can be biased up to one standard deviation for WMAP and almost two standard deviations for Planck. Therefore we conclude that in general it is not a solid assumption to neglect aberration in a CMB-based cosmological parameter estimation.

  18. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach, and the ensemble of results defining the model fit to the data, the reproduction of the variogram model, and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and that the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests

  19. Renal parameter estimates in unrestrained dogs

    NASA Technical Reports Server (NTRS)

    Rader, R. D.; Stevens, C. M.

    1974-01-01

    A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and by the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied on unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, to avoidance reaction, and to unrestrained caging.

  20. CosmoSIS: Modular cosmological parameter estimation

    SciTech Connect

    Zuntz, J.; Paterno, M.; Jennings, E.; Rudd, D.; Manzotti, A.; Dodelson, S.; Bridle, S.; Sehrish, S.; Kowalkowski, J.

    2015-06-09

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis

  1. Bayesian parameter estimation for effective field theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.

    2016-07-01

    We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  2. CosmoSIS: Modular cosmological parameter estimation

    DOE PAGES

    Zuntz, J.; Paterno, M.; Jennings, E.; ...

    2015-06-09

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis

  3. Rapid Compact Binary Coalescence Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Pankow, Chris; Brady, Patrick; O'Shaughnessy, Richard; Ochsner, Evan; Qi, Hong

    2016-03-01

    The first observation run with second generation gravitational-wave observatories will conclude at the beginning of 2016. Given their unprecedented and growing sensitivity, the benefit of prompt and accurate estimation of the orientation and physical parameters of binary coalescences is obvious in its coupling to electromagnetic astrophysics and observations. Popular Bayesian schemes to measure properties of compact object binaries use Markovian sampling to compute the posterior. While very successful, in some cases, convergence is delayed until well after the electromagnetic fluence has subsided thus diminishing the potential science return. With this in mind, we have developed a scheme which is also Bayesian and simply parallelizable across all available computing resources, drastically decreasing convergence time to a few tens of minutes. In this talk, I will emphasize the complementary use of results from low latency gravitational-wave searches to improve computational efficiency and demonstrate the capabilities of our parameter estimation framework with a simulated set of binary compact object coalescences.

  4. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  5. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. Computing the kriging variance, and even more so the empirical kriging variance, is computationally very costly, however, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that, in practice, the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for it in the whole design region, and the maximum of the empirical kriging variance then has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
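
    As a minimal illustration of the D-criterion mentioned above, the sketch below exhaustively searches a small candidate set for the 4-point design maximizing det(XᵀX) under a simple linear trend model. The candidate sites and model are invented for illustration; exhaustive search is feasible only because the candidate set is tiny.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(30, 2))   # candidate sites in a unit square

def information_det(points):
    # information matrix X^T X for a linear trend model f(x) = b0 + b1*x1 + b2*x2
    X = np.column_stack([np.ones(len(points)), points])
    return np.linalg.det(X.T @ X)

# exhaustive search: the 4-point subset with maximal determinant is D-optimal
best = max(combinations(range(len(candidates)), 4),
           key=lambda idx: information_det(candidates[list(idx)]))
print(best, information_det(candidates[list(best)]))
```

    For realistic candidate sets, exchange-type algorithms replace the exhaustive search, but the objective, the determinant of the information matrix, is the same.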

  6. Estimated hydrogeological parameters by artificial neurons network

    NASA Astrophysics Data System (ADS)

    Lin, H.; Chen, C.; Tan, Y.; Ke, K.

    2009-12-01

    In recent years, many approaches have been developed that use an artificial neural network (ANN) model cooperating with the Theis analytical solution to estimate the effective hydrological parameters of homogeneous and isotropic porous media, such as the Lin and Chen approach [Lin and Chen, 2006] (called the ANN approach hereafter) and the PC-ANN approach [Samani et al., 2008]. The above methods assumed a full superimposition of the type curve and the observed drawdown, and tried to use the first time-drawdown data point as a match point to make a fine approximation of the effective parameters. However, using the first time-drawdown data or the early time-drawdown data is not always correct for the estimation of the hydrological parameters, especially for heterogeneous and anisotropic aquifers. Therefore, this paper corrects the concept of the superimposed plot by modifying the ANN approach and the PC-ANN approach, cooperating with the Papadopoulos analytical solution, to estimate the transmissivities and storage coefficient of anisotropic, heterogeneous aquifers. The ANN model is trained with 4000 training sets of the well function, and tested with 1000 sets and 300 sets of synthetic time-drawdown data generated from homogeneous and heterogeneous parameters, respectively. In-situ observation data, the time-drawdown at station Shi-Chou of the Chihuahua River alluvial fan, Taiwan, are further adopted to test the applicability and reliability of the proposed methods, as well as to compare them with the straight-line method and the type-curve method. Results suggested that both of the modified methods had better performance than the original ones. Using late-time drawdown to optimize the effective parameters is shown to be better than using early-time drawdown. Additionally, results indicated that the modified ANN approach is better than the modified PC-ANN approach in terms of precision, while the modified PC-ANN approach is approximately three times more efficient than the modified ANN approach.
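
    The synthetic training sets mentioned above can be illustrated with the classic Theis solution (the homogeneous, isotropic case; the Papadopoulos solution used for the anisotropic case is not reproduced here). The pumping rate, radius, and parameter ranges below are assumed values, not those of the paper.

```python
import numpy as np
from scipy.special import exp1   # exponential integral E1(u) = Theis well function W(u)

def theis_drawdown(t, T, S, Q=0.01, r=10.0):
    """Drawdown (m) at radius r (m) and time t (s) for pumping rate Q (m^3/s),
    transmissivity T (m^2/s) and storage coefficient S (homogeneous, isotropic)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# synthetic training data: random (T, S) pairs -> time-drawdown curves for the ANN
rng = np.random.default_rng(1)
times = np.logspace(1, 5, 40)                       # observation times (s)
T_samples = 10.0 ** rng.uniform(-4, -2, size=4000)  # log-uniform transmissivities
S_samples = 10.0 ** rng.uniform(-5, -3, size=4000)  # log-uniform storage coefficients
curves = np.array([theis_drawdown(times, T, S) for T, S in zip(T_samples, S_samples)])
print(curves.shape)   # (4000, 40): one synthetic curve per parameter pair
```

    An ANN trained to map such curves back to (T, S) inverts the well function without manual curve matching, which is the core of the approaches discussed.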

  7. Parameter Estimation of a Spiking Silicon Neuron

    PubMed Central

    Russell, Alexander; Mazurek, Kevin; Mihalaş, Stefan; Niebur, Ernst; Etienne-Cummings, Ralph

    2012-01-01

    Spiking neuron models are used in a multitude of tasks ranging from understanding neural behavior at its most basic level to neuroprosthetics. Parameter estimation of a single neuron model, such that the model’s output matches that of a biological neuron, is an extremely important task. Hand tuning of parameters to obtain such behaviors is a difficult and time-consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models) as fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron’s parameters. First, we show how a Maximum Likelihood method can be applied to a leaky integrate-and-fire silicon neuron with spike-induced currents to fit the neuron’s output to desired spike times. We then show how a distance-based method which approximates the negative log likelihood of the lognormal distribution can also be used to tune the neuron’s parameters. We conclude that the distance-based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed. PMID:23852978

  8. Online Dynamic Parameter Estimation of Synchronous Machines

    NASA Astrophysics Data System (ADS)

    West, Michael R.

    Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine must be removed from service. Characterizing a machine offline can result in economic losses due to down time, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and compares the IEEE 115 standard for offline characterization with the least squares approximation iterative approach implemented on a 20 h.p. synchronous machine. This least squares estimation method of online parameter estimation shows encouraging results for steady-state parameters, in comparison with steady-state parameters obtained through the IEEE 115 standard.
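
    A textbook recursive least-squares update of the kind alluded to above can be sketched as follows. The regressor data and parameter values are invented for illustration; this is not the thesis's exact formulation.

```python
import numpy as np

def recursive_least_squares(Phi, y, lam=1.0):
    """Online estimate of theta in y[k] ≈ Phi[k] @ theta, one sample at a time
    (lam < 1 would add exponential forgetting for slowly varying parameters)."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = 1e6 * np.eye(n)                        # large initial covariance: vague prior
    for phi, yk in zip(Phi, y):
        K = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + K * (yk - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# example: recover two "machine" parameters from noisy input/response data
rng = np.random.default_rng(2)
Phi = rng.normal(size=(500, 2))                # measured regressors
true_theta = np.array([1.5, -0.7])
y = Phi @ true_theta + 0.01 * rng.normal(size=500)
print(recursive_least_squares(Phi, y))         # ≈ [1.5, -0.7]
```

    Because the estimate is refreshed one sample at a time, the machine stays in service while its parameters are tracked, which is the motivation for online characterization.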

  9. Parameter estimate of signal transduction pathways

    PubMed Central

    Arisi, Ivan; Cattaneo, Antonino; Rosato, Vittorio

    2006-01-01

    Background The "inverse" problem is related to the determination of unknown causes on the basis of the observation of their effects. This is the opposite of the corresponding "direct" problem, which relates to the prediction of the effects generated by a complete description of some agencies. The solution of an inverse problem entails the construction of a mathematical model and starts from a number of experimental data. In this respect, inverse problems are often ill-conditioned, as the amount of experimental data available is often insufficient to unambiguously solve the mathematical model. Several approaches to solving inverse problems are possible, both computational and experimental, some of which are mentioned in this article. In this work, we describe in detail an attempt to solve an inverse problem which arose in the study of an intracellular signaling pathway. Results Using a genetic algorithm to find a sub-optimal solution to the optimization problem, we have estimated a set of unknown parameters describing a kinetic model of a signaling pathway in the neuronal cell. The model is composed of mass-action ordinary differential equations, where the kinetic parameters describe protein-protein interactions, protein synthesis and degradation. The algorithm has been implemented on a parallel platform. Several potential solutions of the problem have been computed, each solution being a set of model parameters. A subset of parameters has been selected on the basis of their small coefficient of variation across the ensemble of solutions. Conclusion Despite the lack of sufficiently reliable and homogeneous experimental data, the genetic algorithm approach has allowed us to estimate the approximate values of a number of model parameters in a kinetic model of a signaling pathway; these parameters have been assessed to be relevant for the reproduction of the available experimental data. PMID:17118160
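
    A minimal sketch of the genetic-algorithm idea, fitting two mass-action rate constants to noisy "data", can look like the following. The two-step pathway, the tournament-selection GA, and all numbers are illustrative stand-ins, far simpler than the paper's model and parallel implementation.

```python
import numpy as np

# Toy mass-action pathway A --k1--> B --k2--> (degraded); the analytic solution
# stands in for an ODE solver to keep the sketch short.
def simulate(k, t, A0=1.0):
    k1, k2 = k
    A = A0 * np.exp(-k1 * t)
    B = A0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return np.column_stack([A, B])

t = np.linspace(0.1, 10.0, 25)
rng = np.random.default_rng(3)
data = simulate((0.8, 0.3), t) + 0.01 * rng.normal(size=(25, 2))  # "experimental" data

def fitness(k):
    return -np.sum((simulate(k, t) - data) ** 2)      # higher is better

# minimal GA: binary-tournament selection plus Gaussian mutation, with elitism
pop = rng.uniform(0.05, 2.0, size=(60, 2))
best_k, best_f = pop[0], -np.inf
for _ in range(80):
    scores = np.array([fitness(k) for k in pop])
    i = int(np.argmax(scores))
    if scores[i] > best_f:                            # remember the best-ever individual
        best_f, best_k = scores[i], pop[i].copy()
    a, b = rng.integers(0, 60, size=(2, 60))          # pairs of random parents
    winners = np.where((scores[a] > scores[b])[:, None], pop[a], pop[b])
    pop = np.clip(winners + rng.normal(scale=0.05, size=(60, 2)), 1e-3, None)
print(best_k)  # ≈ (0.8, 0.3)
```

    Repeating such runs and keeping only parameters with a small coefficient of variation across the ensemble is the selection step the abstract describes.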

  10. A parameter estimation algorithm for spatial sine testing - Theory and evaluation

    NASA Technical Reports Server (NTRS)

    Rost, R. W.; Deblauwe, F.

    1992-01-01

    This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.

  11. Parameter estimation in tree graph metabolic networks

    PubMed Central

    Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D.; Groenenboom, Marian; Molenaar, Jaap J.

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected, among other factors, by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time-consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings. PMID:27688960

  12. Uncertainty relation based on unbiased parameter estimations

    NASA Astrophysics Data System (ADS)

    Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing

    2017-02-01

    Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where the measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information of the second measurement of an incompatible observable. In terms of Fisher information, we propose a kind of information comparison measure and explore various types of trade-offs between the information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements instead of a constraint on extracting complete information of incompatible observables.

  13. Parameter estimation for lithium ion batteries

    NASA Astrophysics Data System (ADS)

    Santhanagopalan, Shriram

    With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of

  14. Compressing measurements in quantum dynamic parameter estimation

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Cooper, Alexandre; Cappellaro, Paola

    2013-12-01

    We present methods that can provide an exponential savings in the resources required to perform dynamic parameter estimation using quantum systems. The key idea is to merge classical compressive sensing techniques with quantum control methods to significantly reduce the number of signal coefficients that are required for reconstruction of time-varying parameters with high fidelity. We show that incoherent measurement bases and, more generally, suitable random measurement matrices can be created by performing simple control sequences on the quantum system. Random measurement matrices satisfying the restricted isometry property can be used efficiently to reconstruct signals that are sparse in any basis. Because many physical processes are approximately sparse in some basis, these methods can benefit a variety of applications such as quantum sensing and magnetometry with nitrogen-vacancy centers.
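
    One standard recovery routine for the sparse-reconstruction step described above is orthogonal matching pursuit (shown here as an illustrative choice, not necessarily the authors' reconstruction method). The sketch recovers a 3-sparse coefficient vector from far fewer random Gaussian measurements than signal dimensions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on the support
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# a parameter signal that is 3-sparse in its basis, measured with a random Gaussian
# matrix (such matrices satisfy the restricted isometry property with high probability)
rng = np.random.default_rng(4)
n, m, k = 128, 40, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 5.0 * rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
print(np.max(np.abs(x_hat - x_true)))  # ≈ 0: exact recovery in the noiseless case
```

    In the quantum setting the rows of the measurement matrix are realized by control sequences rather than sampled numerically, but the reconstruction step is of this form.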

  15. Estimation of Model Parameters for Steerable Needles

    PubMed Central

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion, and reduces the covariance error from 26.1% to 6.55%. PMID:21643451

  16. Parameter estimation techniques for LTP system identification

    NASA Astrophysics Data System (ADS)

    Nofrarias Serra, Miquel

    LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational waves detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package) whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10⁻¹⁴ m s⁻²/√Hz in the milliHertz band. The mission is not only challenging in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, unlike on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object oriented Matlab Toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission are the parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Monte Carlo Markov Chain methods have been implemented as LTPDA methods. The data analysis team has been testing those methods with a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model and we report on the latest results of these mock data exercises.
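
    The third of the estimators listed above can be illustrated with a random-walk Metropolis chain on a deliberately simple one-parameter "plant". The model, noise level, and starting point below are invented and unrelated to LTPDA's actual implementation.

```python
import numpy as np

# mock-data exercise in miniature: recover a single gain g of a linear plant
# y = g * u + noise from simulated telemetry, via Markov Chain Monte Carlo
rng = np.random.default_rng(5)
u = rng.normal(size=200)                     # simulated inputs (telecommanded stimuli)
y = 2.5 * u + 0.1 * rng.normal(size=200)     # simulated telemetry; true gain g = 2.5
sigma = 0.1

def log_likelihood(g):
    return -0.5 * np.sum((y - g * u) ** 2) / sigma**2

g = float(u @ y / (u @ u))                   # cheap least-squares point to start the chain
logp = log_likelihood(g)
samples = []
for _ in range(5000):
    prop = g + 0.01 * rng.normal()           # random-walk proposal
    logp_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
        g, logp = prop, logp_prop
    samples.append(g)
post = np.array(samples[500:])               # discard burn-in
print(post.mean(), post.std())               # posterior mean ≈ 2.5
```

    Unlike the two least-squares estimators, the chain returns a posterior distribution, so the achievable accuracy can be read off directly from the spread of the samples.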

  17. Parameter Estimation of Spacecraft Fuel Slosh Model

    NASA Technical Reports Server (NTRS)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining these parameters and understanding their effects allows for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

  18. Poincaré dodecahedral space parameter estimates

    NASA Astrophysics Data System (ADS)

    Roukema, B. F.; Buliński, Z.; Gaudin, N. E.

    2008-12-01

    Context: Several studies have proposed that the preferred model of the comoving spatial 3-hypersurface of the Universe may be a Poincaré dodecahedral space (PDS) rather than a simply connected, infinite, flat space. Aims: Here, we aim to improve the surface of last scattering (SLS) optimal cross-correlation method and apply this to observational data and simulations. Methods: For a given “generalised” PDS orientation, we analytically derive the formulae required to exclude points on the sky that cannot be members of close SLS-SLS cross-pairs. These enable more efficient pair selection without sacrificing the uniformity of the underlying selection process. For a sufficiently small matched circle size α and a fixed number of randomly placed points selected for a cross-correlation estimate, the calculation time is decreased and the number of pairs per separation bin is increased. Using this faster method, and including the smallest separation bin when testing correlations, (i) we recalculate Monte Carlo Markov Chains (MCMC) on the five-year Wilkinson Microwave Anisotropy Probe (WMAP) data; and (ii) we seek PDS solutions in a small number of Gaussian random fluctuation (GRF) simulations in order to further explore the statistical significance of the PDS hypothesis. Results: For 5° < α < 60°, a calculation speed-up of 3-10 is obtained. (i) The best estimates of the PDS parameters for the five-year WMAP data are similar to those for the three-year data; (ii) comparison of the optimal solutions found by the MCMC chains in the observational map to those found in the simulated maps yields a slightly stronger rejection of the simply connected model using α rather than the twist angle φ. The best estimate of α implies that, given a large-scale auto-correlation as weak as that observed, the PDS-like cross-correlation signal in the WMAP data is expected with a probability of less than about 10%. The expected distribution of φ from the GRF simulations is not

  19. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

    This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and ensuing software manipulations are analyzed, and related numerical results are also presented.

  20. Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters

    DTIC Science & Technology

    1975-12-31

    AD-A021 208. Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters. Ramesh Chandra, Union College. Report AES-7514, 1976.

  1. Maximum Likelihood and Bayesian Parameter Estimation in Item Response Theory.

    ERIC Educational Resources Information Center

    Lord, Frederic M.

    There are currently three main approaches to parameter estimation in item response theory (IRT): (1) joint maximum likelihood, exemplified by LOGIST, yielding maximum likelihood estimates; (2) marginal maximum likelihood, exemplified by BILOG, yielding maximum likelihood estimates of item parameters (ability parameters can be estimated…

  2. System and method for motor parameter estimation

    DOEpatents

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  3. Bayesian approach to decompression sickness model parameter estimation.

    PubMed

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
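
    The contrast drawn here can be made concrete with a toy binomial stand-in (not the paper's decompression-sickness model): the MLE of an event probability is a single number, while a conjugate Bayesian posterior yields a credible interval directly. The counts below are hypothetical.

```python
from scipy import stats

# hypothetical data: 7 DCS cases observed in 100 exposures
n, x = 100, 7
p_mle = x / n                                # maximum likelihood point estimate (one number)
posterior = stats.beta(x + 1, n - x + 1)     # uniform Beta(1,1) prior * binomial likelihood
credible = posterior.ppf([0.025, 0.975])     # 95% credible interval for p
print(p_mle, credible)
```

    The credible interval answers "with what probability does p lie in this range?", which is exactly the kind of statement the abstract argues the Bayesian approach makes naturally and the repeated-trials interpretation of the MLE does not.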

  4. Estimation of high altitude Martian dust parameters

    NASA Astrophysics Data System (ADS)

    Pabari, Jayesh; Bhalodi, Pinali

    2016-07-01

    Dust devils are known to occur near the Martian surface, mostly during mid-summer in the Southern hemisphere, and they play a vital role in deciding the background dust opacity in the atmosphere. A second source of high-altitude Martian dust could be the secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, which can easily escape the moons. It is expected that the escaping dust forms dust rings within the orbits of the moons and therefore also around Mars. One more possible source of high-altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a kind of feedback mechanism existing between the ring or tori and the sources, the dust rings or tori can sustain over a period of time. Recently, very high altitude dust at about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it is a mystery how dust has reached such high altitudes. Estimating the dust parameters beforehand is necessary to design an instrument for the detection of high-altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate responsible primarily for the formation of the dust ring or tori, the lifetime of dust particles around Mars, the dust number density, as well as the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars and help solve the mystery. Further work is underway.

  5. Parameter Estimation for the Four Parameter Beta Distribution.

    DTIC Science & Technology

    1983-12-01


  6. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  7. Maximum likelihood estimates of polar motion parameters

    NASA Technical Reports Server (NTRS)

    Wilson, Clark R.; Vicente, R. O.

    1990-01-01

    Two estimators developed by Jeffreys (1940, 1968) are described and used in conjunction with polar-motion data to determine the frequency (Fc) and quality factor (Qc) of the Chandler wobble. Data are taken from a monthly polar-motion series, satellite laser-ranging results, and optical astrometry and intercompared for use via interpolation techniques. Maximum likelihood arguments were employed to develop the estimators, and the assumption that polar motion relates to a Gaussian random process is assessed in terms of the accuracies of the estimators. The present results agree with those from Jeffreys' earlier study but are inconsistent with the later estimator; a Monte Carlo evaluation of the estimators confirms that the 1968 method is more accurate. The later estimator method shows good performance because the Fourier coefficients derived from the data have signal/noise levels that are superior to those for an individual datum. The method is shown to be valuable for general spectral-analysis problems in which isolated peaks must be analyzed from noisy data.

  8. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A regression analysis was performed on tabular aerodynamic data to provide a representative aerodynamic model for coefficient estimation and to reduce the storage requirements for the "normal" model used to check out the estimation algorithms. The results of the regression analyses are presented. The computer routines for the filter portion of the estimation algorithm were developed, and the SRB predictive program was brought up on the computer. For the filter program, approximately 54 routines were developed; these were highly subsegmented to facilitate overlaying program segments within the partitioned storage space on the computer.

  9. Noniterative estimation of a nonlinear parameter

    NASA Technical Reports Server (NTRS)

    Bergstroem, A.

    1973-01-01

    An algorithm is described which solves for the parameters X = (x1,x2,...,xm) and p in an approximation problem Ax ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may converge to local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.

  10. A Comparative Study of Distribution System Parameter Estimation Methods

    SciTech Connect

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  11. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and to improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling periods (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperatures (5, 25, and 45 °C) are covered in the experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and is well suited to parameter identification without using open circuit voltage.
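The plain recursive least squares (RLS) loop that the abstract's bias-corrected CRLS algorithm builds on can be sketched as follows. This is a minimal illustration on a hypothetical first-order ARX model with made-up coefficients, not the paper's I-ARX implementation:

```python
import numpy as np

# Hypothetical sketch: plain RLS identifying a first-order ARX model
# y[k] = a*y[k-1] + b*u[k-1] + noise.  The bias-correction step of the
# paper's CRLS algorithm is omitted for brevity.
rng = np.random.default_rng(0)
a_true, b_true, N = 0.85, 0.4, 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

theta = np.zeros(2)            # parameter estimate [a, b]
P = np.eye(2) * 1000.0         # covariance of the estimate
lam = 0.99                     # forgetting factor, tracks slow parameter drift
for k in range(1, N):
    phi = np.array([y[k - 1], u[k - 1]])      # regressor vector
    K = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + K * (y[k] - phi @ theta)  # innovation update
    P = (P - np.outer(K, phi) @ P) / lam      # covariance update

print(theta)  # close to [0.85, 0.4]
```

The forgetting factor lam < 1 discounts old data, which is what lets such a recursive identifier follow parameters that drift with temperature and SoC.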

  12. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include: (1) a Wittrick-Williams based root-solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.

  13. Muscle parameters estimation based on biplanar radiography.

    PubMed

    Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W

    2016-11-01

    The evaluation of muscle and joint forces in vivo is still a challenge. Musculo-skeletal models are used to compute forces based on movement analysis. Most of them are built from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposes an original two-step method to obtain a subject-specific musculo-skeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower-limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shapes of the reference model to the X-ray model. Twelve musculo-skeletal models were reconstructed and compared to their references. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, the method provided an accurate estimation of the muscle lines of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arms were also well estimated, with SD lower than 15% for most muscles, which was significantly better than the scaled generic model. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography.

  14. Fast estimation of space-robots inertia parameters: A modular mathematical formulation

    NASA Astrophysics Data System (ADS)

    Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2016-10-01

    This work proposes a new technique that considerably reduces the time and improves the precision needed to identify the "inertia parameters" (IPs) of a typical autonomous space-robot (ASR). Operations might include capturing an unknown target space-object (TSO), active space-debris removal, or automated in-orbit assembly. In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process can play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations and their associated IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process, providing a model that is linear in the mass and inertia terms. The new formulation is, therefore, well suited to simultaneous estimation using recursive algorithms such as RLS. Further enhancements would be needed for cases in which the effect of the center-of-mass location becomes important. Extensive case studies reveal that estimation time is drastically reduced, which in turn paves the way to better results.

  15. Space Shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The fifth monthly progress report includes corrections and additions to the previously submitted reports. The SRB propellant thickness is added as a state variable, along with the associated partial derivatives. During this reporting period, preliminary results of the estimation program checkout were presented to NASA technical personnel.

  16. Bias in parameter estimation of form errors

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min

    2014-09-01

    The surface form qualities of precision components are critical to their functionalities. In precision instruments algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces are over-weighted, making the fitted results biased and unstable. In this paper orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is analytically calculated and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
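The bias the authors describe can be reproduced in a few lines: for a point displaced a fixed distance d along the surface normal of y = x^2, the z-only (vertical) residual grows with the local slope, while the orthogonal residual stays at d by construction. The parabola below is an illustrative stand-in for a curved surface profile, not the paper's test surfaces:

```python
import numpy as np

# Displace points on y = x^2 by a fixed distance d along the unit normal
# (-slope, 1)/|(-slope, 1)|, then measure the residual vertically (z only).
d = 0.01                              # true normal-direction form deviation
x0 = np.array([0.0, 0.5, 1.0, 2.0])   # flat to steep sample locations
slope = 2.0 * x0
nx, ny = -slope, np.ones_like(x0)
norm = np.hypot(nx, ny)
px = x0 + d * nx / norm               # displaced point, x coordinate
py = x0**2 + d * ny / norm            # displaced point, y coordinate
vertical = np.abs(py - px**2)         # residual measured along z only
print(vertical / d)                   # grows like sqrt(1 + slope^2)
```

At the flat point the ratio is exactly 1; at slope 4 it is about 4.1, which is precisely the over-weighting of steep regions that orthogonal assessment removes.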

  17. Online parameter estimation for surgical needle steering model.

    PubMed

    Yan, Kai Guo; Podder, Tarun; Xiao, Di; Yu, Yan; Liu, Tien-I; Ling, Keck Voon; Ng, Wan Sing

    2006-01-01

    Estimation of system parameters from noisy input/output data is a major field in control and signal processing, and many different estimation methods have been proposed in recent years. Among these, extended Kalman filtering (EKF) is very useful for estimating the parameters of nonlinear and time-varying systems, and it can remove the effects of noise to achieve significantly improved results. Our task here is to estimate the coefficients in a spring-beam-damper needle steering model. This kind of spring-damper model has been adopted by many researchers in studying tissue deformation; one difficulty in using such a model is estimating the spring and damper coefficients. We propose an online parameter estimator using the EKF to solve this problem and present the detailed design in this paper. Computer simulations and physical experiments show that the estimator identifies the parameters accurately, converges quickly, and improves the efficacy of the model.
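The joint state-and-parameter EKF idea can be sketched on a hypothetical scalar system; the paper's spring-beam-damper needle model is more elaborate. The unknown damping coefficient is appended to the state vector and estimated alongside it:

```python
import numpy as np

# Hedged scalar sketch of EKF parameter estimation: the unknown damping
# coefficient c in x' = -c*x + u(t) is appended to the state [x, c] and
# estimated jointly.  All numbers are illustrative.
rng = np.random.default_rng(1)
dt, c_true, N = 0.01, 2.0, 2000
x_true = 0.0
xhat = np.array([0.0, 0.5])            # augmented state [x, c], poor initial c
P = np.diag([0.1, 4.0])                # initial estimate covariance
Q = np.diag([1e-8, 1e-8])              # tiny process noise on both states
R = 1e-4                               # measurement noise variance
for k in range(N):
    u = np.sin(dt * k)                 # known, persistently exciting input
    x_true += dt * (-c_true * x_true + u)
    z = x_true + np.sqrt(R) * rng.standard_normal()
    x, c = xhat                        # predict with the nonlinear model
    xhat = np.array([x + dt * (-c * x + u), c])
    F = np.array([[1.0 - dt * c, -dt * x], [0.0, 1.0]])   # Jacobian
    P = F @ P @ F.T + Q
    H = np.array([1.0, 0.0])           # we measure x only
    K = P @ H / (H @ P @ H + R)        # Kalman gain
    xhat = xhat + K * (z - xhat[0])
    P = P - np.outer(K, H) @ P

print(xhat[1])  # damping estimate, near 2.0
```

The sinusoidal input keeps the state excited; without persistent excitation the parameter component of the filter stops learning once the state decays.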

  18. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    NASA Astrophysics Data System (ADS)

    Phanomchoeng, Gridsada

    presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.

  19. Recursion, Language, and Starlings

    ERIC Educational Resources Information Center

    Corballis, Michael C.

    2007-01-01

    It has been claimed that recursion is one of the properties that distinguishes human language from any other form of animal communication. Contrary to this claim, a recent study purports to demonstrate center-embedded recursion in starlings. I show that the performance of the birds in this study can be explained by a counting strategy, without any…

  20. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  1. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  2. Attitudinal Data: Dimensionality and Start Values for Estimating Item Parameters.

    ERIC Educational Resources Information Center

    Nandakumar, Ratna; Hotchkiss, Larry; Roberts, James S.

    The purpose of this study was to assess the dimensionality of attitudinal data arising from unfolding models for discrete data and to compute rough estimates of item and individual parameters for use as starting values in other estimation procedures. One- and two-dimensional simulated test data were analyzed in this study. Results of limited…

  3. Estimating a weighted average of stratum-specific parameters.

    PubMed

    Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul

    2008-10-30

    This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
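The contrast between the two classical estimators discussed above, the design-weighted sum of stratum means versus precision (inverse-variance) weighting, can be sketched with simulated strata. The weights, means, and sample sizes below are illustrative, not the Medicaid survey's:

```python
import numpy as np

# Design weighting honours the known population stratum weights and stays
# unbiased under heterogeneous stratum means; precision weighting follows
# sample sizes instead, and is biased here toward the large, low-mean stratum.
rng = np.random.default_rng(3)
weights = np.array([0.5, 0.3, 0.2])   # known population stratum weights
mu = np.array([3.0, 5.0, 10.0])       # heterogeneous true stratum means
n = np.array([200, 50, 10])           # unequal sample sizes
samples = [m + rng.standard_normal(k) for m, k in zip(mu, n)]
ybar = np.array([s.mean() for s in samples])
var = np.array([s.var(ddof=1) / k for s, k in zip(samples, n)])

design = weights @ ybar                           # unbiased: near 5.0
precision = (ybar / var).sum() / (1 / var).sum()  # pulled toward stratum 1
print(design, precision)
```

The adaptive estimators in the article interpolate between these two extremes, with the tuning parameter chosen to minimize a design-based MSE estimate.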

  4. Equating Parameter Estimates from the Generalized Graded Unfolding Model.

    ERIC Educational Resources Information Center

    Roberts, James S.

    Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…

  5. SFM signal parameter estimation based on an enhanced DSFMT algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Li, Xingguang; Chen, Dianren

    2017-01-01

    A SFM signal parameter estimation method based on the enhanced DSFMT (EDSFMT) algorithm is proposed in this paper, and the derivation of the transformation formulas is provided. Analysis and simulations were performed, demonstrating the method's capability for parameter estimation of arbitrary multi-component SFM signals.

  6. alphaPDE: A new multivariate technique for parameter estimation

    SciTech Connect

    Knuteson, B.; Miettinen, H.; Holmstrom, L.

    2002-06-01

    We present alphaPDE, a new multivariate analysis technique for parameter estimation. The method is based on a direct construction of joint probability densities of known variables and the parameters to be estimated. We show how posterior densities and best-value estimates are then obtained for the parameters of interest by a straightforward manipulation of these densities. The method is essentially non-parametric and allows for an intuitive graphical interpretation. We illustrate the method by outlining how it can be used to estimate the mass of the top quark, and we explain how the method is applied to an ensemble of events containing background.
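A minimal sketch of the density-based idea (not the alphaPDE implementation itself): simulate events over a grid of parameter values, build the joint density of observable and parameter with a 2-D histogram, and read the posterior off the slice matching the observed value. All numbers below are hypothetical:

```python
import numpy as np

# Joint-density parameter estimation in miniature: a flat prior over a grid
# of "mass" hypotheses, a Gaussian-smeared observable per event, and a
# histogram standing in for the kernel density estimate.
rng = np.random.default_rng(4)
thetas = np.linspace(160.0, 190.0, 31)        # hypothetical parameter grid
obs, par = [], []
for th in thetas:                              # flat prior over the grid
    obs.append(rng.normal(th, 10.0, 2000))     # smeared observable per event
    par.append(np.full(2000, th))
obs, par = np.concatenate(obs), np.concatenate(par)

H, xe, ye = np.histogram2d(obs, par, bins=[60, 31])
x_seen = 175.0                                 # the "measured" value
col = H[np.searchsorted(xe, x_seen) - 1]       # joint density at x near 175
posterior = col / col.sum()                    # normalise over the parameter
best = ye[:-1][np.argmax(posterior)]
print(best)  # near 175
```

A real analysis would replace the histogram with a smooth kernel density estimate and fold in background densities, as the abstract describes.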

  7. Estimating Groundwater Flow Parameters Using Response Surface Methodology

    DTIC Science & Technology

    1994-04-01

    Estimating Groundwater Flow Parameters Using Response Surface Methodology. Thesis by Leo C. Adams, presented to the Faculty of the Graduate School of Engineering of the Air Force Institute of Technology; advisor Col Paul F. Auclair, Ph.D. (AD-A280 630; the remainder of the scanned front matter is illegible.)

  8. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  9. Research on the estimation method for Earth rotation parameters

    NASA Astrophysics Data System (ADS)

    Yao, Yibin

    2008-12-01

    In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different approaches to estimating ERP are considered: a parameter transformation method and a direct adjustment method with restrictive conditions. The daily SINEX files produced from IGS GPS tracking stations can be used to estimate ERP, and the parameter transformation method simplifies the process. The results indicate that a systematic error exists in the ERP estimated from GPS observations alone. Why this distinct systematic error exists in the ERP derived from daily GPS SINEX files, whether it affects the estimation of other parameters, and what its magnitude is all require further study.

  10. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

    The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is governed by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily proportion of contacts leading to transmission, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the correlation coefficient, and the numerical results show a positive correlation between them.
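Because the contact rate enters such models linearly once the dynamics are discretised, the regularized least squares step can be sketched on a simple SI-type model. This is an illustrative stand-in, not the paper's SIR/finite-element pipeline, and all numbers are made up:

```python
import numpy as np

# The contact rate beta of the discretised SI model
#   I[k+1] = I[k] + dt*beta*I[k]*(1 - I[k])
# is linear in beta, so it can be estimated by least squares with a
# Tikhonov penalty pulling toward a prior guess beta0.
rng = np.random.default_rng(5)
beta_true, dt, N = 0.3, 0.1, 400
I = np.empty(N); I[0] = 0.01
for k in range(N - 1):
    I[k + 1] = I[k] + dt * beta_true * I[k] * (1.0 - I[k])
I_obs = I + 1e-4 * rng.standard_normal(N)     # noisy incidence data

phi = dt * I_obs[:-1] * (1.0 - I_obs[:-1])    # regressor, linear in beta
y = np.diff(I_obs)                            # observed increments
mu, beta0 = 1e-3, 0.5                         # regularisation weight and prior
beta_hat = (phi @ y + mu * beta0) / (phi @ phi + mu)
print(beta_hat)  # close to 0.3
```

The penalty term stabilises the estimate when the data are sparse or noisy, at the cost of a small bias toward the prior beta0.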

  11. Recursive Deadbeat Controller Design

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh Q.

    1997-01-01

    This paper presents a recursive algorithm for deadbeat predictive controller design. The method combines the concepts of system identification and deadbeat controller design. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The resulting formulation simultaneously satisfies the system identification and deadbeat controller design requirements. As soon as the coefficient matrices satisfying the output prediction equation are identified, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively, just like typical recursive system identification techniques.

  12. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.

  13. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  14. Astrophysical Parameter Estimation for Gaia using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Tiede, C.; Smith, K.; Bailer-Jones, C. A. L.

    2008-08-01

    Gaia is the next astrometric mission from ESA and will measure objects up to a magnitude of about G=20. Depending on the kind of object (which will be determined automatically because Gaia does not hold an input catalogue), the specific astrophysical parameters will be estimated. The General Stellar Parametrizer (GSP-phot) estimates the astrophysical parameters based on low-dispersion spectra and parallax information for single stars. We show the results of machine learning algorithms trained on simulated data and further developments of the core algorithms which improve the accuracy of the estimated astrophysical parameters.

  15. A Joint Analytic Method for Estimating Aquitard Hydraulic Parameters.

    PubMed

    Zhuang, Chao; Zhou, Zhifang; Illman, Walter A

    2017-01-10

    The vertical hydraulic conductivity (Kv), elastic (Sske), and inelastic (Sskv) skeletal specific storage of aquitards are three of the most critical parameters in land subsidence investigations. Two new analytic methods are proposed to estimate the three parameters. The first analytic method is based on a new concept of delay time ratio for estimating Kv and Sske of an aquitard subject to long-term stable, cyclic hydraulic head changes at boundaries. The second analytic method estimates the Sskv of the aquitard subject to linearly declining hydraulic heads at boundaries. Both methods are based on analytical solutions for flow within the aquitard, and they are jointly employed to obtain the three parameter estimates. This joint analytic method is applied to estimate the Kv, Sske, and Sskv of a 34.54-m thick aquitard for which the deformation progress has been recorded by an extensometer located in Shanghai, China. The estimated results are then calibrated by PEST (Doherty 2005), a parameter estimation code coupled with a one-dimensional aquitard-drainage model. The Kv and Sske estimated by the joint analytic method are quite close to those estimated via inverse modeling and performed much better in simulating elastic deformation than the estimates obtained from the stress-strain diagram method of Ye and Xue (2005). The newly proposed joint analytic method is an effective tool that provides reasonable initial values for calibrating land subsidence models.

  16. A variational approach to parameter estimation in ordinary differential equations

    PubMed Central

    2012-01-01

    Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields. PMID:22892133

  17. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  18. Kalman filter data assimilation: Targeting observations and parameter estimation

    SciTech Connect

    Bellsky, Thomas Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  19. On a variational approach to some parameter estimation problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.

    1985-01-01

    Examples in which a variational setting can provide a convenient framework for convergence and stability arguments in parameter estimation problems are considered; these include 1-D seismic inversion, large flexible structures, bioturbation, and nonlinear population dispersal. One aspect of the problem considered is establishing convergence and stability via a variational approach to least-squares formulations of parameter estimation problems for partial differential equations.

  20. Numerical Testing of Parameterization Schemes for Solving Parameter Estimation Problems

    DTIC Science & Technology

    2008-12-01

    Numerical Testing of Parameterization Schemes for Solving Parameter Estimation Problems, by L. Velázquez, M. Argáez, and C. Quintero. This paper presents the numerical performance of three parameterization approaches (SVD, wavelets, and a combined wavelet-SVD scheme) for solving automated parameter estimation problems based on SPSA, as described in previous reports; high performance computing (HPC) is also discussed. (The remainder of the scanned text is garbled.)

  1. Distinctive signatures of recursion

    PubMed Central

    Martins, Maurício Dias

    2012-01-01

    Although recursion has been hypothesized to be a necessary capacity for the evolution of language, the multiplicity of definitions being used has undermined the broader interpretation of empirical results. I propose that only a definition focused on representational abilities allows the prediction of specific behavioural traits that enable us to distinguish recursion from non-recursive iteration and from hierarchical embedding: only subjects able to represent recursion, i.e. to represent different hierarchical dependencies (related by parenthood) with the same set of rules, are able to generalize and produce new levels of embedding beyond those specified a priori (in the algorithm or in the input). The ability to use such representations may be advantageous in several domains: action sequencing, problem-solving, spatial navigation, social navigation and for the emergence of conventionalized communication systems. The ability to represent contiguous hierarchical levels with the same rules may lead subjects to expect unknown levels and constituents to behave similarly, and this prior knowledge may bias learning positively. Finally, a new paradigm to test for recursion is presented. Preliminary results suggest that the ability to represent recursion in the spatial domain recruits both visual and verbal resources. Implications regarding language evolution are discussed. PMID:22688640

  2. Identification of Neurofuzzy models using GTLS parameter estimation.

    PubMed

    Jakubek, Stefan; Hametner, Christoph

    2009-10-01

    In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application.
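    The abstract does not spell out the estimator, but the core idea of total least squares can be sketched via the SVD. Note this is plain TLS assuming i.i.d. noise on all channels; GTLS additionally whitens the data by a known noise covariance before this step.

```python
import numpy as np

def tls(A, b):
    """Basic total least squares via SVD of the augmented matrix [A b].
    Consistent when both the regressors and the output carry noise
    (errors-in-variables), unlike ordinary least squares."""
    n = A.shape[1]
    Z = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]                 # right singular vector of the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A_true = rng.normal(size=(200, 2))
x_true = np.array([1.5, -0.7])
b = A_true @ x_true
# noise on both input channels and the output
A_noisy = A_true + 0.01 * rng.normal(size=A_true.shape)
b_noisy = b + 0.01 * rng.normal(size=b.shape)
print(tls(A_noisy, b_noisy))   # close to [1.5, -0.7]
```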

  3. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

    Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic model, the Lorenz equations (Lorenz 1963). The results indicate that the new method provides fast and effective parameter estimation regardless of whether some or all parameters of the Lorenz equations are unknown, and that it has a good convergence rate. Since noise is inevitable in observational data, the influence of observational noise on the performance of the presented method was also investigated. Strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise; however, the precision of the parameter estimates remains acceptable for the weaker noise levels (SNR of 20 or 30 dB), indicating that the presented method has some robustness to noise.
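    As a toy illustration of evolutionary parameter estimation (a minimal (1+1) evolution strategy with mutation and selection, not the authors' full EM algorithm), the Lorenz parameter sigma can be recovered from a short observed trajectory:

```python
import numpy as np

def lorenz_traj(sigma, rho=28.0, beta=8.0 / 3.0, dt=0.01, steps=100):
    """Euler-integrated Lorenz trajectory from a fixed initial state."""
    x, y, z = 1.0, 1.0, 1.0
    out = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append((x, y, z))
    return np.array(out)

observed = lorenz_traj(sigma=10.0)        # "data" with known true sigma

def fitness(s):                           # mean squared trajectory error
    return np.mean((lorenz_traj(s) - observed) ** 2)

rng = np.random.default_rng(1)
sigma_hat, step = 7.0, 1.0                # initial guess and mutation size
for _ in range(300):                      # (1+1)-ES: mutate, keep if better
    cand = sigma_hat + step * rng.normal()
    if fitness(cand) < fitness(sigma_hat):
        sigma_hat = cand
    step *= 0.99                          # anneal the mutation strength
print(sigma_hat)                          # close to the true value 10.0
```

    A short fitting horizon keeps the error surface smooth despite the chaotic dynamics; over long horizons trajectory divergence makes the fitness landscape much harder.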

  4. A Simple Technique for Estimating Latent Trait Mental Test Parameters

    ERIC Educational Resources Information Center

    Jensema, Carl

    1976-01-01

    A simple and economical method for estimating initial parameter values for the normal ogive or logistic latent trait mental test model is outlined. The accuracy of the method in comparison with maximum likelihood estimation is investigated through the use of Monte-Carlo data. (Author)

  5. A Comparison of Approximate Interval Estimators for the Bernoulli Parameter

    DTIC Science & Technology

    1993-12-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate...is appropriate for certain sample sizes and point estimators. Confidence interval, Binomial distribution, Bernoulli distribution, Poisson distribution.

  6. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
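    The two approximations can be sketched directly. The chi-square form of the Poisson-based interval below is a standard construction assumed here, not taken from the paper.

```python
from scipy.stats import norm, chi2

def normal_ci(x, n, conf=0.95):
    """Wald interval from the normal approximation to the binomial."""
    p = x / n
    z = norm.ppf(1 - (1 - conf) / 2)
    half = z * (p * (1 - p) / n) ** 0.5
    return p - half, p + half

def poisson_ci(x, n, conf=0.95):
    """Interval from the Poisson approximation, via the chi-square
    quantile form; appropriate when p is small and n is large."""
    a = 1 - conf
    lo = 0.5 * chi2.ppf(a / 2, 2 * x) if x > 0 else 0.0
    hi = 0.5 * chi2.ppf(1 - a / 2, 2 * x + 2)
    return lo / n, hi / n

print(normal_ci(5, 100))   # about (0.007, 0.093)
print(poisson_ci(5, 100))  # about (0.016, 0.117)
```

    The normal interval is symmetric about the point estimate and can extend below zero for small x; the Poisson-based interval is asymmetric and stays nonnegative, which is one reason the appropriate choice depends on the sample size and point estimate.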

  7. Parameter estimation of gravitational wave compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Haster, Carl-Johan; LIGO Scientific Collaboration Collaboration

    2017-01-01

    The first detections of gravitational waves from coalescing binary black holes have allowed unprecedented inference on the astrophysical parameters of such binaries. Given recent updates in detector capabilities, gravitational wave model templates and data analysis techniques, in this talk I will describe the prospects of parameter estimation of compact binary coalescences during the second observation run of the LIGO-Virgo collaboration.

  8. Estimation of the input parameters in the Feller neuronal model

    NASA Astrophysics Data System (ADS)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  9. Analyzing and constraining signaling networks: parameter estimation for the user.

    PubMed

    Geier, Florian; Fengos, Georgios; Felizzi, Federico; Iber, Dagmar

    2012-01-01

    The behavior of most dynamical models not only depends on the wiring but also on the kind and strength of interactions which are reflected in the parameter values of the model. The predictive value of mathematical models therefore critically hinges on the quality of the parameter estimates. Constraining a dynamical model by an appropriate parameterization follows a 3-step process. In an initial step, it is important to evaluate the sensitivity of the parameters of the model with respect to the model output of interest. This analysis points at the identifiability of model parameters and can guide the design of experiments. In the second step, the actual fitting needs to be carried out. This step requires special care as, on the one hand, noisy as well as partial observations can corrupt the identification of system parameters. On the other hand, the solution of the dynamical system usually depends in a highly nonlinear fashion on its parameters and, as a consequence, parameter estimation procedures get easily trapped in local optima. Therefore any useful parameter estimation procedure has to be robust and efficient with respect to both challenges. In the final step, it is important to assess the validity of the optimized model. A number of reviews have been published on the subject. A good, nontechnical overview is provided by Jaqaman and Danuser (Nat Rev Mol Cell Biol 7(11):813-819, 2006) and a classical introduction, focussing on the algorithmic side, is given in Press (Numerical recipes: The art of scientific computing, Cambridge University Press, 3rd edn., 2007, Chapters 10 and 15). We will focus on the practical issues related to parameter estimation and use a model of the TGFβ-signaling pathway as an educative example. Corresponding parameter estimation software and models based on MATLAB code can be downloaded from the authors' web page ( http://www.bsse.ethz.ch/cobi ).
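    The local-optima problem in the fitting step is commonly mitigated by multistart optimization: run a local solver from several initial guesses and keep the best final cost. A minimal sketch on a hypothetical toy model (not the TGFβ pathway):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model with local optima in b: y(t) = a * sin(b * t).
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
y_obs = 2.0 * np.sin(1.3 * t) + 0.05 * rng.normal(size=t.size)

def residuals(theta):
    a, b = theta
    return a * np.sin(b * t) - y_obs

# Multistart: a single local search can get trapped in a local
# optimum of b, so restart from a grid of initial frequencies.
best = None
for b0 in np.linspace(0.2, 3.0, 8):
    fit = least_squares(residuals, x0=[1.0, b0])
    if best is None or fit.cost < best.cost:
        best = fit
print(best.x)   # magnitudes close to the true parameters [2.0, 1.3]
```

    Note the model has a sign symmetry ((a, b) and (-a, -b) fit equally well), a simple example of the identifiability issues the sensitivity analysis in step 1 is meant to expose.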

  10. Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine

    2002-01-01

    The design of a linear parameter varying (LPV) controller for an aircraft at actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMXT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.

  11. Sequential ensemble-based optimal design for parameter estimation

    NASA Astrophysics Data System (ADS)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao; Wu, Laosheng

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
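    A single EnKF analysis step for a scalar parameter can be sketched as follows. This is a generic stochastic EnKF update with a trivial forward model, assumed for illustration; it is not the authors' SEOD design procedure.

```python
import numpy as np

def enkf_update(params, predictions, obs, obs_var, rng):
    """One EnKF analysis step for a parameter ensemble: shift each
    member toward a perturbed observation using the Kalman gain
    built from ensemble covariances."""
    cov_py = np.cov(params, predictions)[0, 1]   # Cov(theta, y)
    var_y = np.var(predictions, ddof=1)
    gain = cov_py / (var_y + obs_var)
    perturbed = obs + rng.normal(0, obs_var ** 0.5, size=params.size)
    return params + gain * (perturbed - predictions)

rng = np.random.default_rng(3)
theta_true = 2.5
ens = rng.normal(1.0, 1.0, size=200)          # prior parameter ensemble
for x in [0.5, 1.0, 1.5, 2.0]:                # sequential sampling locations
    obs = theta_true * x + rng.normal(0, 0.1)
    preds = ens * x                            # forward model y = theta * x
    ens = enkf_update(ens, preds, obs, 0.1 ** 2, rng)
print(ens.mean())                              # close to 2.5
```

    In the paper's setting the open design question is which measurement (here, which x) to take next; the information metrics (SD, DFS, RE) score candidate designs by how much they are expected to sharpen this ensemble.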

  12. Iterative methods for distributed parameter estimation in parabolic PDE

    SciTech Connect

    Vogel, C.R.; Wade, J.G.

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problem is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the `forward problem` is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  13. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  14. Simultaneous parameter and state estimation of shear buildings

    NASA Astrophysics Data System (ADS)

    Concha, Antonio; Alvarez-Icaza, Luis; Garrido, Rubén

    2016-03-01

    This paper proposes an adaptive observer that simultaneously estimates the damping/mass and stiffness/mass ratios, and the state of a seismically excited building. The adaptive observer uses only acceleration measurements of the ground and floors for both parameter and state estimation; it identifies all the parameter ratios, velocities and displacements of the structure if all the floors are instrumented; and it also estimates the state and the damping/mass and stiffness/mass ratios of a reduced model of the building if only some floors are equipped with accelerometers. This observer does not resort to any particular canonical form and employs the Least Squares (LS) algorithm and a Luenberger state estimator. The LS method is combined with a smooth parameter projection technique that provides only positive estimates, which are employed by the state estimator. Boundedness of the estimate produced by the LS algorithm does not depend on the boundedness of the state estimates. Moreover, the LS method uses a parametrization based on Linear Integral Filters that eliminate offsets in the acceleration measurements in finite time and attenuate high-frequency measurement noise. Experimental results obtained using a reduced-scale five-story structure confirm the effectiveness of the proposed adaptive observer.
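    The paper's observer combines least squares with filtering and a projection step; the recursive least-squares core itself is standard and can be sketched generically. The regressors and parameter values below are illustrative stand-ins, not the paper's structural model.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update: a new regressor phi and
    scalar measurement y refine the estimate theta without
    re-solving the whole batch problem. lam is a forgetting factor."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, Pphi)) / lam
    return theta, P

rng = np.random.default_rng(4)
theta_true = np.array([0.8, -0.5])             # stand-ins for the parameter ratios
theta = np.zeros(2)
P = 1e3 * np.eye(2)                             # large P encodes a weak prior
for _ in range(500):
    phi = rng.normal(size=2)                    # regressor vector
    y = phi @ theta_true + 0.01 * rng.normal()  # noisy scalar measurement
    theta, P = rls_step(theta, P, phi, y)
print(theta)                                    # close to [0.8, -0.5]
```

    With lam < 1, older data is discounted, which is what allows tracking of slowly time-varying parameters; the projection step of the paper would additionally clip theta to the positive orthant after each update.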

  15. Simulation and parameter estimation of dynamics of synaptic depression.

    PubMed

    Aristizabal, F; Glavinovic, M I

    2004-01-01

    Synaptic release was simulated using a Simulink sequential storage model with three vesicular pools. Modeling was modular and easily extendable to the systems with greater number of vesicular pools, parallel input, or time-varying parameters. Given an input (short or long tetanic trains, patterned or random stimulation) and the storage model, the vesicular release, the replenishment of various vesicular pools, and the vesicular content of all pools could be simulated for the time-invariant and time-varying storage systems. From the input stimuli and either a noiseless or a noisy output, the parameters of such storage systems could also be estimated using the optimization technique that minimizes in the least square sense the error between the observed release and the predicted release. All parameters of the storage model could be evaluated with sufficiently long input-output data pairs. Not surprisingly, the parameters characterizing the processes near the release locus, such as the fractional release and the size of the immediately available pool and its coupling to the small store, as well as the state variables associated with the immediately available pool, such as its vesicular content and replenishment, could be determined with fewer stimuli. The possibility of estimating parameters with random inputs extends the applicability of the method to in vivo synapses with the physiological inputs. The parameter estimation was also possible under the time-variant, but slowly changing, conditions as well as for open systems that are part of larger vesicular storage systems but whose parameters can either not be reliably determined or are of no interest. The quality of parameter estimation was monitored continuously by comparing the observed and predicted output and/or estimated parameters with the true values. Finally, the method was tested experimentally using the rat phrenic-diaphragm neuromuscular junction.

  16. Quantiles, Parametric-Select Density Estimations, and Bi-Information Parameter Estimators.

    DTIC Science & Technology

    1982-06-01

    A non-parametric estimation method forms estimators which are not based on parametric models. Important examples of non-parametric estimators of a...raw descriptive functions F, f, Q, q, fQ. One distinguishes between parametric and non-parametric methods of estimating smooth functions. A parametric estimation method: (1) assumes a family Fθ, fθ, Qθ, qθ, fQθ of functions, called parametric models, which are indexed by a parameter θ

  17. Evaluation of the Covariance Matrix of Estimated Resonance Parameters

    NASA Astrophysics Data System (ADS)

    Becker, B.; Capote, R.; Kopecky, S.; Massimi, C.; Schillebeeckx, P.; Sirakov, I.; Volev, K.

    2014-04-01

    In the resonance region nuclear resonance parameters are mostly obtained by a least square adjustment of a model to experimental data. Derived parameters can be mutually correlated through the adjustment procedure as well as through common experimental or model uncertainties. In this contribution we investigate four different methods to propagate the additional covariance caused by experimental or model uncertainties into the evaluation of the covariance matrix of the estimated parameters: (1) including the additional covariance into the experimental covariance matrix based on calculated or theoretical estimates of the data; (2) including the uncertainty affected parameter in the adjustment procedure; (3) evaluation of the full covariance matrix by Monte Carlo sampling of the common parameter; and (4) retroactively including the additional covariance by using the marginalization procedure of Habert et al.

  18. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  19. Dynamic simulation and parameter estimation in river streams.

    PubMed

    Karadurmus, E; Berber, R

    2004-04-01

    Predictions and quality management issues for environmental protection in river basins rely on water-quality models. The key step in model calibration and verification is obtaining the right values of the model parameters. Current practice in model calibration is such that the reaction coefficients are adjusted by trial-and-error until the predicted values and measured data are within a pre-selected margin of error, and this may be a very time consuming task. This study is directed towards developing a parameter estimation strategy coupled with the simulation of water quality models so that the heavy burden of finding reaction rate coefficients is overcome. Dynamic mass balances for different forms of nitrogen and phosphorus, biological oxygen demand, dissolved oxygen, coliforms, nonconservative constituent and algae were written for a single computational element. The model parameters conforming to those in QUAL2E water quality model were estimated by a nonlinear multi-response parameter estimation strategy coupled with a stiff integrator. Yesilirmak river basin around the city of Amasya in Turkey served as the prototype system for the model development. Samples were collected simultaneously from two stations, and concentrations of many water-quality constituents were determined either on-site or in laboratory. This dynamic data was then used for numerical parameter estimation during computer simulation. When the model was simulated with the estimated parameters, it was seen that the model was quite able to predict the dynamics of major water quality constituents. It is concluded that the proposed method shows promise for automatically generating reliable estimates of model parameters.

  20. Recursive SAR imaging

    NASA Astrophysics Data System (ADS)

    Moses, Randolph L.; Ash, Joshua N.

    2008-04-01

    We investigate a recursive procedure for synthetic aperture imaging. We consider a concept in which a SAR system persistently interrogates a scene, for example as it flies along or around that scene. In traditional SAR imaging, the radar measurements are processed in blocks, by partitioning the data into a set of non-overlapping or overlapping azimuth angles, then processing each block. We consider a recursive update approach, in which the SAR image is continually updated, as a linear combination of a small number of previous images and a term containing the current radar measurement. We investigate the crossrange sidelobes realized by such an imaging approach. We show that a first-order autoregression of the image gives crossrange sidelobes similar to a rectangular azimuth window, while a third-order autoregression gives sidelobes comparable to those obtained from widely-used windows in block-processing image formation. The computational and memory requirements of the recursive imaging approach are modest, on the order of M·N², where M is the recursion order (typically <= 3) and N² is the image size. We compare images obtained from the recursive and block processing techniques, both for a synthetic scene and for X-band SAR measurements from the Gotcha data set.
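    The first-order recursive update can be examined per pixel. The scalar sketch below (one pixel's recursion, an illustrative reduction of the scheme) shows that the effective azimuth weighting of a first-order autoregression is exponential rather than the rectangular window of block processing:

```python
import numpy as np

a = 0.9                          # first-order autoregression coefficient
n = 50
impulse = np.zeros(n)
impulse[0] = 1.0                 # a single pulse contribution

y = 0.0
response = []
for x in impulse:                # per-pixel recursion: y_k = a*y_{k-1} + (1-a)*x_k
    y = a * y + (1 - a) * x
    response.append(y)
response = np.array(response)

# The impulse response, i.e. the effective azimuth window, is
# geometric: (1 - a) * a**k. Block processing would weight the
# last K pulses equally (a rectangular window) instead.
expected = (1 - a) * a ** np.arange(n)
print(np.max(np.abs(response - expected)))   # ~0.0
```

    Applying the same scalar recursion to every pixel of an N-by-N image gives the stated M·N² cost for recursion order M = 1.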

  1. Analysis of the Second Model Parameter Estimation Experiment Workshop Results

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.

    2002-05-01

    The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for the a priori parameter estimation for land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorologic time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goal/objectives and science strategy. Results from our participation in developing and testing of the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results will include model simulations using both a priori parameters and calibrated parameters for 12 basins selected for the Tucson MOPEX Workshop.

  2. Evaluating parasite densities and estimation of parameters in transmission systems.

    PubMed

    Heinzmann, D; Torgerson, P R

    2008-09-01

    Mathematical modelling of parasite transmission systems can provide useful information about host-parasite interactions and biology and parasite population dynamics. In addition, good predictive models may assist in designing control programmes to reduce the burden of human and animal disease. Model building is only the first part of the process. These models then need to be confronted with data to obtain parameter estimates, and the accuracy of these estimates has to be evaluated. Estimation of parasite densities is central to this. Parasite density estimates can include the proportion of hosts infected with parasites (prevalence) or estimates of the parasite biomass within the host population (abundance or intensity estimates). Parasite density estimation is often complicated by highly aggregated distributions of parasites within the hosts. This causes additional challenges when calculating transmission parameters. Using Echinococcus spp. as a model organism, this manuscript gives a brief overview of the types of descriptors of parasite densities, how to estimate them and the use of these estimates in a transmission model.

  3. Optimal state estimation for networked systems with random parameter matrices, correlated noises and delayed measurements

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2015-02-01

    In this paper, the optimal least-squares state estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems with state transition and measurement random parameter matrices and correlated noises. It is assumed that at any sampling time, as a consequence of possible failures during the transmission process, one-step delays with different delay characteristics may occur randomly in the received measurements. The random delay phenomenon is modelled by using a different sequence of Bernoulli random variables in each sensor. The process noise and all the sensor measurement noises are one-step autocorrelated and different sensor noises are one-step cross-correlated. Also, the process noise and each sensor measurement noise are two-step cross-correlated. Based on the proposed model and using an innovation approach, the optimal linear filter is designed by a recursive algorithm which is very simple computationally and suitable for online applications. A numerical simulation is exploited to illustrate the feasibility of the proposed filtering algorithm.

  4. Rapid estimation of drifting parameters in continuously measured quantum systems

    NASA Astrophysics Data System (ADS)

    Cortez, Luis; Chantasri, Areeya; García-Pintos, Luis Pedro; Dressel, Justin; Jordan, Andrew N.

    2017-01-01

    We investigate the determination of a Hamiltonian parameter in a quantum system undergoing continuous measurement. We demonstrate a computationally rapid method to estimate an unknown and possibly time-dependent parameter, where we maximize the likelihood of the observed stochastic readout. By dealing directly with the raw measurement record rather than the quantum-state trajectories, the estimation can be performed while the data are being acquired, permitting continuous tracking of the parameter during slow drifts in real time. Furthermore, we incorporate realistic nonidealities, such as decoherence processes and measurement inefficiency. As an example, we focus on estimating the value of the Rabi frequency of a continuously measured qubit and compare maximum likelihood estimation to a simpler fast Fourier transform. Using this example, we discuss how the quality of the estimation depends on both the strength and the duration of the measurement; we also discuss the trade-off between the accuracy of the estimate and the sensitivity to drift as the estimation duration is varied.
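    The simpler FFT-based estimate mentioned in the abstract amounts to picking the peak of the record's spectrum. A minimal sketch on a synthetic noisy readout (an illustrative stand-in, not the qubit measurement model):

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n, f_true = 1e-3, 1000, 25.0             # 1 kHz sampling, 1 s record
t = dt * np.arange(n)
# Noisy sinusoidal stand-in for a continuous measurement record
signal = np.sin(2 * np.pi * f_true * t) + 0.5 * rng.normal(size=n)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, dt)
f_hat = freqs[np.argmax(spectrum[1:]) + 1]   # peak bin, skipping DC
print(f_hat)                                 # close to 25.0 Hz
```

    The FFT resolution is fixed at 1/(n·dt), whereas maximum likelihood estimation on the raw record can do better and, as the abstract notes, can track slow drifts while data are still being acquired.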

  5. Cramer-Rao bound on watermark desynchronization parameter estimation accuracy

    NASA Astrophysics Data System (ADS)

    Sadasivam, Shankar; Moulin, Pierre

    2007-02-01

    Various decoding algorithms have been proposed in the literature to combat desynchronization attacks on quantization index modulation (QIM) blind watermarking schemes. Nevertheless, these results have been fairly poor so far. The need to investigate fundamental limitations on the decoder's performance under a desynchronization attack is thus clear. In this paper, we look at the class of estimator-decoders which estimate the desynchronization attack parameter(s) for using in the decoding step. We model the desynchronization attack as an arbitrary (but invertible) linear time-invariant (LTI) system. We then come up with an encoding-decoding scheme for these attacks on cubic QIM watermarking schemes, and derive Cramer-Rao bounds on the estimation error for the desynchronization parameter at the decoder. As an example, we consider the case of a cyclic shift attack and present some numerical findings.

  6. AMT-200S Motor Glider Parameter and Performance Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2011-01-01

    Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.

  7. Language and Recursion

    NASA Astrophysics Data System (ADS)

    Lowenthal, Francis

    2010-11-01

    This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as it is used by logicians is presented. The two opposing approaches to the problem of language development are explained. For many authors, such as Chomsky [1], the faculty of language is innate; this is known as the Standard Theory. Other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child, and thus speak of Language Acquisition. It is also shown that in both cases a version of the principle of recursion is relevant for human language. The NVCD approach is defined and the results obtained in the domain of language while using this approach are presented: young subjects using this approach acquire a richer language structure, or re-acquire such a structure in the case of cerebral lesions. Finally, it is shown that the exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored by using recursive structures with the young child. It could also be the case that NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with psychological observations made by adherents of the Language Acquisition approach, if it is confirmed by research combining the use of NVCDs, psychometric methods and neural networks. This paper thus suggests that a research group oriented towards this problem should be organized.

  8. Estimation of octanol/water partition coefficients using LSER parameters

    USGS Publications Warehouse

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals that included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, though only moderate accuracy should be expected.
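The fitting step described here is ordinary multiple linear regression. A minimal sketch with synthetic stand-in descriptors follows; the study's actual LSER values, coefficient values, and sample sizes are not reproduced, so everything numeric below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the training set: rows are chemicals, columns
# are LSER solvation descriptors (the real study used 981 chemicals).
X = rng.normal(size=(200, 4))
true_coefs = np.array([3.2, -1.5, -2.1, 0.8])   # assumed, for illustration
y = 0.25 + X @ true_coefs + rng.normal(scale=0.49, size=200)  # logKow + noise

# Fit the regression equation (intercept plus LSER terms) by least squares.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residual_sd = np.std(y - A @ beta)

# Estimate logKow for a held-out "test set" of chemicals.
X_test = rng.normal(size=(50, 4))
logKow_pred = np.column_stack([np.ones(50), X_test]) @ beta
```

The residual standard deviation recovered from the fit plays the role of the 0.49 quoted in the abstract.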

  9. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

    One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.

  10. Estimation of effective hydrogeological parameters in heterogeneous and anisotropic aquifers

    NASA Astrophysics Data System (ADS)

    Lin, Hsien-Tsung; Tan, Yih-Chi; Chen, Chu-Hui; Yu, Hwa-Lung; Wu, Shih-Ching; Ke, Kai-Yuan

    2010-07-01

Obtaining reasonable hydrological input parameters is a key challenge in groundwater modeling. Analysis of the temporal evolution of pump-induced drawdown is one common approach used to estimate the effective transmissivity and storage coefficients in a heterogeneous aquifer. In this study, we propose a Modified Tabu search Method (MTM), which combines the Tabu Search (TS) with the Adjoint State Method (ASM) developed by Tan et al. (2008); the latter is employed to estimate effective parameters for anisotropic, heterogeneous aquifers. MTM is validated by several numerical pumping tests. Comparisons are made with other well-known techniques, such as the type-curve method (TCM) and the straight-line method (SLM), to provide insight into the challenge of determining the most effective parameters for an anisotropic, heterogeneous aquifer. The results reveal that MTM can efficiently obtain the most representative effective aquifer parameters in terms of the least mean square error of the drawdown estimates. The use of MTM may involve fewer artificial errors than TCM and SLM, and lead to better solutions. The effective transmissivity estimated by MTM is therefore more likely to approach the geometric mean of all transmissivities within the cone of depression. Further investigation of the applicability of MTM shows that a higher level of heterogeneity in an aquifer induces uncertainty in the estimates, while changes in correlation length affect the accuracy of MTM only once the degree of heterogeneity has also risen.

  11. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

The overall remotely piloted drop model operation, descriptions, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods are discussed. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. It is indicated that the variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.

  12. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
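The verification-by-recovery step can be sketched for a simplified single diode model (series and shunt resistance neglected; all parameter values invented). For a fixed ideality factor the model is linear in the photocurrent and saturation current, so a one-dimensional sweep plus least squares recovers all three parameters from a simulated I-V curve.

```python
import numpy as np

Vth = 0.025693   # thermal voltage at 25 C, in volts

def diode_current(V, IL, I0, n):
    """Simplified single diode model (series and shunt resistance neglected)."""
    return IL - I0 * (np.exp(V / (n * Vth)) - 1.0)

# Synthetic "measured" I-V curve from assumed parameters.
V = np.linspace(0.0, 0.72, 80)
I_meas = diode_current(V, IL=8.0, I0=1e-9, n=1.3)

# For a fixed ideality factor n the model is linear in (IL, I0), so sweep n
# and solve a two-parameter least-squares problem at each trial value.
best = None
for n in np.linspace(1.0, 2.0, 201):
    A = np.column_stack([np.ones_like(V), -(np.exp(V / (n * Vth)) - 1.0)])
    coef, *_ = np.linalg.lstsq(A, I_meas, rcond=None)
    err = np.sum((A @ coef - I_meas) ** 2)
    if best is None or err < best[0]:
        best = (err, n, coef[0], coef[1])

_, n_fit, IL_fit, I0_fit = best
```

The full five-parameter model in the paper is implicit in the current and needs an iterative solver; this explicit three-parameter version only illustrates the recover-known-values check.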

  13. Estimation and Analysis of Parameters for Reference Frame Transformation

    NASA Astrophysics Data System (ADS)

    Yang, T. G.; Gao, Y. P.; Tong, M. L.; Zhao, C. S.; Gao, F.

    2016-07-01

Based on the estimation method of parameters for reference frame transformation, the parameters used for transformation between different modern DE (Development Ephemeris) ephemeris pairs are derived using the heliocentric coordinates of the Earth-Moon barycenter from the DE ephemeris pairs, and the transformation parameters between the DE ephemeris dynamic reference frame and the ICRF (International Celestial Reference Frame) are estimated using the timing data and VLBI (Very Long Baseline Interferometry) observations of millisecond pulsars. The estimated parameters for the reference frame transformation comprise the three rotational angles of the rotation matrix and their time derivatives. The reference epoch of the estimated parameters is MJD 51545, i.e., J2000.0. Our results show that the absolute maximum value of the rotational angles for the transformation of DE200 to DE405 is 13 mas, with a time derivative of -0.0007 mas/d. No rotational angle exceeds 0.1 mas in absolute value for the transformation of DE414 to DE421. The absolute maximum value of the rotational angles for the transformation of DE421 to the ICRF is 3 mas, and the time derivatives of the three rotational angles must also be included.
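At mas level such a transformation reduces to a first-order rotation whose angles evolve linearly in time. A sketch using angle magnitudes of the size quoted above (the axis assignment and epoch offset are invented for illustration):

```python
import numpy as np

MAS2RAD = np.pi / (180 * 3600 * 1000)      # milliarcseconds to radians

def frame_transform(x, theta_mas, theta_dot_mas_per_day, dt_days):
    """First-order frame rotation x' = (I + skew(theta)) x, with the three
    rotation angles evolving linearly from the reference epoch."""
    tx, ty, tz = (np.asarray(theta_mas)
                  + np.asarray(theta_dot_mas_per_day) * dt_days) * MAS2RAD
    skew = np.array([[0.0, -tz,  ty],
                     [ tz, 0.0, -tx],
                     [-ty,  tx, 0.0]])
    return x + skew @ x

# Hypothetical numbers of the size quoted for DE200 -> DE405: a 13 mas
# rotation about one axis with a -0.0007 mas/day rate, evaluated 3650 days
# after the J2000.0 reference epoch.
x = np.array([1.0, 0.0, 0.0])
x_new = frame_transform(x, [0.0, 0.0, 13.0], [0.0, 0.0, -0.0007], 3650.0)
```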

  14. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
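The estimand discussed here, the treatment-specific mean with estimated propensity score and outcome regression, can be illustrated with a plain doubly robust (AIPW) estimator. This is a simplification of TMLE that uses parametric nuisance models rather than super-learning, and every data-generating value below is invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
W = rng.normal(size=n)                              # baseline covariate
pscore = 1 / (1 + np.exp(-0.5 * W))                 # true propensity score
A = rng.binomial(1, pscore)                         # binary treatment
Y = 1.0 + 2.0 * A + 0.8 * W + rng.normal(size=n)    # so E[Y(1)] = 3.0

# Nuisance 1: logistic propensity model fit by Newton-Raphson.
X = np.column_stack([np.ones(n), W])
beta = np.zeros(2)
for _ in range(25):
    g = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ (X * (g * (1 - g))[:, None])          # observed information
    beta += np.linalg.solve(H, X.T @ (A - g))
g = 1 / (1 + np.exp(-X @ beta))

# Nuisance 2: outcome regression E[Y | A=1, W] by least squares on the treated.
coef, *_ = np.linalg.lstsq(X[A == 1], Y[A == 1], rcond=None)
m1 = X @ coef

# Doubly robust (AIPW) estimate of the treatment-specific mean E[Y(1)].
est = np.mean(m1 + A / g * (Y - m1))
```

TMLE replaces the simple augmentation step with a targeted fluctuation of the outcome fit, but the role of the two nuisance estimators is the same.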

  15. Human ECG signal parameters estimation during controlled physical activity

    NASA Astrophysics Data System (ADS)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
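A Pan-Tompkins-style chain (derivative, squaring, moving-window integration, thresholding) can be sketched on a synthetic record. The sampling rate, beat shape, and fixed threshold below are assumptions; a real implementation adapts its thresholds from running signal and noise peak estimates.

```python
import numpy as np

fs = 250                                    # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)

# Synthetic record: a narrow "QRS" bump once per second plus mild noise.
sig = np.zeros_like(t)
beat_samples = np.arange(fs // 2, len(t), fs)
for b in beat_samples:
    sig[b - 2:b + 3] += np.array([0.2, 0.6, 1.0, 0.6, 0.2])
rng = np.random.default_rng(1)
sig += 0.02 * rng.normal(size=len(t))

# Pan-Tompkins-style chain: derivative -> squaring -> moving-window integration.
deriv = np.diff(sig, prepend=sig[0])
win = int(0.15 * fs)
mwi = np.convolve(deriv ** 2, np.ones(win) / win, mode="same")

# Fixed threshold plus a 200 ms refractory period.
above = mwi > 0.3 * mwi.max()
edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
beats = []
for e in edges:
    if not beats or e - beats[-1] > 0.2 * fs:
        beats.append(e)
```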

  16. Adaptive Detection and Parameter Estimation for Multidimensional Signal Models

    DTIC Science & Technology

    1989-04-19

The expected value of the non-adaptive parameter array estimator is obtained directly from Equation (5-1); the resulting properties depend only on the dimensional parameters of the problem. We will derive these properties shortly, but first we wish to express the conditional pdf…

  17. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…

  18. Bayesian Estimation in the One-Parameter Latent Trait Model.

    DTIC Science & Technology

    1980-03-01

Massachusetts Univ., Amherst, Lab of Psychometric and … "Bayesian Estimation in the One-Parameter Latent Trait Model," Mar 80, Hariharan Swaminathan and Janice A. Gifford. Keywords: latent trait theory; Bayesian estimation. Abstract: When several…

  19. Estimation of Saxophone Control Parameters by Convex Optimization

    PubMed Central

    Wang, Cheng-i; Smyth, Tamara; Lipton, Zachary C.

    2015-01-01

In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not merely one of estimating pitch, since one applied fingering can produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely from the spectral envelope of the produced sound (as it might be for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurement of the saxophone configured with all possible fingerings and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering), and a quasi-static reed model generating input pressure at the mouthpiece, with the control parameters being blowing pressure and reed stiffness. The applied fingering and reed parameters are estimated for a given recording by formulating a minimization problem, where the cost function is the error between the recording and the synthesized sound produced by the model over incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and not differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values. PMID:27754493

  20. Moving target parameter estimation of SAR after two looks cancellation

    NASA Astrophysics Data System (ADS)

    Gan, Rongbing; Wang, Jianguo; Gao, Xiang

    2005-11-01

Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained and stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range and cross-range velocities of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift caused by slant-range motion, based on the Doppler frequency center (DFC), which is estimated with the Wigner-Ville Distribution (WVD). Because the range and cross-range positions before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that our algorithms perform well and estimate the moving target parameters accurately.
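The two-look cancellation and detection steps can be sketched on an idealized scene; the target amplitude, the 6-cell shift, perfect look registration, and the simple global threshold are all invented assumptions (a real CA-CFAR averages a sliding window with guard cells).

```python
import numpy as np

rng = np.random.default_rng(2)

# Common stationary clutter plus a moving target whose position shifts
# between the two half-aperture looks.
scene = rng.rayleigh(scale=1.0, size=(64, 64))
look1 = scene.copy()
look2 = scene.copy()
look1[30, 20] += 15.0          # moving target, first-look position
look2[30, 26] += 15.0          # shifted 6 cells in cross-range

# Two-look cancellation: stationary clutter subtracts out exactly in this
# idealized sketch (real looks differ by speckle and registration error).
diff = np.abs(look1 - look2)

# Minimal CFAR-style test against the clutter level of the difference image.
threshold = np.median(diff) + 10.0
detections = np.argwhere(diff > threshold)

# The cross-range displacement between looks feeds the velocity estimate.
shift_cells = detections[:, 1].max() - detections[:, 1].min()
```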

  1. Estimating soil hydraulic parameters from transient flow experiments in a centrifuge using parameter optimization technique

    USGS Publications Warehouse

    Simunek, J.; Nimmo, J.R.

    2005-01-01

    A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using equilibrium analysis and steady state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field. Copyright 2005 by the American Geophysical Union.

  2. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

Zanetti, Renato; D'Souza, Chris

    2012-01-01

One method to account for parameter errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favored implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.
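The consider (Schmidt-Kalman) measurement update that this work builds on can be sketched as follows; the UDU factorized form proposed in the paper is not reproduced here, and the dimensions and numerical values are invented.

```python
import numpy as np

def schmidt_kalman_update(x, Pxx, Pxp, Ppp, z, Hx, Hp, R):
    """One Schmidt-Kalman (consider) measurement update: the state x is
    corrected, the consider parameters are not, but their covariance Ppp
    and cross-covariance Pxp still shape the gain."""
    S = (Hx @ Pxx @ Hx.T + Hx @ Pxp @ Hp.T
         + Hp @ Pxp.T @ Hx.T + Hp @ Ppp @ Hp.T + R)
    K = (Pxx @ Hx.T + Pxp @ Hp.T) @ np.linalg.inv(S)
    x_new = x + K @ (z - Hx @ x)            # consider parameters nominally zero
    Pxx_new = Pxx - K @ (Hx @ Pxx + Hp @ Pxp.T)
    Pxp_new = Pxp - K @ (Hx @ Pxp + Hp @ Ppp)
    return x_new, Pxx_new, Pxp_new, Ppp     # Ppp deliberately unchanged

# Tiny example: 2 states, 1 consider parameter (e.g. an unestimated bias).
x = np.zeros(2)
Pxx = np.eye(2)
Pxp = np.zeros((2, 1))
Ppp = np.array([[0.5]])
Hx = np.array([[1.0, 0.0]])
Hp = np.array([[1.0]])
R = np.array([[0.1]])
x1, Pxx1, Pxp1, Ppp1 = schmidt_kalman_update(x, Pxx, Pxp, Ppp,
                                             np.array([0.8]), Hx, Hp, R)
```

Note how the consider-parameter covariance inflates the innovation covariance S, so the state is corrected less aggressively than a standard Kalman update would allow.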

  3. Estimation of coefficients and boundary parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Murphy, K. A.

    1984-01-01

    Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.

  4. Online vegetation parameter estimation using passive microwave remote sensing observations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...
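The state-augmentation idea can be sketched with a scalar observation operator y = b x, applying an ensemble Kalman filter update to the unknown coefficient b; the coefficient value, noise level, and ensemble size are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

b_true = 0.7            # "true" coefficient of the observation operator (assumed)
R = 0.05 ** 2           # observation error variance
N = 200                 # ensemble size

# Ensemble for the augmented parameter; the model state x is treated as
# known at each step, so only the parameter needs updating.
b_ens = rng.normal(1.0, 0.5, size=N)

for _ in range(50):
    x = rng.uniform(0.5, 1.5)
    y_obs = b_true * x + rng.normal(0.0, np.sqrt(R))
    pred = b_ens * x
    eps = rng.normal(0.0, np.sqrt(R), size=N)       # perturbed observations
    K = np.cov(b_ens, pred)[0, 1] / (np.var(pred, ddof=1) + R)
    b_ens = b_ens + K * (y_obs + eps - pred)

b_hat = b_ens.mean()
```

With each assimilated observation the ensemble contracts around the true coefficient, which is the mechanism the abstract describes for online opacity estimation.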

  5. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
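A fit of the logistic model to synthetic census data can be sketched as follows. The article's Mathematica code uses a gradient search; this sketch uses the closely related damped Gauss-Newton (Levenberg-Marquardt-style) iteration, and the parameter values are invented.

```python
import numpy as np

def logistic(t, r, K, P0=2.0):
    """Closed-form solution of dP/dt = r P (1 - P/K) with P(0) = P0."""
    return K / (1.0 + (K / P0 - 1.0) * np.exp(-r * t))

# Synthetic "population census" generated from assumed parameters.
t = np.linspace(0.0, 10.0, 40)
data = logistic(t, r=0.8, K=50.0)

p = np.array([0.5, 30.0])           # initial guess for (r, K)
lam = 1e-3                          # damping parameter
for _ in range(100):
    base = logistic(t, *p)
    res = base - data
    # Forward-difference Jacobian of the residuals.
    J = np.column_stack([
        (logistic(t, p[0] + 1e-6, p[1]) - base) / 1e-6,
        (logistic(t, p[0], p[1] + 1e-4) - base) / 1e-4,
    ])
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ res)
    if np.sum((logistic(t, *(p + step)) - data) ** 2) < np.sum(res ** 2):
        p, lam = p + step, 0.7 * lam    # accept step, relax damping
    else:
        lam *= 2.0                      # reject step, increase damping
r_fit, K_fit = p
```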

  6. Loss of Information in Estimating Item Parameters in Incomplete Designs

    ERIC Educational Resources Information Center

Eggen, Theo J. H. M.; Verhelst, Norman D.

    2006-01-01

    In this paper, the efficiency of conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimation of the item parameters of the Rasch model in incomplete designs is investigated. The use of the concept of F-information (Eggen, 2000) is generalized to incomplete testing designs. The scaled determinant of the F-information…

  7. Parameter estimation and infiltration tests at the repeat facility

    NASA Astrophysics Data System (ADS)

    Burns, P.; Armstrong, P.; Winn, B.

    1983-11-01

Work performed in the reconfigurable passive evaluation analysis and test (REPEAT) facility is reviewed. The physical characteristics of the building and the instrumentation are described, and the collected data are discussed. Parameter estimation is treated with example calculations. Infiltration instrumentation and tests are described, and flow visualization studies are discussed.

  8. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results are given for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  9. Hybrid fault diagnosis of nonlinear systems using neural parameter estimators.

    PubMed

    Sobhani-Tehrani, E; Talebi, H A; Khorasani, K

    2014-02-01

This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems, taking advantage of both the system's mathematical model and the adaptive nonlinear approximation capability of computational intelligence techniques. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPEs) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FPs) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed, each with its own set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. In contrast, the series-parallel scheme displays short FD delays and is robust to closed-loop system transients due to changes in control commands. Finally, a fault tolerant observer (FTO) is designed to extend the two NPEs, which originally assume full state measurements, to systems with only partial state measurements. The proposed FTO is a neural state estimator that can estimate unmeasured states even in the presence of faults. The estimated and measured states then comprise the inputs to the two proposed FDII schemes. Simulation results for FDII of the reaction wheels of a three-axis stabilized satellite in the presence of disturbances and noise demonstrate the effectiveness of the proposed FDII solutions under partial state measurements.

  10. Matched filtering and parameter estimation of ringdown waveforms

    NASA Astrophysics Data System (ADS)

    Berti, Emanuele; Cardoso, Jaime; Cardoso, Vitor; Cavaglià, Marco

    2007-11-01

    Using recent results from numerical relativity simulations of nonspinning binary black hole mergers, we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to ˜103M⊙ out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (>10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole’s mass and spin. We estimate that more than ˜106 templates would be needed for a single-stage multimode search. Therefore, we recommend a “two-stage” search to save on computational costs: single-mode templates can be used for detection, but multimode templates or Prony methods should be used to estimate parameters once a detection has been made. We update estimates of the critical signal-to-noise ratio required to test the hypothesis that two or more modes are present in the signal and to resolve their frequencies, showing that second-generation Earth-based detectors and LISA have the potential to perform no-hair tests.
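The single-mode detection stage can be sketched as matched filtering of a damped sinusoid against a template bank. The sampling rate, mode parameters, and noise level below are invented, and white noise is assumed so the optimal statistic reduces to a normalized inner product.

```python
import numpy as np

fs = 4096.0
t = np.arange(0.0, 0.5, 1 / fs)

def ringdown(f, tau):
    """Single-mode ringdown template: an exponentially damped sinusoid."""
    return np.exp(-t / tau) * np.sin(2 * np.pi * f * t)

# Synthetic data: one ringdown mode buried in white noise of known level.
rng = np.random.default_rng(4)
f_true, tau_true, sigma = 250.0, 0.05, 0.2
data = ringdown(f_true, tau_true) + sigma * rng.normal(size=len(t))

# Matched filtering against a bank of single-mode templates; the template
# maximizing the statistic gives the frequency estimate.
freqs = np.arange(150.0, 350.0, 5.0)
snrs = [np.dot(data, ringdown(f, tau_true))
        / (sigma * np.linalg.norm(ringdown(f, tau_true)))
        for f in freqs]
f_est = freqs[int(np.argmax(snrs))]
```

A multimode search would correlate against sums of damped sinusoids, which is what drives the template count quoted in the abstract.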

  11. Matched filtering and parameter estimation of ringdown waveforms

    SciTech Connect

    Berti, Emanuele; Cardoso, Jaime; Cardoso, Vitor; Cavaglia, Marco

    2007-11-15

Using recent results from numerical relativity simulations of nonspinning binary black hole mergers, we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to ˜10³ M⊙ out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (>10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than ˜10⁶ templates would be needed for a single-stage multimode search. Therefore, we recommend a "two-stage" search to save on computational costs: single-mode templates can be used for detection, but multimode templates or Prony methods should be used to estimate parameters once a detection has been made. We update estimates of the critical signal-to-noise ratio required to test the hypothesis that two or more modes are present in the signal and to resolve their frequencies, showing that second-generation Earth-based detectors and LISA have the potential to perform no-hair tests.

  12. Parameter identifiability and estimation of HIV/AIDS dynamic models.

    PubMed

    Wu, Hulin; Zhu, Haihong; Miao, Hongyu; Perelson, Alan S

    2008-04-01

We use a technique from engineering (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005) to investigate the algebraic identifiability of a popular three-dimensional HIV/AIDS dynamic model containing six unknown parameters. We find that not all six parameters in the model can be identified if only the viral load is measured; instead only four parameters and the product of two parameters (N and lambda) are identifiable. We introduce the concepts of an identification function and an identification equation and propose the multiple time point (MTP) method to form the identification function, which is an alternative to the previously developed higher-order derivative (HOD) method (Xia and Moog, 2003; Jeffrey and Xia, 2005). We show that the newly proposed MTP method has advantages over the HOD method in practical implementation. We also discuss the effect of the initial values of the state variables on the identifiability of the unknown parameters. We conclude that the initial values of the output (observable) variables are part of the data that can be used to estimate the unknown parameters, but the identifiability of the unknown parameters is not affected even if these initial values are measured with error; the noisy initial values only increase the estimation error of the unknown parameters. However, having the initial values of the latent (unobservable) state variables exactly known may help to identify more parameters. In order to validate the identifiability results, simulation studies are performed to estimate the unknown parameters and initial values from simulated noisy data.
We also apply the proposed methods to a clinical data set.

  13. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework for estimating parameter values of the eutrophication model by assimilating concentration data for these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. Numerical experiments with short-period model simulations using different hypothetical data sets, and long-period model simulations using limited hypothetical data sets, demonstrated that the inverse model can satisfactorily estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing important questions such as the uniqueness of the parameter estimates and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. The two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  14. Effect of noncircularity of experimental beam on CMB parameter estimation

    SciTech Connect

    Das, Santanu; Mitra, Sanjit; Paulson, Sonu Tabitha E-mail: sanjit@iucaa.ernet.in

    2015-03-01

Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the PLANCK likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum was measured with a non-circular beam, assuming a circular Gaussian beam or considering only the diagonal part of the bias matrix can lead to large errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, the use of scalar beam window functions obtained via Monte Carlo simulations starting from a fiducial spectrum, as implemented in the PLANCK analyses for example, leads to deviations of the best-fit parameters of only a few percent of a sigma. However, we notice more significant differences in the posterior distributions for some of the parameters, which would in turn lead to incorrect error bars. These differences can be reduced, so that the error bars match within a few percent, by adding an iterative reanalysis step in which the beam window function is recomputed using the best-fit spectrum estimated in the first step.
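The multipole coupling can be illustrated with a toy bias matrix (invented, not derived from any PLANCK beam): taking the convolved spectrum at face value biases it, while inverting the full matrix removes the bias exactly.

```python
import numpy as np

ells = np.arange(2, 51)
cl_true = 1000.0 / (ells * (ells + 1.0))    # toy fiducial spectrum

# Toy bias matrix for a non-circular beam: mostly diagonal, with weak
# coupling to neighbouring multipoles, rows normalized to unit sum.
n = len(ells)
B = np.eye(n)
for off in (1, 2):
    B += (0.05 / off) * (np.eye(n, k=off) + np.eye(n, k=-off))
B /= B.sum(axis=1, keepdims=True)

cl_obs = B @ cl_true                         # what the experiment measures

# Ignoring the coupling (treating cl_obs as the true spectrum) leaves a
# bias; solving with the full matrix debiases it.
cl_debiased = np.linalg.solve(B, cl_obs)
```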

  15. Estimation of uncertain material parameters using modal test data

    SciTech Connect

    Veers, P.S.; Laird, D.L.; Carne, T.G.; Sagartz, M.J.

    1997-11-01

Analytical models of wind turbine blades have many uncertainties, particularly with composite construction, where material properties and cross-sectional dimensions may not be known or precisely controllable. In this paper the authors demonstrate how modal testing can be used to estimate important material parameters and to update and improve a finite-element (FE) model of a prototype wind turbine blade. A prototype blade is used here to demonstrate how model parameters can be identified. The starting point is an FE model of the blade, using best estimates for the material constants. The frequencies of the lowest fourteen modes are used as the basis for comparisons between model predictions and test data. Natural frequencies and mode shapes calculated with the FE model are used in an optimal test design code to select instrumentation (accelerometer) and excitation locations that capture all the desired mode shapes. The FE model is also used to calculate the sensitivities of the modal frequencies to each of the uncertain material parameters. These parameters are estimated, or updated, using a weighted least-squares technique to minimize the difference between test frequencies and predicted results. Updated material properties are determined for the axial, transverse, and shear moduli in two separate regions of the blade cross section: the central box, and the leading and trailing panels. Static FE analyses are then conducted with the updated material parameters to determine changes in effective beam stiffness and buckling loads.
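The sensitivity-based weighted least-squares update can be sketched on a toy two-degree-of-freedom model; the unit masses and stiffness values stand in for the blade moduli and are invented for illustration.

```python
import numpy as np

def modal_freqs(k1, k2):
    """Natural frequencies (rad/s) of a 2-DOF spring-mass chain, unit masses."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.sqrt(np.linalg.eigvalsh(K))

# "Test" frequencies come from the true (unknown) stiffnesses.
f_test = modal_freqs(1200.0, 800.0)

# Start from best-estimate stiffnesses; update them by weighted least
# squares on numerically computed frequency sensitivities.
p = np.array([1000.0, 1000.0])
W = np.eye(2)                        # equal confidence in both measured modes
for _ in range(20):
    f_model = modal_freqs(*p)
    S = np.column_stack([(modal_freqs(*(p + 1.0 * e)) - f_model) / 1.0
                         for e in np.eye(2)])
    dp = np.linalg.solve(S.T @ W @ S, S.T @ W @ (f_test - f_model))
    p = p + dp
k1_fit, k2_fit = p
```

With more measured modes than parameters, as in the blade study, the same normal equations give an overdetermined weighted fit instead of an exact solve.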

  16. Parameter estimation of an air-bearing suspended test table

    NASA Astrophysics Data System (ADS)

    Fu, Zhenxian; Lin, Yurong; Liu, Yang; Chen, Xinglin; Chen, Fang

    2015-02-01

    A parameter estimation approach is proposed for parameter determination of a 3-axis air-bearing suspended test table. The table is to provide a balanced and frictionless environment for spacecraft ground test. To balance the suspension, the mechanical parameters of the table, including its angular inertias and the deviation of its centroid from its rotating center, have to be determined first. Then sliding masses on the table can be adjusted by stepper motors to relocate the centroid of the table to its rotating center. Using the angular momentum theorem and the Coriolis theorem, dynamic equations are derived describing the rotation of the table under the influence of the gravity imbalance torque and the actuating torques. To generate the actuating torques, the use of momentum wheels is proposed; their virtue is that no active control of the momentum wheels is required, as they merely have to spin at constant rates, thus avoiding the singularity problem and the difficulty of precisely adjusting the output torques, issues associated with control moment gyros. The gyroscopic torques generated by the momentum wheels, as they are forced by the table to precess, are sufficient to excite the table for parameter estimation. Least-squares estimation is then employed to calculate the desired parameters. The effectiveness of the method is validated by simulation.

  17. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and

  18. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  19. Recursion in Aphasia

    ERIC Educational Resources Information Center

    Banreti, Zoltan

    2010-01-01

    This study investigates how aphasic impairment impinges on syntactic and/or semantic recursivity of human language. A series of tests has been conducted with the participation of five Hungarian-speaking aphasic subjects and 10 control subjects. Photographs representing simple situations were presented to subjects and questions were asked about…

  20. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  1. Prediction and simulation errors in parameter estimation for nonlinear systems

    NASA Astrophysics Data System (ADS)

    Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.

    2010-11-01

    This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that in general the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
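
    The distinction between the two cost functions can be made concrete with a first-order example. The following is a minimal sketch; the model and data are invented for illustration and are not taken from the article.

```python
# One-step prediction error vs. free-run simulation error for the toy model
# y[t] = a*y[t-1] + b*u[t-1].

def prediction_cost(a, b, y, u):
    # One-step-ahead prediction: each prediction uses the MEASURED y[t-1].
    return sum((y[t] - (a * y[t-1] + b * u[t-1])) ** 2 for t in range(1, len(y)))

def simulation_cost(a, b, y, u):
    # Free-run simulation: each prediction uses the model's OWN previous output.
    ys, cost = y[0], 0.0
    for t in range(1, len(y)):
        ys = a * ys + b * u[t-1]
        cost += (y[t] - ys) ** 2
    return cost

# Noise-free data generated by the "true" system (a=0.5, b=1.0).
u = [1.0, 0.0, 0.0, 0.0, 0.0]
y = [0.0]
for t in range(1, 5):
    y.append(0.5 * y[-1] + 1.0 * u[t-1])
```

    With noise-free data both costs vanish at the true parameters; for a mis-specified parameter (e.g. a = 0.8 here), the free-run cost penalises the model more heavily because it compounds its own errors.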

  2. Estimation of dynamic stability parameters from drop model flight tests

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Iliff, K. W.

    1981-01-01

    A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.

  3. Modal parameters estimation using ant colony optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Sitarz, Piotr; Powałka, Bartosz

    2016-08-01

    The paper puts forward a new estimation method of modal parameters for dynamical systems. The problem of parameter estimation has been simplified to optimisation which is carried out using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints of the solution space with modified ant colony system algorithms produced excellent results for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, conducted with the proposed method and the PolyMAX algorithm.

  4. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  5. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    NASA Astrophysics Data System (ADS)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are present in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
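
    A minimal sketch of the two ingredients named above: maximum-likelihood rates from waiting times, and an Arrhenius fit across temperatures. This is not the TPMD implementation; the data are synthetic and the constants (A = 1e12 1/s, Ea = 0.5 eV) are arbitrary.

```python
import math, random

# For exponentially distributed waiting times the maximum-likelihood rate is
# 1/mean, and an Arrhenius fit ln k = ln A - Ea/(kB*T) is a line in 1/T.

def mle_rate(waiting_times):
    return len(waiting_times) / sum(waiting_times)

def arrhenius_fit(temps, rates, kB=8.617e-5):  # kB in eV/K
    # Linear least squares of ln k against 1/(kB*T): slope = -Ea, intercept = ln A.
    xs = [1.0 / (kB * T) for T in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope, math.exp(ybar - slope * xbar)   # (Ea, A)

# Synthetic waiting times drawn at three temperatures for A=1e12, Ea=0.5 eV.
random.seed(0)
temps = [600.0, 800.0, 1000.0]
rates = []
for T in temps:
    k_true = 1e12 * math.exp(-0.5 / (8.617e-5 * T))
    samples = [random.expovariate(k_true) for _ in range(5000)]
    rates.append(mle_rate(samples))

Ea_est, A_est = arrhenius_fit(temps, rates)
```

    Once A and Ea are in hand, rates at low temperatures, where direct sampling of transitions is impractical, follow from the same Arrhenius expression.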

  6. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.

  7. Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.

    PubMed

    Boonvisut, Pasu; Jackson, Russell; Cavuşoğlu, M Cenk

    2012-12-31

    Robotic motion planning algorithms used for task automation in robotic surgical systems rely on availability of accurate models of target soft tissue's deformation. Relying on generic tissue parameters in constructing the tissue deformation models is problematic because biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator, and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of experimental evaluation of the method are also presented.

  8. Estimation of Soft Tissue Mechanical Parameters from Robotic Manipulation Data.

    PubMed

    Boonvisut, Pasu; Cavuşoğlu, M Cenk

    2013-10-01

    Robotic motion planning algorithms used for task automation in robotic surgical systems rely on availability of accurate models of target soft tissue's deformation. Relying on generic tissue parameters in constructing the tissue deformation models is problematic because biological tissues are known to have very large (inter- and intra-subject) variability. A priori mechanical characterization (e.g., uniaxial bench test) of the target tissues before a surgical procedure is also not usually practical. In this paper, a method for estimating mechanical parameters of soft tissue from sensory data collected during robotic surgical manipulation is presented. The method uses force data collected from a multiaxial force sensor mounted on the robotic manipulator, and tissue deformation data collected from a stereo camera system. The tissue parameters are then estimated using an inverse finite element method. The effects of measurement and modeling uncertainties on the proposed method are analyzed in simulation. The results of experimental evaluation of the method are also presented.

  9. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
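
    As a much simpler illustration of the moment-based idea (not the GMM procedure of the paper): for an n-step log-normal cascade the log-variance is additive, Var(ln x) = n*sigma^2, so when the number of cascade steps is known a crude moment estimator of the intermittency parameter follows from the sample variance of log-amplitudes.

```python
import math, random

# Simulate an n-step log-normal cascade with mean-one multipliers
# exp(sigma*Z - sigma^2/2), then recover sigma^2 from Var(ln x)/n.
random.seed(1)
n_steps, sigma = 8, 0.3

def cascade_sample():
    return math.exp(sum(sigma * random.gauss(0.0, 1.0) - 0.5 * sigma * sigma
                        for _ in range(n_steps)))

logs = [math.log(cascade_sample()) for _ in range(20000)]
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / (len(logs) - 1)
sigma2_est = var / n_steps   # moment estimate of the intermittency parameter
```

    The paper's GMM procedure exists precisely because this naive estimator degrades when the number of cascade steps is uncertain.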

  10. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor, and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler) yielding information on the heliocentric spacecraft state, and images taken by an on-board camera yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  11. [Automatic Measurement of Stellar Atmospheric Parameters Based on Mass Estimation].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng

    2015-11-01

    We have collected massive stellar spectral data in recent years, which makes the automatic measurement of the stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g, and metallic abundance [Fe/H]) an important issue. Studying the automatic measurement of these three parameters has important significance for scientific problems such as the evolution of the universe. However, research on this problem is not yet widespread, and some current methods are not able to estimate the values of the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which can achieve the prediction of the stellar effective temperature Teff, surface gravity log g, and metallic abundance [Fe/H]. This method has a small amount of computation and a fast training speed. Its main idea is to first build some mass distributions, then map the original spectral data into the mass space, and finally predict the stellar parameters with support vector regression (SVR) in the mass space. We choose stellar spectral data from the United States SDSS-DR8 for training and testing. We also compared the predicted results of this method with the SSPP and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively.

  12. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  13. Adaptable recursive binary entropy coding technique

    NASA Astrophysics Data System (ADS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  14. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationships between the intravascular wall shear stress, controlled by flow dynamics, and the progress of arteriosclerosis plaque have been clarified by various studies. Since the shear stress is determined by the viscosity coefficient and shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
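
    The two quantities being optimised, the differential window width and the number of averaged velocity profiles, enter a shear-rate estimate roughly as follows. This is an illustrative sketch with a hypothetical Poiseuille-type profile, not the authors' algorithm.

```python
# Shear rate as the radial derivative of the velocity profile, computed by a
# central difference of half-width w samples, averaged over several profiles.

def shear_rate(profiles, dr, i, w):
    rates = [(p[i + w] - p[i - w]) / (2 * w * dr) for p in profiles]
    return sum(rates) / len(rates)

# Parabolic profile v(r) = vmax*(1 - (r/R)^2); its true shear rate at radius r
# is -2*vmax*r/R^2, so at r = 2 mm below it is -125 1/s.
R, vmax, dr = 4e-3, 0.5, 1e-4
radii = [k * dr for k in range(41)]
profile = [vmax * (1 - (r / R) ** 2) for r in radii]

g = shear_rate([profile], dr, i=20, w=2)   # estimate at r = 2 mm
```

    With noisy profiles, a wider window and more averaged profiles suppress noise at the cost of bias near the wall, which motivates optimising both parameters adaptively as the abstract describes.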

  15. Anisotropic parameter estimation using velocity variation with offset analysis

    SciTech Connect

    Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A.

    2013-09-09

    Seismic anisotropy is defined as velocity dependence upon angle or offset. Knowledge of the anisotropy effect on seismic data is important in amplitude analysis, the stacking process, and time-to-depth conversion. Due to this anisotropic effect, reflectors cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters ε and δ using two proposed methods. The first method, called velocity variation with offset (VVO), is based on a simplification of the weak-anisotropy equation. In the VVO method, the velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach in which the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods, while the inversion-based method gives a better estimate of ε. This study shows that the estimation of anisotropic parameters relies on the accuracy of the normal moveout velocity, residual moveout, and offset-to-angle transformation.
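
    A sketch of how δ and ε can be recovered by a linear inversion, assuming Thomsen's weak-anisotropy phase-velocity approximation V(θ) ≈ V0(1 + δ sin²θ cos²θ + ε sin⁴θ) with a known vertical velocity. The values are synthetic; the paper's method additionally handles the NMO velocity and the offset-to-angle transformation.

```python
import math

# Linear least squares for (delta, eps) in the linearised model
# (V/V0 - 1) = delta * sin^2*cos^2 + eps * sin^4, with V0 known.

def fit_thomsen(angles_deg, velocities, V0):
    rows = []
    for a, v in zip(angles_deg, velocities):
        s2 = math.sin(math.radians(a)) ** 2
        rows.append((s2 * (1 - s2), s2 * s2, v / V0 - 1.0))
    # 2x2 normal equations.
    a11 = sum(r[0] * r[0] for r in rows); a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows); b2 = sum(r[1] * r[2] for r in rows)
    det = a11 * a22 - a12 * a12
    delta = ( a22 * b1 - a12 * b2) / det
    eps   = (-a12 * b1 + a11 * b2) / det
    return delta, eps

# Synthetic phase velocities generated with delta=0.1, eps=0.2, V0=3000 m/s.
angles = [0, 15, 30, 45, 60, 75, 90]
V0 = 3000.0
vels = [V0 * (1 + 0.1 * math.sin(math.radians(a))**2 * math.cos(math.radians(a))**2
              + 0.2 * math.sin(math.radians(a))**4) for a in angles]

delta_est, eps_est = fit_thomsen(angles, vels, V0)
```

    Because the model is linear in δ and ε once V0 is fixed, noise-free data are recovered exactly; with real data the accuracy hinges on the angle and velocity estimates, as the abstract notes.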

  16. Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency

    NASA Astrophysics Data System (ADS)

    Forghani, A.; Peralta, R. C.

    2015-12-01

    We contrast the computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well across several modeling situations. The test situations differ in finite-difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly finer discretization capable of analyzing ASR recovery efficiency. Some situations employ parameters interpolated from the coarse valley model; others employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies of ASR recovery efficiency consider only hypothetical uniform specified boundary heads and gradients, assuming homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.

  17. Parameter estimation method for blurred cell images from fluorescence microscope

    NASA Astrophysics Data System (ADS)

    He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin

    2016-10-01

    Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, resulting in a low signal-to-noise ratio (SNR) and poor image quality, hence affecting the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images based on the power-law properties of the power spectrum of cell images is proposed. The circular Radon transform (CRT) is used to identify the zero-mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm. Following this, the parameters are optimized through the gradient descent method. Using synthetic experiments, it was confirmed that the proposed method effectively increased the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results involving actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods in terms of qualitative visual sense as well as quantitative gradient and PSNR.

  18. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromised approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.

  19. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
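
    For illustration only, a simplified Hvorslev-type analysis (not the KGS procedure): when the normalized head in a slug test decays exponentially, H(t)/H0 = exp(-t/T0), the basic time lag T0 follows from a regression of ln(H/H0) on t, and hydraulic conductivity is then obtained from T0 and the well geometry. This is also one place where a good estimate of H0 matters, per guideline (3).

```python
import math

# Estimate the basic time lag T0 from head-recovery data by least squares
# through the origin on the linearised model ln(H/H0) = -t/T0.

def basic_time_lag(times, heads, H0):
    ys = [math.log(h / H0) for h in heads]
    slope = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
    return -1.0 / slope

# Synthetic data: H0 = 1 m, true T0 = 30 s.
times = [5.0, 10.0, 20.0, 40.0, 80.0]
heads = [math.exp(-t / 30.0) for t in times]

T0 = basic_time_lag(times, heads, 1.0)
```

    A pre-analysis plot of ln(H/H0) versus t, as guideline (7) recommends, immediately reveals whether the exponential model is appropriate before T0 is trusted.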

  20. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
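    The Moving Discrete Fourier Transform at the core of the method can be sketched as a sliding-window DFT recursion (a generic sketch; the model-matching and recursive parameter-estimation steps of the paper are not shown):

```python
import numpy as np

def moving_dft(x, N, bins):
    """Recursively track selected DFT bins of a length-N sliding window:
    each new sample updates every tracked bin in O(1) per bin instead of
    recomputing a full transform."""
    bins = np.asarray(bins)
    twiddle = np.exp(2j * np.pi * bins / N)
    X = np.zeros(len(bins), dtype=complex)
    buf = np.zeros(N)                      # circular buffer of last N samples
    out = []
    for n, sample in enumerate(x):
        oldest = buf[n % N]                # x[n-N] (zero before window fills)
        buf[n % N] = sample
        X = (X + sample - oldest) * twiddle  # drop oldest, add newest, rotate
        if n >= N - 1:                     # first full window ends at n = N-1
            out.append(X.copy())
    return np.array(out)

# usage: track bins 0, 1, 3 of an 8-sample window over a random signal
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
H = moving_dft(x, 8, [0, 1, 3])
```

    After each update, the tracked bins agree with a full DFT of the current window, which is what lets the frequency-domain profile be monitored continuously as the signal evolves.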

  1. Estimation of economic parameters of U.S. hydropower resources

    SciTech Connect

    Hall, Douglas G.; Hunt, Richard T.; Reeves, Kelly S.; Carroll, Greg R.

    2003-06-01

    Tools for estimating the cost of developing, operating, and maintaining hydropower resources in the form of regression curves were developed based on historical plant data. Development costs that were addressed included: licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”

  2. Estimation of atmospheric parameters from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    McCrae, Jack E.; Basu, Santasri; Fiorino, Steven T.

    2016-05-01

    A time-lapse imaging experiment was conducted to estimate various atmospheric parameters for the imaging path. Atmospheric turbulence caused frame-to-frame shifts of the entire image as well as parts of the image. The statistics of these shifts encode information about the turbulence strength (as characterized by Cn2, the refractive index structure function constant) along the optical path. The shift variance observed is simply proportional to the variance of the tilt of the optical field averaged over the area being tracked. By presuming this turbulence follows the Kolmogorov spectrum, weighting functions can be derived which relate the turbulence strength along the path to the shifts measured. These weighting functions peak at the camera and fall to zero at the object. The larger the area observed, the more quickly the weighting function decays. One parameter we would like to estimate is r0 (the Fried parameter, or atmospheric coherence diameter). The weighting functions derived for pixel-sized or larger parts of the image all fall faster than the weighting function appropriate for estimating the spherical wave r0. If we presume Cn2 is constant along the path, then an estimate for r0 can be obtained for each area tracked, but since the weighting function for r0 differs substantially from that for every realizable tracked area, it can be expected this approach would yield a poor estimator. Instead, the weighting functions for a number of different patch sizes can be combined through the Moore-Penrose pseudo-inverse to create a new weighting function which yields the least-squares optimal linear combination of measurements for estimation of r0. This approach is carried out, and it is observed that this approach is somewhat noisy because the pseudo-inverse assigns weights much greater than one to many of the observations.
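    The pseudo-inverse combination step can be illustrated with made-up weighting shapes (the 5/3-power falloffs below are illustrative assumptions, not the derived weighting functions of the paper):

```python
import numpy as np

def r0_combination_weights(W, w_target):
    """Least-squares optimal coefficients a such that a @ W approximates the
    target weighting function: a = w_target @ pinv(W). Applied to the vector
    of patch measurements y, a @ y then estimates the r0-weighted path
    integral of turbulence strength."""
    return w_target @ np.linalg.pinv(W)

# Toy discretized path with assumed weighting shapes, one row per patch size:
z = np.linspace(0.0, 1.0, 50)           # 0 = camera, 1 = object
W = np.array([np.maximum(1.0 - s * z, 0.0) ** (5 / 3) for s in (1, 2, 4, 8)])
w_target = (1.0 - z) ** (5 / 3)         # spherical-wave-like falloff (illustrative)
a = r0_combination_weights(W, w_target)
```

    Large entries in `a` are exactly the noise-amplification effect the abstract notes: the pseudo-inverse may assign weights much greater than one to individual measurements.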

  3. Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.

    PubMed

    Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J

    2012-10-01

    Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
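    The two-phase idea, learn a covariance and then use it to shape a Metropolis-Hastings proposal, can be sketched generically (this uses a plain random-walk learning phase rather than the authors' hybrid Gibbs sampler, and a toy Gaussian target in place of the mixed-model posterior):

```python
import numpy as np

def adaptive_mh(logpost, x0, n_learn=2000, n_sample=5000, seed=0):
    """Two-phase sampler: phase 1 runs a random-walk chain to learn the
    posterior covariance; phase 2 uses that covariance (scaled by the usual
    2.38^2/d factor) in a Metropolis-Hastings proposal."""
    rng = np.random.default_rng(seed)
    d = len(x0)

    def run(x, cov, n):
        chain = np.empty((n, d))
        L = np.linalg.cholesky(cov)
        lp = logpost(x)
        for i in range(n):
            prop = x + L @ rng.standard_normal(d)
            lp_prop = logpost(prop)
            if np.log(rng.random()) < lp_prop - lp:   # MH accept/reject
                x, lp = prop, lp_prop
            chain[i] = x
        return chain

    learn = run(np.asarray(x0, float), 0.1 * np.eye(d), n_learn)
    cov = np.cov(learn.T) * 2.38 ** 2 / d + 1e-9 * np.eye(d)
    return run(learn[-1], cov, n_sample)

# usage: sample a correlated 2-D Gaussian (illustrative target)
def logpost(v):
    r = v - np.array([1.0, -1.0])
    return -0.5 * r @ np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]])) @ r

chain = adaptive_mh(logpost, [0.0, 0.0])
```

    Shaping the proposal to the learned covariance is what improves mixing when the components of the posterior are strongly correlated, as variance components often are.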

  4. A fast schema for parameter estimation in diffusion kurtosis imaging

    PubMed Central

    Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Liu, Wei; Yang, Guang; Wu, Dongmei; Zhou, Yongdi; Peterson, Bradley S.; Xu, Dongrong

    2014-01-01

    Diffusion kurtosis imaging (DKI) is a new model in magnetic resonance imaging (MRI) characterizing restricted diffusion of water molecules in living tissues. We propose a method for fast estimation of the DKI parameters. These parameters, the apparent diffusion coefficient (ADC) and apparent kurtosis coefficient (AKC), are evaluated using an alternative iteration schema (AIS). This schema first roughly estimates a pair of ADC and AKC values from a subset of the DKI data acquired at 3 b-values. It then iteratively and alternately updates the ADC and AKC until they converge. This approach employs the technique of linear least-squares fitting to minimize estimation error in each iteration. In addition to the common physical and biological constraints that set the upper and lower boundaries of the ADC and AKC values, we use a smoothing procedure to ensure that estimation is robust. Quantitative comparisons between our AIS methods and the conventional methods of unconstrained nonlinear least squares (UNLS) using both synthetic and real data showed that our unconstrained AIS method can significantly accelerate the estimation procedure without compromising its accuracy, with the computational time for a DKI dataset successfully reduced to only one or two minutes. Moreover, the incorporation of the smoothing procedure using one of our AIS methods can significantly enhance the contrast of AKC maps and greatly improve the visibility of details in fine structures. PMID:25016957
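    The linearization that underlies the schema can be shown for a single voxel: taking logs of the standard DKI signal equation gives a model linear in b and b², which one least-squares solve inverts. This is only the building block; the alternating updates, boundary constraints, and smoothing of the paper are not reproduced here.

```python
import numpy as np

def fit_dki_voxel(S, b):
    """Single-voxel linearized DKI fit. Taking logs of
    S(b) = S0 * exp(-b*D + (b*D)**2 * K / 6) gives
    ln S = ln S0 - D*b + (D**2 * K / 6) * b**2, linear in b and b**2,
    so (S0, D, K) follow from one linear least-squares solve."""
    y = np.log(S)
    A = np.stack([np.ones_like(b), -b, b ** 2 / 6.0], axis=1)
    c0, c1, c2 = np.linalg.lstsq(A, y, rcond=None)[0]
    D = max(c1, 1e-9)                      # crude physical lower bound on ADC
    K = np.clip(c2 / D ** 2, 0.0, 3.0)     # crude bounds on AKC
    return np.exp(c0), D, K

# usage: recover assumed parameters from noise-free synthetic signals
b = np.array([0.0, 1.0, 2.0, 3.0])
S = 100.0 * np.exp(-b * 1.1 + (b * 1.1) ** 2 * 0.8 / 6.0)
s0, D, K = fit_dki_voxel(S, b)
```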

  5. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, is proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452

  6. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the

  7. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  8. Observable Priors: Limiting Biases in Estimated Parameters for Incomplete Orbits

    NASA Astrophysics Data System (ADS)

    Kosmo, Kelly; Martinez, Gregory; Hees, Aurelien; Witzel, Gunther; Ghez, Andrea M.; Do, Tuan; Sitarski, Breann; Chu, Devin; Dehghanfar, Arezu

    2017-01-01

    Over twenty years of monitoring stellar orbits at the Galactic center has provided an unprecedented opportunity to study the physics and astrophysics of the supermassive black hole (SMBH) at the center of the Milky Way Galaxy. In order to constrain the mass of and distance to the black hole, and to evaluate its gravitational influence on orbiting bodies, we use Bayesian statistics to infer black hole and stellar orbital parameters from astrometric and radial velocity measurements of stars orbiting the central SMBH. Unfortunately, most of the short-period stars in the Galactic center have periods much longer than our twenty-year time baseline of observations, resulting in incomplete orbital phase coverage, potentially biasing fitted parameters. Using the Bayesian statistical framework, we evaluate biases in the black hole and orbital parameters of stars with varying phase coverage, using various prior models to fit the data. We present evidence that incomplete phase coverage of an orbit causes prior assumptions to bias statistical quantities, and propose a solution to reduce these biases for orbits with low phase coverage. The explored solution assumes uniformity in the observables rather than in the inferred model parameters, as is the current standard method of orbit fitting. Of the cases tested, priors that assume uniform astrometric and radial velocity observables reduce the biases in the estimated parameters. The proposed method will not only improve orbital estimates of stars orbiting the central SMBH, but can also be extended to other orbiting bodies with low phase coverage such as visual binaries and exoplanets.

  9. ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST

    SciTech Connect

    Carlin, Jeffrey L.; Newberg, Heidi Jo; Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong; Beers, Timothy C.; Chen, Li; Hou, Jinliang; Smith, Martin C.; Guhathakurta, Puragra; Lépine, Sébastien; Yanny, Brian; Zheng, Zheng

    2015-07-15

    We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.

  10. Systematic parameter estimation for PEM fuel cell models

    NASA Astrophysics Data System (ADS)

    Carnes, Brian; Djilali, Ned

    The problem of parameter estimation is considered for the case of mathematical models for polymer electrolyte membrane fuel cells (PEMFCs). An algorithm for nonlinear least squares constrained by partial differential equations is defined and applied to estimate effective membrane conductivity, exchange current densities and oxygen diffusion coefficients in a one-dimensional PEMFC model for transport in the principal direction of current flow. Experimental polarization curves are fitted for conventional and low current density PEMFCs. Use of adaptive mesh refinement is demonstrated to increase the computational efficiency.
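    The flavor of fitting polarization curves can be conveyed with a deliberately simplified model: open-circuit voltage minus a Tafel-type kinetic loss and an ohmic loss, which is linear in its parameters. The paper's estimation is PDE-constrained and nonlinear; this sketch only illustrates the curve-fitting objective.

```python
import numpy as np

def fit_polarization(i, V):
    """Fit the simplified (assumed) polarization model
    V = E0 - b*ln(i) - R*i, with Tafel slope b and ohmic resistance R,
    by ordinary linear least squares on the design matrix [1, -ln i, -i]."""
    A = np.stack([np.ones_like(i), -np.log(i), -i], axis=1)
    E0, b, R = np.linalg.lstsq(A, V, rcond=None)[0]
    return E0, b, R

# usage: recover assumed parameters from a noise-free synthetic curve
i = np.linspace(0.01, 1.0, 50)
V = 1.0 - 0.06 * np.log(i) - 0.2 * i
E0, bt, R = fit_polarization(i, V)
```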

  11. Parameter Estimation as a Problem in Statistical Thermodynamics

    NASA Astrophysics Data System (ADS)

    Earle, Keith A.; Schneider, David J.

    2011-03-01

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  12. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
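    The underlying heuristic can be sketched as a generic simulated-annealing minimizer (illustrative only; the ETAS log-likelihood itself is not reproduced here, and a simple test function stands in for it):

```python
import numpy as np

def anneal(f, x0, bounds, n_iter=20000, T0=1.0, seed=1):
    """Minimal simulated-annealing minimizer: random-walk proposals whose
    scale shrinks with temperature; uphill moves are accepted with
    probability exp(-dE/T) under a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = np.asarray(x0, float)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        T = max(T0 * (1.0 - k / n_iter), 1e-3)           # linear cooling
        step = 0.1 * (hi - lo) * T                       # shrink with T
        prop = np.clip(x + step * rng.standard_normal(len(x)), lo, hi)
        fp = f(prop)
        if fp < fx or rng.random() < np.exp(-(fp - fx) / T):
            x, fx = prop, fp
            if fx < fbest:
                best, fbest = x.copy(), fx               # track best-so-far
    return best, fbest

# usage: minimize a toy 2-D quadratic stand-in for a negative log-likelihood
xb, fb = anneal(lambda v: float(v @ v), [4.0, -4.0], [(-5, 5), (-5, 5)])
```

    The accept-uphill rule is what lets the chain escape local minima early on, while the cooling schedule makes the search increasingly greedy.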

  13. Estimation of the parameters of ETAS models by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Lombardi, Anna Maria

    2015-02-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  14. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036

  15. Parameter estimation in X-ray astronomy using maximum likelihood

    NASA Technical Reports Server (NTRS)

    Wachter, K.; Leach, R.; Kellogg, E.

    1979-01-01

    Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
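    The Poisson maximum-likelihood machinery can be illustrated on an assumed toy power-law spectrum, with the amplitude profiled out analytically and the index found by a 1-D scan; the paper's actual spectral functions and efficient-scores method are not shown.

```python
import numpy as np

def fit_powerlaw_poisson(E, n):
    """Poisson maximum-likelihood fit of an assumed toy spectral model
    lambda_i = A * E_i**(-alpha) to binned counts n_i. The amplitude A is
    profiled out analytically at each alpha; alpha is found by scanning the
    profile negative log-likelihood (constant ln n_i! terms dropped)."""
    def nll(alpha):
        shape = E ** (-alpha)
        A = n.sum() / shape.sum()          # ML amplitude at fixed alpha
        lam = A * shape
        return np.sum(lam - n * np.log(lam))
    alphas = np.linspace(0.0, 5.0, 2001)
    alpha = alphas[np.argmin([nll(a) for a in alphas])]
    return n.sum() / (E ** (-alpha)).sum(), alpha

# usage: idealized noise-free expected counts at an assumed A=50, alpha=2
E = np.linspace(1.0, 10.0, 20)
n = 50.0 * E ** -2.0
A_hat, alpha_hat = fit_powerlaw_poisson(E, n)
```

    Starting from Poisson probabilities rather than a Gaussian approximation is exactly what keeps the method valid at the low counts per bin typical of X-ray spectra.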

  16. Estimation of drying parameters in rotary dryers using differential evolution

    NASA Astrophysics Data System (ADS)

    Lobato, F. S.; Steffen, V., Jr.; Arruda, E. B.; Barrozo, M. A. S.

    2008-11-01

    Inverse problems arise from the necessity of obtaining parameters of theoretical models to simulate the behavior of the system for different operating conditions. Several heuristics that mimic different phenomena found in nature have been proposed for the solution of this kind of problem. In this work, the Differential Evolution Technique is used for the estimation of drying parameters in realistic rotary dryers, which is formulated as an optimization problem by using experimental data. Test case results demonstrate both the feasibility and the effectiveness of the proposed methodology.
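    A minimal DE/rand/1/bin optimizer conveys the heuristic; the drying model below is a hypothetical thin-layer curve with assumed parameters, not the realistic rotary-dryer model of the paper.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           n_gen=200, seed=3):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomially cross with the current member, keep the better."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    d = len(lo)
    pop = lo + (hi - lo) * rng.random((pop_size, d))
    cost = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True        # ensure one mutated gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= cost[i]:                    # greedy selection
                pop[i], cost[i] = trial, ft
    k = np.argmin(cost)
    return pop[k], cost[k]

# usage: estimate (k, M_eq) of a hypothetical drying curve
# M(t) = M_eq + (1 - M_eq) * exp(-k t) from noise-free synthetic data
t = np.linspace(0.0, 10.0, 40)
data = 0.1 + 0.9 * np.exp(-0.3 * t)
sse = lambda p: float(np.sum((p[1] + (1 - p[1]) * np.exp(-p[0] * t) - data) ** 2))
best, best_cost = differential_evolution(sse, [(0.01, 2.0), (0.0, 0.5)])
```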

  17. Recursions for statistical multiple alignment

    PubMed Central

    Hein, Jotun; Jensen, Jens Ledet; Pedersen, Christian N. S.

    2003-01-01

    Algorithms are presented that allow the calculation of the probability of a set of sequences related by a binary tree that have evolved according to the Thorne–Kishino–Felsenstein model for a fixed set of parameters. The algorithms are based on a Markov chain generating sequences and their alignment at nodes in a tree. Depending on whether the complete realization of this Markov chain is decomposed into the first transition and the rest of the realization or the last transition and the first part of the realization, two kinds of recursions are obtained that are computationally similar but probabilistically different. The running time of the algorithms is O(∏_{i=1}^{d} L_i), where L_i is the length of the ith observed sequence and d is the number of sequences. An alternative recursion is also formulated that uses only a Markov chain involving the inner nodes of a tree. PMID:14657378

  18. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements; in addition to these, mass and solid propellant burn depth are carried as "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center of gravity, atmospheric winds, etc. Propulsion parameter state elements are included not as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
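    The device of carrying parameters as extra state elements can be sketched with a two-state toy problem: scalar dynamics with one unknown parameter, filtering only (no Bryson-Frazier smoother, and nothing of the Shuttle's twelve-state dynamics).

```python
import numpy as np

def ekf_joint_estimation(z, dt=0.05):
    """Augmented-state EKF for a toy scalar system x_dot = -theta*x measured
    directly: the unknown parameter theta is appended to the state vector
    and estimated jointly with x."""
    s = np.array([z[0], 0.5])              # state [x, theta]; crude theta guess
    P = np.diag([1.0, 1.0])
    Q = np.diag([1e-8, 1e-8])              # small process noise
    R = 1e-4                               # measurement noise variance
    H = np.array([[1.0, 0.0]])
    theta_hist = []
    for zk in z[1:]:
        x, th = s
        s = np.array([x - dt * th * x, th])            # Euler prediction
        F = np.array([[1.0 - dt * th, -dt * x],        # Jacobian of the
                      [0.0, 1.0]])                     # prediction step
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + R                    # innovation variance
        K = (P @ H.T) / S                              # Kalman gain (2x1)
        s = s + K[:, 0] * (zk - s[0])                  # measurement update
        P = (np.eye(2) - K @ H) @ P
        theta_hist.append(s[1])
    return np.array(theta_hist)

# usage: noise-free simulation of the true system with theta = 1.2
dt = 0.05
xs = [1.0]
for _ in range(300):
    xs.append(xs[-1] - dt * 1.2 * xs[-1])
theta_est = ekf_joint_estimation(np.array(xs), dt)
```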

  19. Estimation of multidimensional precipitation parameters by areal estimates of oceanic rainfall

    NASA Technical Reports Server (NTRS)

    Valdes, J. B.; Nakamoto, S.; Shen, S. S. P.; North, G. R.

    1990-01-01

    The parameters of the multidimensional precipitation model proposed by Waymire et al. (1984) are estimated using the areal-averaged radar measurements of precipitation of the Global Atlantic Tropical Experiment (GATE) data set. The procedure followed was the fitting of the first- and second-order moments at different aggregation scales by nonlinear regression techniques. The numerical estimates of the parameters using different subsets of GATE information were reasonably stable, i.e., they were not affected by changes of the area-averaging size, temporal length of the records, and percentage of areal coverage of rainfall. This suggests that the estimation procedure is relatively robust and suitable to estimate the parameters of the multidimensional model in areas of sparse density of rain gages. The use of the space-time spectrum of rainfall to help in the determination of sampling errors due to intermittent visits of future space-borne low-altitude sensors of precipitation is also discussed.

  20. CosmoSIS: A System for MC Parameter Estimation

    SciTech Connect

    Zuntz, Joe; Paterno, Marc; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  1. On Using Exponential Parameter Estimators with an Adaptive Controller

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  2. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.

  3. Probabilistic estimation of the constitutive parameters of polymers

    NASA Astrophysics Data System (ADS)

    Foley, J. R.; Jordan, J. L.; Siviour, C. R.

    2012-08-01

    The Mulliken-Boyce constitutive model predicts the dynamic response of crystalline polymers as a function of strain rate and temperature. This paper describes the Mulliken-Boyce model-based estimation of the constitutive parameters in a Bayesian probabilistic framework. Experimental data from dynamic mechanical analysis and dynamic compression of PVC samples over a wide range of strain rates are analyzed. Both experimental uncertainty and natural variations in the material properties are simultaneously considered as independent and joint distributions; the posterior probability distributions are shown and compared with prior estimates of the material constitutive parameters. Additionally, particular statistical distributions are shown to be effective at capturing the rate and temperature dependence of internal phase transitions in DMA data.

  4. Bayesian parameter estimation for chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie

    2016-09-01

The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with those from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.

  5. Estimation of Geodetic and Geodynamical Parameters with VieVS

    NASA Technical Reports Server (NTRS)

Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, Tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald

    2010-01-01

Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the single sessions are connected by stacking at the normal-equation level. We can determine time-independent geodynamical parameters such as the Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of the Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency-dependent values in the diurnal band together with the resonance frequency of Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.

  6. Identification of vehicle parameters and estimation of vertical forces

    NASA Astrophysics Data System (ADS)

    Imine, H.; Fridman, L.; Madani, T.

    2015-12-01

The aim of the present work is to estimate the vertical forces and to identify the unknown dynamic parameters of a vehicle using the sliding mode observers approach. The estimation of vertical forces requires good knowledge of dynamic parameters such as the damping coefficient, spring stiffness and unsprung masses. In this paper, suspension stiffness and unsprung masses have been identified by the Least Square Method. Real-time tests have been carried out on an instrumented static vehicle, excited vertically by hydraulic jacks. The vehicle is equipped with different sensors in order to measure its dynamics. The measurements coming from these sensors have been considered as unknown inputs of the system. However, only the roll angle and the suspension deflection measurements have been used to implement the observer. Experimental results are presented and discussed to show the quality of the proposed approach.
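The Least Square Method identification step above can be sketched on a toy single-corner suspension model m·a + c·v + k·x = F. The mass, stiffness, damping, and jack-like excitation below are hypothetical, and the sliding mode observer itself is not reproduced; this only shows the linear-regression character of the stiffness/damping fit.

```python
import numpy as np

# Hypothetical single-corner model  m*a + c*v + k*x = F.
# Simulate noisy force measurements under an imposed vertical motion,
# then recover (c, k) by ordinary least squares (normal equations).
rng = np.random.default_rng(1)
m, c_true, k_true = 50.0, 1.2e3, 2.0e4   # made-up mass, damping, stiffness

t = np.linspace(0.0, 2.0, 400)
x = 0.01 * np.sin(2 * np.pi * 1.5 * t)   # imposed displacement (hydraulic jack)
v = np.gradient(x, t)                    # numerical velocity
a = np.gradient(v, t)                    # numerical acceleration
F = m * a + c_true * v + k_true * x
F += 0.5 * rng.standard_normal(F.size)   # sensor noise

A = np.column_stack([v, x])              # regressor matrix [v, x]
c_hat, k_hat = np.linalg.lstsq(A, F - m * a, rcond=None)[0]
```

With the known mass moved to the left-hand side, the remaining parameters enter linearly, which is why a least-squares solve suffices for this part of the problem.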

  7. An Integrated Tool for Estimation of Material Model Parameters (PREPRINT)

    DTIC Science & Technology

    2010-04-01

...vf, and wf. The filtered v profiles are shown in Figure 4. For the plastic deformation data we found that the filtering could not correct the...wf near the top right corner. We need to use the vf data for our parameter estimation. Since the geometry and loading are symmetric in the FEM

  8. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.
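A minimal sketch of this small-count setting, assuming a locally constant Poisson rate estimated by a sliding-window mean; the window length and linear drift are illustrative choices, not the authors' exact formulation.

```python
import math
import random

# Small event counts (e.g. weekly accidents) drawn from a Poisson process
# whose rate drifts slowly in time; a Gaussian approximation is poor here.
random.seed(42)

def poisson_sample(lam):
    """Knuth's method; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

true_rate = [1.0 + 0.02 * t for t in range(200)]      # drifts 1.0 -> ~5.0
counts = [poisson_sample(lam) for lam in true_rate]

def windowed_mle(counts, half_width=15):
    """Local MLE of a Poisson rate: the mean count in a centred window."""
    est = []
    for t in range(len(counts)):
        lo, hi = max(0, t - half_width), min(len(counts), t + half_width + 1)
        est.append(sum(counts[lo:hi]) / (hi - lo))
    return est

rate_hat = windowed_mle(counts)
```

The window mean is the maximum-likelihood estimate for a Poisson rate assumed constant within the window, so the window length trades tracking speed against variance.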

  9. Estimation of discontinuous coefficients and boundary parameters for hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Lamm, P. K.; Murphy, K. A.

    1986-01-01

    The problem of estimating discontinuous coefficients, including locations of discontinuities, that occur in second order hyperbolic systems typical of those arising in I-D surface seismic problems is discussed. In addition, the problem of identifying unknown parameters that appear in boundary conditions for the system is treated. A spline-based approximation theory is presented, together with related convergence findings and representative numerical examples.

  10. Rapid estimation of high-parameter auditory-filter shapes.

    PubMed

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M

    2014-10-01

A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that auditory-filter estimates are narrower for forward masking than for simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials.

  11. Estimating Hydraulic Parameters When Poroelastic Effects Are Significant

    USGS Publications Warehouse

    Berg, S.J.; Hsieh, P.A.; Illman, W.A.

    2011-01-01

For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.

  12. Hydraulic parameters estimation from well logging resistivity and geoelectrical measurements

    NASA Astrophysics Data System (ADS)

    Perdomo, S.; Ainchil, J. E.; Kruse, E.

    2014-06-01

In this paper, a methodology is suggested for deriving hydraulic parameters, such as hydraulic conductivity or transmissivity, by combining classical hydrogeological data with geophysical measurements. Transmissivity and conductivity values estimated with this approach can reduce uncertainties in numerical model calibration and improve data coverage, reducing the time and cost of a hydrogeological investigation at a regional scale. The conventional estimation of hydrogeological parameters needs to be done by analyzing well data or laboratory measurements. Furthermore, to make a regional survey many wells should be considered, and the location of each one plays an important role in the interpretation stage. For this reason, the use of geoelectrical methods arises as an effective complementary technique, especially in developing countries where it is necessary to optimize resources. By combining hydraulic parameters from pumping tests and electrical resistivity from well logging profiles, it was possible to adjust three empirical laws in a semi-confined alluvial aquifer in the northeast of the province of Buenos Aires (Argentina). These relations were also tested for use with surficial geoelectrical data. The hydraulic conductivity and transmissivity estimated in porous material agreed with the values expected for the region (20 m/day; 457 m2/day) and are very consistent with previous results from other authors (25 m/day and 500 m2/day). The methodology described could be used with similar data sets and applied to other areas with similar hydrogeological conditions.

  13. Estimating cellular parameters through optimization procedures: elementary principles and applications

    PubMed Central

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest. PMID:25784880

  14. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
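The SSE-minimization idea above can be sketched with plain gradient descent on a toy model y = a·exp(−b·t); the model, step size, and data are illustrative choices for the sketch, not taken from the article.

```python
import math

# Fit y = a*exp(-b*t) to noiseless synthetic data by gradient descent
# on the (mean) sum of squared errors. Illustrative example only.
t = [0.04 * i for i in range(51)]                     # 0 .. 2
a_true, b_true = 2.0, 1.0
y = [a_true * math.exp(-b_true * ti) for ti in t]     # synthetic "data"

a, b = 1.0, 0.5                                       # initial guess
lr = 0.3                                              # learning rate
for _ in range(20000):
    ga = gb = 0.0
    for ti, yi in zip(t, y):
        e = math.exp(-b * ti)
        r = a * e - yi                                # residual
        ga += 2 * r * e / len(t)                      # d(mean SSE)/da
        gb += 2 * r * (-a * ti * e) / len(t)          # d(mean SSE)/db
    a -= lr * ga
    b -= lr * gb

sse = sum((a * math.exp(-b * ti) - yi) ** 2 for ti, yi in zip(t, y))
```

For this well-behaved landscape plain descent finds the (global) minimum; as the article notes, rougher cost surfaces need stochastic perturbations or sampling to escape local minima.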

  15. Estimation of longitudinal aircraft characteristics using parameter identification techniques

    NASA Technical Reports Server (NTRS)

    Wingrove, R. C.

    1978-01-01

    This study compares the results from different parameter identification methods used to determine longitudinal aircraft characteristics from flight data. In general, these comparisons have found that the estimated short-period dynamics (natural frequency, damping, transfer functions) are only weakly affected by the type of identification method, however, the estimated aerodynamic coefficients may be strongly affected by the type of identification method. The estimated values for aerodynamic coefficients were found to depend upon the type of math model and type of test data used with each of the identification methods. The use of fairly complete math models and the use of long data lengths, combining both steady and nonsteady motion, are shown to provide aerodynamic coefficient values that compare favorably with the results from other testing methods such as steady-state flight and full-scale wind-tunnel experiments.

  16. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351

  17. Accelerated gravitational wave parameter estimation with reduced order modeling.

    PubMed

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  18. Automatic estimation of elasticity parameters in breast tissue

    NASA Astrophysics Data System (ADS)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet which also contains the patient's study ID. This spreadsheet is easily available to physicians and clinical staff for further evaluation, increasing efficiency. The algorithm simplifies handling, especially in the performance and evaluation of clinical trials. The SWE processing method allows physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort and simplifies evaluation of data in clinical trials. Furthermore, reproducibility will be improved.

  19. CosmoSIS: A system for MC parameter estimation

    DOE PAGES

    Bridle, S.; Dodelson, S.; Jennings, E.; ...

    2015-12-23

CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.

  20. CosmoSIS: A system for MC parameter estimation

    SciTech Connect

    Bridle, S.; Dodelson, S.; Jennings, E.; Kowalkowski, J.; Manzotti, A.; Paterno, M.; Rudd, D.; Sehrish, S.; Zuntz, J.

    2015-12-23

CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
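The Markov Chain Monte Carlo technique at the core of such a framework can be sketched with a minimal Metropolis-Hastings sampler. The Gaussian "likelihood module" below is a stand-in for a real physics module, and all numbers are invented; this is not CosmoSIS code.

```python
import math
import random

# Minimal Metropolis-Hastings sampler over one parameter theta with a
# flat prior. The Gaussian log-likelihood plays the role of a module.
random.seed(7)

def log_like(theta, data, sigma=1.0):
    return -0.5 * sum((x - theta) ** 2 for x in data) / sigma ** 2

data = [1.8, 2.1, 2.4, 1.9, 2.3]        # toy "observations"
theta, step = 0.0, 0.5                  # start and proposal width
chain = []
for _ in range(20000):
    prop = theta + random.gauss(0.0, step)
    # accept with probability min(1, L(prop)/L(theta)); compare in log space
    if log_like(prop, data) - log_like(theta, data) > math.log(1.0 - random.random()):
        theta = prop
    chain.append(theta)

burn = chain[5000:]                     # discard burn-in
post_mean = sum(burn) / len(burn)
```

With a flat prior the posterior mean equals the sample mean of the data, which makes the chain easy to check; a framework like CosmoSIS wraps exactly this accept/reject loop around expensive physics likelihoods.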

  1. Accelerated Gravitational Wave Parameter Estimation with Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Canizares, Priscilla; Field, Scott E.; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-01

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ˜30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ˜70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  2. Estimates of Running Ground Reaction Force Parameters from Motion Analysis.

    PubMed

    Pavei, Gaspare; Seminati, Elena; Storniolo, Jorge L L; Peyré-Tartaruga, Leonardo A

    2017-02-01

We compared running mechanics parameters determined from ground reaction force (GRF) measurements with estimated forces obtained from double differentiation of kinematic (K) data from motion analysis in a broad spectrum of running speeds (1.94-5.56 m⋅s⁻¹). Data were collected through a force-instrumented treadmill and compared at different sampling frequencies (900 and 300 Hz for GRF, 300 and 100 Hz for K). Vertical force peak, shape, and impulse were similar between K methods and GRF. Contact time, flight time, and vertical stiffness (kvert) obtained from K showed the same trend as GRF with differences < 5%, whereas leg stiffness (kleg) was not correctly computed by kinematics. The results revealed that the main vertical GRF parameters can be computed by the double differentiation of the body center of mass properly calculated by motion analysis. The present model provides an alternative accessible method for determining temporal and kinetic parameters of running without an instrumented treadmill.

  3. Orientational order parameter estimated from molecular polarizabilities - an optical study

    NASA Astrophysics Data System (ADS)

    Lalitha Kumari, J.; Datta Prasad, P. V.; Madhavi Latha, D.; Pisipati, V. G. K. M.

    2012-01-01

An optical study of N-(p-n-alkyloxybenzylidene)-p-n-butyloxyanilines, nO.O4 compounds with the alkoxy chain number n = 1, 3, 6, 7, and 10 has been carried out by measuring the refractive indices using a modified spectrometer and by direct measurement of birefringence employing Newton's rings method. Further, the molecular polarizability anisotropies are evaluated using the Lippincott δ-function model, the molecular vibration method, Haller's extrapolation method, and the scaling factor method. The molecular polarizabilities αe and α0 are calculated using Vuk's isotropic and Neugebauer's anisotropic local field models. The order parameter S is estimated by employing the molecular polarizability values determined from experimental refractive indices and density data together with the polarizability anisotropy values. Further, the order parameter S is also obtained directly from the birefringence data. A comparison has been carried out among the order parameters obtained in these different ways, and the results are compared with the body of data available in the literature.

  4. Recursive Rational Choice.

    DTIC Science & Technology

    1981-11-01

402. Putnam, Hilary, [1973], "Recursive Functions and Hierarchies", American Mathematical Monthly, Vol. 80, pp. 68-86. Rice, H.G., [1954...point the reader is also referred to Putnam [1973]. The following are useful facts (cf. Kleene [1950]) that we will make reference to subsequently. The...Hierarchy (cf. Putnam [1973], pp. 77-80 and Hermes [1965] pp. 192-202). The classification is made in terms of the structure of the definitions that

  5. Recursive scaled DCT

    NASA Astrophysics Data System (ADS)

    Hou, Hsieh-Sheng

    1991-12-01

Among the various image data compression methods, the discrete cosine transform (DCT) has become the most popular for performing gray-scale image compression and decompression. However, the computational burden in performing a DCT is heavy. For example, in a regular DCT, at least 11 multiplications are required for processing an 8 × 1 image block. The idea of the scaled-DCT is that more than half the multiplications in a regular DCT are unnecessary, because they can be formulated as scaling factors of the DCT coefficients, and these coefficients may be scaled back in the quantization process. A fast recursive algorithm for computing the scaled-DCT is presented in this paper. The formulations are derived based on practical considerations of applying the scaled-DCT algorithm to image data compression and decompression. These include considerations of the flexibility of processing different sizes of DCT blocks and the actual savings in the required number of arithmetic operations. Due to the recursive nature of this algorithm, a higher-order scaled-DCT can be obtained from two lower-order scaled DCTs. Thus, a scaled-DCT VLSI chip designed according to this algorithm may process different sizes of DCT under software control. To illustrate the unique properties of this recursive scaled-DCT algorithm, the one-dimensional formulations are presented with several examples exhibited in signal flow-graph forms.
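For reference, the quantity being computed is the DCT-II of a block; the naive O(N²) sketch below makes that concrete. It is not Hou's recursive scaled algorithm — in the scaled variant the per-coefficient normalization constants would be folded into the quantizer rather than computed here.

```python
import math

# Reference (non-fast) orthonormal DCT-II of an N-point block.
def dct2(x):
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * s)
    return out

block = [52, 55, 61, 66, 70, 61, 64, 73]   # an 8 x 1 sample block
coeffs = dct2(block)
```

Because this normalization makes the transform orthonormal, energy is preserved (Parseval), which gives a convenient correctness check for any fast implementation built on top of it.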

  6. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  7. Sediment load estimation using statistical distributions with streamflow dependent parameters

    NASA Astrophysics Data System (ADS)

    Mailhot, A.; Rousseau, A. N.; Talbot, G.; Quilbé, R.

    2005-12-01

The classical approaches to estimate sediment and chemical loads are all deterministic: averaging methods, ratio estimators, regression methods (rating curves) and planning-level load estimation methods. However, none of these methods is satisfactory since they are often inaccurate and do not take into account or quantify uncertainty. To fill this gap, statistical methods have to be investigated. This presentation proposes a new statistical method in which sediment concentration is treated as a random variable described by distribution functions. Three types of distributions are considered: Log-Normal, Gamma and Weibull distributions. Correlation between sediment concentrations and streamflows is integrated into the model by assuming that the distribution parameters (mean and coefficient of variation) are related to streamflow through several different functional forms: exponential, quadratic and power-law forms for the mean; constant and linear for the coefficient of variation. Parameter estimation is realized through maximization of the likelihood function. This approach is applied to a data set (1989 to 2004) from the Beaurivage River (Quebec, Canada) with weekly to monthly sampling for sediment concentration. A comparison of different models (selection of a distribution function with functional forms relating the mean and the coefficient of variation to streamflow) shows that the Log-Normal distribution with a power-law mean and a coefficient of variation independent of streamflow provides the best result. When comparing annual load results with those obtained using deterministic methods, we observe that ratio estimator values are rarely within the [0.1, 0.9] quantile interval. For the 1997-2004 period, ratio estimator values are almost systematically smaller than the 0.1 quantile. This could presumably be due to the small number of sediment concentration samples for these years. This study suggests that, if deterministic methods such as the ratio estimator
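The best-performing model family above (Log-Normal scatter about a power-law mean C = a·Q^b) can be sketched by maximum likelihood in log space, where with a constant log-scale spread the fit reduces to ordinary linear regression. The data below are synthetic and the parameter values invented; the paper's full likelihood over competing distributions is not reproduced.

```python
import math
import random

# Synthetic rating-curve data: C = a * Q**b with log-normal scatter.
random.seed(3)
a_true, b_true, sd_log = 5.0, 1.4, 0.3
Q = [10 ** random.uniform(0.0, 2.0) for _ in range(300)]   # streamflows 1..100
C = [a_true * q ** b_true * math.exp(random.gauss(0.0, sd_log)) for q in Q]

# ln C = ln a + b * ln Q + eps  ->  MLE of (ln a, b) is linear regression.
x = [math.log(q) for q in Q]
y = [math.log(c) for c in C]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
a_hat = math.exp(ybar - b_hat * xbar)
```

Note that exp of the fitted intercept estimates the median (not the mean) concentration at unit flow; a log-normal mean correction would add half the residual variance in the exponent.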

  8. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed that combines the kernel recursive least squares algorithm with the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares (KRLS) algorithm in a reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and nonlinear Rayleigh fading channel tracking, and compare its tracking performance with other existing algorithms.
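    As background, a minimal (non-sparsified) KRLS update can be written with a block-matrix inverse; this sketch omits the Kalman-filter state model and the dictionary sparsification used in practical Ex-KRLS variants:

```python
import numpy as np

def gauss_kernel(x, y, gamma=5.0):
    return np.exp(-gamma * (x - y) ** 2)

class KRLS:
    """Minimal (non-sparsified) kernel recursive least squares.

    Keeps every sample and updates the regularized kernel-matrix
    inverse with a block-matrix identity, so each step costs O(t^2)
    instead of a fresh O(t^3) solve."""

    def __init__(self, lam=1e-2):
        self.lam = lam
        self.X, self.y = [], []
        self.Qinv = None           # (K + lam*I)^{-1}
        self.alpha = None          # dual weights

    def predict(self, x):
        if not self.X:
            return 0.0
        k = np.array([gauss_kernel(x, xi) for xi in self.X])
        return float(k @ self.alpha)

    def update(self, x, y):
        if not self.X:
            self.X, self.y = [x], [y]
            self.Qinv = np.array([[1.0 / (gauss_kernel(x, x) + self.lam)]])
        else:
            k = np.array([gauss_kernel(x, xi) for xi in self.X])
            z = self.Qinv @ k
            r = gauss_kernel(x, x) + self.lam - k @ z   # Schur complement
            n = len(self.X)
            Q = np.empty((n + 1, n + 1))
            Q[:n, :n] = self.Qinv + np.outer(z, z) / r
            Q[:n, n] = -z / r
            Q[n, :n] = -z / r
            Q[n, n] = 1.0 / r
            self.Qinv = Q
            self.X.append(x)
            self.y.append(y)
        self.alpha = self.Qinv @ np.array(self.y)

model = KRLS()
for x in np.linspace(0, 2 * np.pi, 60):
    model.update(x, np.sin(x))
print(model.predict(np.pi / 2))   # close to 1.0
```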

  9. Basin structure of optimization based state and parameter estimation

    NASA Astrophysics Data System (ADS)

    Schumann-Bischoff, Jan; Parlitz, Ulrich; Abarbanel, Henry D. I.; Kostuk, Mark; Rey, Daniel; Eldridge, Michael; Luther, Stefan

    2015-05-01

    Most data based state and parameter estimation methods require suitable initial values or guesses to achieve convergence to the desired solution, which typically is a global minimum of some cost function. Unfortunately, however, other stable solutions (e.g., local minima) may exist and provide suboptimal or even wrong estimates. Here, we demonstrate for a 9-dimensional Lorenz-96 model how to characterize the basin size of the global minimum when applying some particular optimization based estimation algorithm. We compare three different strategies for generating suitable initial guesses, and we investigate the dependence of the solution on the given trajectory segment (underlying the measured time series). To address the question of how many state variables have to be measured for optimal performance, different types of multivariate time series are considered consisting of 1, 2, or 3 variables. Based on these time series, the local observability of state variables and parameters of the Lorenz-96 model is investigated and confirmed using delay coordinates. This result is in good agreement with the observation that correct state and parameter estimation results are obtained if the optimization algorithm is initialized with initial guesses close to the true solution. In contrast, initialization with other exact solutions of the model equations (different from the true solution used to generate the time series) typically fails, i.e., the optimization procedure ends up in local minima different from the true solution. Initialization using random values in a box around the attractor exhibits success rates depending on the number of observables and the available time series (trajectory segment).
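    The random-initialization strategy above can be illustrated on a toy double-well cost: sample starting guesses in a box, run a local optimizer from each, and report the fraction that reach the global minimum. The cost function and optimizer here are simple stand-ins, not the Lorenz-96 setup:

```python
import numpy as np
from scipy.optimize import minimize

# Tilted double-well: two local minima, the one near x = -1 is global.
def cost(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.3 * x[0]

# Locate the global minimum from a start known to lie in its basin.
x_glob = minimize(cost, [-1.0], method="Nelder-Mead").x[0]

# Random initial guesses in a box around both minima.
rng = np.random.default_rng(1)
starts = rng.uniform(-2.0, 2.0, size=(200, 1))
hits = sum(
    abs(minimize(cost, x0, method="Nelder-Mead").x[0] - x_glob) < 1e-2
    for x0 in starts
)
rate = hits / len(starts)
print(f"success rate of random initialization: {rate:.2f}")
```

    The success rate is roughly the fraction of the box covered by the global minimum's basin, which is the quantity the paper characterizes for the Lorenz-96 estimation problem.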

  11. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    PubMed

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life expectancy. For each parameter, experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients aged ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma-related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design.
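    One simple way to combine elicited (P10, median, P90) triplets, offered here only as an illustration since the paper uses a graphical method, is to fit a log-normal to each expert's quantiles and average the resulting CDFs (a linear opinion pool). All numbers below are hypothetical:

```python
import numpy as np
from scipy.stats import lognorm, norm

# Hypothetical expert triplets (P10, median, P90) for annual bleed counts.
experts = [(4.0, 12.0, 36.0), (3.0, 10.0, 40.0), (6.0, 15.0, 30.0)]
z90 = norm.ppf(0.9)

# Fit a log-normal per expert: median fixes the scale, the P10-P90
# spread fixes the shape (symmetric on the log scale, an approximation).
dists = [
    lognorm(s=(np.log(p90) - np.log(p10)) / (2 * z90), scale=med)
    for p10, med, p90 in experts
]

# Linear opinion pool: average the CDFs, then read off the pooled median.
grid = np.linspace(0.1, 100.0, 5000)
pooled_cdf = np.mean([d.cdf(grid) for d in dists], axis=0)
pooled_median = grid[np.searchsorted(pooled_cdf, 0.5)]
print(round(pooled_median, 1))
```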

  12. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values.
    This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, significantly outperforming all the methods previously used for these benchmark problems.

  13. Geomagnetic modeling by optimal recursive filtering

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Estes, R. H.

    1981-01-01

    The results of a preliminary study to determine the feasibility of using Kalman filter techniques for geomagnetic field modeling are given. Specifically, five separate field models were computed using observatory annual means, satellite, survey, and airborne data for the years 1950 to 1976. Each of the individual field models used approximately five years of data. These five models were combined using a recursive information filter (a Kalman filter written in terms of information matrices rather than covariance matrices). The resulting estimate of the geomagnetic field and its secular variation was propagated four years past the data to the time of the MAGSAT data. The accuracy with which this field model matched the MAGSAT data was evaluated by comparison with predictions from other pre-MAGSAT field models. The field estimate obtained by recursive estimation was found to be superior to all other models.
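    The fusion step of an information filter is simple: independent estimates of the same parameter vector are combined by adding their information matrices (inverse covariances) rather than manipulating covariances directly. A minimal sketch with made-up numbers:

```python
import numpy as np

def fuse(estimates, covariances):
    """Combine independent Gaussian estimates in information form."""
    infos = [np.linalg.inv(P) for P in covariances]
    Lam = sum(infos)                              # total information
    eta = sum(I @ x for I, x in zip(infos, estimates))
    P = np.linalg.inv(Lam)
    return P @ eta, P                             # fused mean and covariance

# Two estimates of a 2-vector with different per-component confidence.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([3.0, 2.0]), np.diag([1.0, 1.0])
x, P = fuse([x1, x2], [P1, P2])
print(x)   # inverse-variance weighted: [2.0, 1.6]
```

    Adding information matrices is also why the filter gracefully absorbs the five sub-models described above, one after another.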

  14. Estimation of genetic parameters for reproductive traits in Shall sheep.

    PubMed

    Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein

    2013-06-01

    The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in northwestern Iran. Studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for fixed effects in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P<0.05). Genetic parameters were estimated using the restricted maximum likelihood procedure, under repeatability animal models. Direct heritability estimates were 0.02, 0.01, 0.47, 0.40, 0.15, and 0.03 for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, respectively, and corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW, and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. Results showed that the highest heritability estimates were for LMWLB and LMWLW, suggesting that direct selection based on these traits could be effective. Also, the strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep.

  15. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.

  16. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by randomly sampling from the input probability distribution functions and running the model repeatedly until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them in the uncertainty analysis, we can obtain more consistent results than by using only prior information for the input data: the variance of the uncertain parameters decreases and the likelihood of the observed data improves.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
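    The Monte Carlo procedure described above amounts to sampling inputs from their assumed distributions, running the model for each draw, and summarizing the output quantiles. A toy sketch, with a depth-limited wave-height surrogate standing in for Delft3D (all distributions and the breaking index are assumptions):

```python
import numpy as np

# Toy surrogate for a nearshore model: depth-limited significant wave
# height; inputs are an uncertain offshore height and local depth.
def model(H_off, depth):
    gamma = 0.55                       # assumed breaking index
    return min(H_off, gamma * depth)   # depth-limited wave height [m]

rng = np.random.default_rng(2)
n = 10_000
H_off = rng.normal(2.0, 0.3, n)        # offshore wave height [m]
depth = rng.normal(3.0, 0.4, n)        # local depth [m] (bathymetry error)

# Propagate input uncertainty through the model and summarize.
out = np.array([model(h, d) for h, d in zip(H_off, depth)])
lo, med, hi = np.percentile(out, [5, 50, 95])
print(f"H_s: median {med:.2f} m, 90% interval [{lo:.2f}, {hi:.2f}] m")
```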

  17. Bayesian or Non-Bayesian: A Comparison Study of Item Parameter Estimation in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Gao, Furong; Chen, Lisue

    2005-01-01

    Through a large-scale simulation study, this article compares item parameter estimates obtained by the marginal maximum likelihood estimation (MMLE) and marginal Bayes modal estimation (MBME) procedures in the 3-parameter logistic model. The impact of different prior specifications on the MBME estimates is also investigated using carefully…

  18. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
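    The equation-error idea is that, with the states and their derivatives measured (or numerically differentiated), the dynamics are linear in the unknown aerodynamic derivatives, so ordinary least squares applies directly. A sketch with made-up pitching-moment derivatives (not F-16 values):

```python
import numpy as np

# Simulated regressors: angle of attack, pitch rate, elevator deflection.
rng = np.random.default_rng(3)
n = 2000
alpha = rng.uniform(-0.2, 0.2, n)      # [rad]
q = rng.uniform(-0.5, 0.5, n)          # [rad/s]
de = rng.uniform(-0.1, 0.1, n)         # [rad]

# "True" derivatives (illustrative only) plus measurement noise on Cm.
Cm_a, Cm_q, Cm_de = -0.6, -8.0, -1.2
Cm = Cm_a * alpha + Cm_q * q + Cm_de * de + rng.normal(0.0, 0.01, n)

# Equation-error estimate: one linear least-squares solve.
X = np.column_stack([alpha, q, de])
theta, *_ = np.linalg.lstsq(X, Cm, rcond=None)
print(theta)   # ≈ [-0.6, -8.0, -1.2]
```

    The biases discussed in the paper arise when the regressors themselves are noisy (errors-in-variables), which this idealized sketch does not model.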

  19. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    PubMed Central

    Ojeda, Manoela; Ding, Dan

    2014-01-01

    Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations, and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentages of error (MAPE) were calculated. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracy, especially for the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency, and the ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion. PMID:25105133
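    Stroke counting from an accelerometer trace can be sketched as peak detection on a band-limited signal; the synthetic signal and thresholds below are illustrative, not the paper's algorithm:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic upper-arm acceleration: one oscillation per push plus noise.
fs = 100.0                        # sample rate [Hz]
dur = 10.0                        # trial length [s]
push_freq = 1.2                   # true pushes per second (assumed)
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(7)
acc = np.sin(2 * np.pi * push_freq * t) + 0.2 * rng.normal(size=t.size)

# One peak per push: require positive peaks at least half a push
# period apart so noise wiggles are not double-counted.
peaks, _ = find_peaks(acc, height=0.5, distance=int(fs / push_freq / 2))
strokes = len(peaks)
print(strokes, strokes / dur)     # ≈ 12 strokes, ≈ 1.2 pushes/s
```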

  20. Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2011-01-01

    This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable to attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
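    One ingredient of such a mass budget, the inflation-gas mass, follows directly from the ideal gas law and shows how molecular mass enters the scaling. The geometry and conditions below are assumed for illustration, not EDL-SA values:

```python
# Ideal-gas sketch of one piece of an IAD mass budget: the gas mass
# needed to hold pressure p in volume V scales linearly with molecular
# mass M. The actual EDL-SA relations are dimensionless correlations;
# this only illustrates how gas properties and scale enter.
R = 8.314                  # universal gas constant [J/(mol*K)]

def gas_mass(p, V, M, T):
    """p [Pa], V [m^3], M [kg/mol], T [K] -> gas mass [kg]."""
    return p * V * M / (R * T)

# Assumed: ~50 m^3 inflated volume, 5 kPa inflation pressure, 250 K.
V = 50.0
m_he = gas_mass(5e3, V, 4e-3, 250.0)    # helium
m_n2 = gas_mass(5e3, V, 28e-3, 250.0)   # nitrogen, ~7x heavier
print(round(m_he, 2), round(m_n2, 2))
```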

  1. Plasma parameter estimation from multistatic, multibeam incoherent scatter data

    NASA Astrophysics Data System (ADS)

    Virtanen, I. I.; McKay-Bukowski, D.; Vierinen, J.; Aikio, A.; Fallows, R.; Roininen, L.

    2014-12-01

    Multistatic incoherent scatter radars are superior to monostatic facilities in the sense that multistatic systems can measure plasma parameters from multiple directions in volumes limited by beam dimensions and measurement range resolution. We propose a new incoherent scatter analysis technique that uses data from all receiver beams of a multistatic, multibeam radar system and produces, in addition to the plasma parameters typically measured with monostatic radars, estimates of ion velocity vectors and ion temperature anisotropies. Because the total scattered energy collected with the remote receivers of a modern multistatic, multibeam radar system may even exceed the energy collected with the core transmit-and-receive site, the remote data improve the accuracy of all plasma parameter estimates, including those that could be measured with the core site alone. We apply the new multistatic analysis method to data measured by the tristatic European Incoherent Scatter VHF radar and the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA) multibeam receiver and show that a significant improvement in accuracy is obtained by adding KAIRA data in the multistatic analysis. We also demonstrate the development of a pronounced ion temperature anisotropy during high-speed ionospheric plasma flows in substorm conditions.

  2. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    PubMed

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate and robust characteristics, which increase the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process in which about 400 data points are used.
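    The building block behind such adaptive identifiers is the RLS recursion with a forgetting factor, which lets the parameter estimate track variations. A generic linear-regression sketch (not the paper's Wiener-model algorithm):

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam, so the
    estimate can track slowly (or abruptly) varying parameters."""

    def __init__(self, n, lam=0.98):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = 1e4 * np.eye(n)        # inverse-correlation matrix
        self.lam = lam

    def update(self, phi, y):
        P_phi = self.P @ phi
        k = P_phi / (self.lam + phi @ P_phi)          # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, P_phi)) / self.lam
        return self.theta

rng = np.random.default_rng(4)
est = RLS(2)
true = np.array([1.5, -0.7])
for step in range(500):
    if step == 250:                     # abrupt parameter change to track
        true = np.array([0.5, 0.9])
    phi = rng.normal(size=2)
    y = phi @ true + 0.01 * rng.normal()
    est.update(phi, y)
print(est.theta)   # ≈ [0.5, 0.9] after tracking the change
```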

  3. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equations as if they were known perfectly, with those values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.

  4. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
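    The conventional SA loop described above can be sketched in a few lines; the objective, cooling schedule, and step size are arbitrary choices for illustration (this is plain SA, not the RBSA variant):

```python
import math
import random

# Multimodal 1-D objective: a shallow bowl with many sinusoidal dips.
def objective(x):
    return 0.1 * x * x + math.sin(3.0 * x)

random.seed(5)
x = 8.0
fx = objective(x)
best_x, best_f = x, fx
T, cooling = 2.0, 0.995               # initial temperature and decay
for _ in range(4000):
    cand = x + random.gauss(0.0, 1.0)  # random step in parameter space
    fc = objective(cand)
    # Always accept improvements; accept uphill moves with prob exp(-d/T),
    # which lets the search escape local minima while T is still high.
    if fc < fx or random.random() < math.exp(-(fc - fx) / T):
        x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    T *= cooling                       # geometric cooling schedule
print(best_x, best_f)
```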

  5. Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control

    NASA Astrophysics Data System (ADS)

    Eshak, Peter B.

    Research efforts have increased in recent years toward the development of intelligent fault tolerant control laws, which are capable of helping the pilot safely maintain aircraft control at post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online PID to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws. This controller is known as the hybrid adaptive controller. This last design outperformed the two earlier ones, requiring less neural-network effort and providing better tracking quality. The performance of the online PID has an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, PID is not perfect and the online estimation process has some inherent issues: the online PID estimates are primarily affected by delays and biases. In order to ensure that reliable estimates are passed to the controller, the estimator requires some time to converge. Moreover, the estimator will often converge to a biased value. This thesis conducts a sensitivity analysis for the estimation issues, delay and bias, and their effect on the tracking quality. In addition, the performance of the hybrid controller as compared to the direct adaptive controller is explored. In order to serve this purpose, a simulation environment in MATLAB/SIMULINK has been created.
The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to

  6. Estimating Regression Parameters in an Extended Proportional Odds Model

    PubMed Central

    Chen, Ying Qing; Hu, Nan; Cheng, Su-Chun; Musoke, Philippa; Zhao, Lue Ping

    2012-01-01

    The proportional odds model may serve as a useful alternative to the Cox proportional hazards model to study association between covariates and their survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates the so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed to estimate the extended model. Our methods are demonstrated by Monte-Carlo simulations, and applied to a landmark randomized clinical trial of a short course Nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type-1 (HIV-1). Additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial. PMID:22904583

  7. Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Murphy, Patrick C.

    1998-01-01

    Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation, based on indicial functions, separated the resulting aerodynamic forces and moments into static terms, purely rotary terms, and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.

  8. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
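    A fixed-effects version of the Gompertz fit can be sketched with nonlinear least squares; the mixed model in the paper would additionally place bird-specific random effects on the parameters. All values below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz growth curve: asymptotic weight Wm, displacement b, rate k.
def gompertz(t, Wm, b, k):
    return Wm * np.exp(-b * np.exp(-k * t))

# Synthetic body-weight data with multiplicative (heterogeneous) noise.
rng = np.random.default_rng(6)
t = np.linspace(0.0, 60.0, 40)            # age [d]
true = (4000.0, 4.5, 0.08)                # illustrative parameter values
w = gompertz(t, *true) * (1.0 + 0.03 * rng.normal(size=t.size))

popt, pcov = curve_fit(gompertz, t, w, p0=(3000.0, 4.0, 0.1))
print(popt)   # ≈ [4000, 4.5, 0.08]
```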

  9. Key parameters for precise lateral displacement estimation in ultrasound elastography.

    PubMed

    Luo, Jianwen; Konofagou, Elisa E

    2009-01-01

    Complementary to axial estimates, lateral and elevational displacements and strains can provide important information on the mechanical properties of biological soft tissues. In this paper, the effects of key parameters on lateral displacement estimation were investigated in simulations and validated in phantom experiments. The performance of the lateral estimator was evaluated by measuring its associated bias and jitter (i.e., standard deviation). Simulation results showed that the bias and jitter undergo periodic variations depending on the lateral displacement, with a period equal to the pitch (i.e., the distance between adjacent elements). The performance of the lateral estimation improved when a smaller pitch, or a larger beamwidth, was used; the effects of the pitch were greater than those of the beamwidth. The phantom experiment results were in good agreement with the simulation findings, including the periodic variation of performance with lateral displacement and the effects of pitch and beamwidth. In conclusion, smaller pitches and wider beamwidths were found to be key in reducing the jitter error in lateral displacement estimation. The same results also hold for tracking in the elevational direction.

  10. Error estimation and adaptivity for transport problems with uncertain parameters

    NASA Astrophysics Data System (ADS)

    Sahni, Onkar; Li, Jason; Oberai, Assad

    2016-11-01

Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and a generalized polynomial chaos based spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).

  11. Acoustical estimation of parameters of porous road pavement

    NASA Astrophysics Data System (ADS)

    Valyaev, V. Yu.; Shanin, A. V.

    2012-11-01

In the simplest case, porous road pavement of a known thickness is described by such parameters as porosity, tortuosity, and flow resistance. The problem of estimating these parameters from an acoustic signal reflected by the pavement is investigated in this paper. It is shown that the problem can be solved by an experiment conducted in the time domain (i.e., the pulse response of the medium is recorded). The incident sound wave strikes the surface between the pavement and the air at a grazing angle to improve penetration into the porous medium. The procedure for computing the pulse response using the Morse-Ingard model is described in detail.

  12. Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation

    NASA Astrophysics Data System (ADS)

    Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan

    2016-08-01

The aim of this study is to derive temperature, pressure and humidity observations using spherical harmonics modelling and to interpolate them for the derivation of precipitable water vapor (PWV) at TUSAGA-Active stations in the test area encompassing 38.0°-42.0° northern latitudes and 28.0°-34.0° eastern longitudes of Turkey. In conclusion, the meteorological parameters computed by using GNSS observations for the study area have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88% in humidity. Considering studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).

  13. Earth-moon system: Dynamics and parameter estimation

    NASA Technical Reports Server (NTRS)

    Breedlove, W. J., Jr.

    1975-01-01

    A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.

  14. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

  15. A Modified Rodrigues Parameter-based Nonlinear Observer Design for Spacecraft Gyroscope Parameters Estimation

    NASA Astrophysics Data System (ADS)

    Yong, Kilyuk; Jo, Sujang; Bang, Hyochoong

    This paper presents a modified Rodrigues parameter (MRP)-based nonlinear observer design to estimate bias, scale factor and misalignment of gyroscope measurements. A Lyapunov stability analysis is carried out for the nonlinear observer. Simulation is performed and results are presented illustrating the performance of the proposed nonlinear observer under the condition of persistent excitation maneuver. In addition, a comparison between the nonlinear observer and alignment Kalman filter (AKF) is made to highlight favorable features of the nonlinear observer.

  16. Recursive principal components analysis.

    PubMed

    Voegtlin, Thomas

    2005-10-01

A recurrent linear network can be trained with Oja's constrained Hebbian learning rule. As a result, the network learns to represent the temporal context associated with its input sequence. The operation performed by the network is a generalization of Principal Components Analysis (PCA) to time-series, called Recursive PCA. The representations learned by the network are adapted to the temporal statistics of the input. Moreover, sequences stored in the network may be retrieved explicitly, in the reverse order of presentation, thus providing a straightforward neural implementation of a logical stack.
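The Hebbian update underlying this approach is Oja's rule. A minimal sketch of the static (non-recurrent) case, estimating the first principal component of synthetic 2-D data, might look like this; the learning rate, epoch count, and data distribution are illustrative:

```python
import random

def oja_pc1(samples, eta=0.05, epochs=50):
    """Estimate the first principal component with Oja's rule:
    w <- w + eta * y * (x - y * w), where y = w . x."""
    w = [0.3, 0.7]  # arbitrary non-degenerate initial weight vector
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

random.seed(0)
# zero-mean data whose variance is dominated by the first axis
data = [(random.gauss(0, 1.0), random.gauss(0, 0.1)) for _ in range(200)]
w = oja_pc1(data)
norm = sum(wi * wi for wi in w) ** 0.5
```

Oja's constraint drives the weight vector to unit length while aligning it with the dominant eigenvector of the input covariance; the recursive (temporal-context) extension in the paper feeds the network's own delayed output back as part of the input.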

  17. Recursive Objects--An Object Oriented Presentation of Recursion

    ERIC Educational Resources Information Center

    Sher, David B.

    2004-01-01

    Generally, when recursion is introduced to students the concept is illustrated with a toy (Towers of Hanoi) and some abstract mathematical functions (factorial, power, Fibonacci). These illustrate recursion in the same sense that counting to 10 can be used to illustrate a for loop. These are all good illustrations, but do not represent serious…

  18. Reduced order parameter estimation using quasilinearization and quadratic programming

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Putti, Mario; Yeh, William W.-G.

    2012-06-01

    The ability of a particular model to accurately predict how a system responds to forcing is predicated on various model parameters that must be appropriately identified. There are many algorithms whose purpose is to solve this inverse problem, which is often computationally intensive. In this study, we propose a new algorithm that significantly reduces the computational burden associated with parameter identification. The algorithm is an extension of the quasilinearization approach where the governing system of differential equations is linearized with respect to the parameters. The resulting inverse problem therefore becomes a linear regression or quadratic programming problem (QP) for minimizing the sum of squared residuals; the solution becomes an update on the parameter set. This process of linearization and regression is repeated until convergence takes place. This algorithm has not received much attention, as the QPs can become quite large, often infeasible for real-world systems. To alleviate this drawback, proper orthogonal decomposition is applied to reduce the size of the linearized model, thereby reducing the computational burden of solving each QP. In fact, this study shows that the snapshots need only be calculated once at the very beginning of the algorithm, after which no further calculations of the reduced-model subspace are required. The proposed algorithm therefore only requires one linearized full-model run per parameter at the first iteration followed by a series of reduced-order QPs. The method is applied to a groundwater model with about 30,000 computation nodes where as many as 15 zones of hydraulic conductivity are estimated.
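Stripped of the model-order reduction, one quasilinearization iteration is a linearize-then-regress step. In a one-parameter sketch on a hypothetical exponential-decay model the QP collapses to a scalar normal equation (this is the plain Gauss-Newton special case, not the reduced-order groundwater setup of the paper):

```python
import math

def model(p, t):
    """Hypothetical forward model: exponential decay with rate p."""
    return math.exp(-p * t)

ts = [0.5 * i for i in range(1, 11)]
p_true = 0.8
obs = [model(p_true, t) for t in ts]   # synthetic noiseless observations

p = 0.3  # initial parameter guess
for _ in range(20):
    r = [o - model(p, t) for o, t in zip(obs, ts)]   # residuals
    J = [-t * model(p, t) for t in ts]               # sensitivities dy/dp
    # normal-equation solution of the linearized least-squares problem
    dp = sum(j * e for j, e in zip(J, r)) / sum(j * j for j in J)
    p += dp   # parameter update; repeat until convergence
```

With many parameters and bound constraints the same step becomes a quadratic program, and the reduced-order idea in the paper replaces the full linearized model with its projection onto a snapshot-based subspace.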

  19. Establishing Long-Term Efficacy in Chronic Disease: Use of Recursive Partitioning and Propensity Score Adjustment to Estimate Outcome in MS

    PubMed Central

    Goodin, Douglas S.; Jones, Jason; Li, David; Traboulsee, Anthony; Reder, Anthony T.; Beckmann, Karola; Konieczny, Andreas; Knappertz, Volker

    2011-01-01

Context Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and, thus, are often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome-measures in randomized trials, its impact on long-term function remains uncertain. Objective Data from the 16-year Long-Term Follow-up study of interferon-beta-1b is used to assess the relationship between drug-exposure and long-term disability in MS patients. Design/Setting To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug-exposure was measured as the medication-possession-ratio, adjusted up or down according to multiple different weighting-schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (using any weighting scheme) affected long-term outcome. The optimal cut-point that was used to define “high” or “low” exposure-groups was chosen by the algorithm. Subsequent to verification of an exposure-impact that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis in order to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity-analyses were undertaken using different definitions of long-term outcome and different assumptions about the data. Main Outcome Measure Long-Term Disability. Results In these analyses, the same weighting-scheme was consistently selected by the recursive-partitioning algorithm. This scheme reduced (down-weighted) the effectiveness of drug exposure as either disease duration or disability at treatment-onset increased. Applying this scheme and using propensity-stratification to further mitigate bias, high-exposure had a consistently better clinical outcome compared to low-exposure (Cox proportional hazard ratio = 0.30

  20. Periodic orbits of hybrid systems and parameter estimation via AD.

    SciTech Connect

    Guckenheimer, John.; Phipps, Eric Todd; Casey, Richard

    2004-07-01

Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance

  1. Estimates of genetic parameters for growth traits in Kermani sheep.

    PubMed

    Bahreini Behzadi, M R; Shahroudi, F E; Van Vleck, L D

    2007-10-01

Birth weight (BW), weaning weight (WW), 6-month weight (W6), 9-month weight (W9) and yearling weight (YW) of Kermani lambs were used to estimate genetic parameters. The data were collected from Shahrbabak Sheep Breeding Research Station in Iran during the period of 1993-1998. The fixed effects in the model were lambing year, sex, type of birth and age of dam. The number of days between birth date and the date of obtaining the measurement of each record was used as a covariate. Estimates of (co)variance components and genetic parameters were obtained by restricted maximum likelihood, using single and two-trait animal models. Based on the most appropriate fitted model, direct and maternal heritabilities of BW, WW, W6, W9 and YW were estimated to be 0.10 +/- 0.06 and 0.27 +/- 0.04, 0.22 +/- 0.09 and 0.19 +/- 0.05, 0.09 +/- 0.06 and 0.25 +/- 0.04, 0.13 +/- 0.08 and 0.18 +/- 0.05, and 0.14 +/- 0.08 and 0.14 +/- 0.06, respectively. Direct and maternal genetic correlations between the lamb weights ranged from 0.66 to 0.99 and from 0.11 to 0.99, respectively. The results showed that the maternal influence on lamb weights decreased with age at measurement. Ignoring maternal effects in the model caused overestimation of direct heritability. Maternal effects are significant sources of variation for growth traits and ignoring maternal effects in the model would cause inaccurate genetic evaluation of lambs.

  2. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  3. Learn-as-you-go acceleration of cosmological parameter estimates

    SciTech Connect

    Aslanyan, Grigor; Easther, Richard; Price, Layne C. E-mail: r.easther@auckland.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  4. Automatic parameter estimation for atmospheric turbulence mitigation techniques

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Kelmelis, Eric

    2015-05-01

    Several image processing techniques for turbulence mitigation have been shown to be effective under a wide range of long-range capture conditions; however, complex, dynamic scenes have often required manual interaction with the algorithm's underlying parameters to achieve optimal results. While this level of interaction is sustainable in some workflows, in-field determination of ideal processing parameters greatly diminishes usefulness for many operators. Additionally, some use cases, such as those that rely on unmanned collection, lack human-in-the-loop usage. To address this shortcoming, we have extended a well-known turbulence mitigation algorithm based on bispectral averaging with a number of techniques to greatly reduce (and often eliminate) the need for operator interaction. Automations were made in the areas of turbulence strength estimation (Fried's parameter), as well as the determination of optimal local averaging windows to balance turbulence mitigation and the preservation of dynamic scene content (non-turbulent motions). These modifications deliver a level of enhancement quality that approaches that of manual interaction, without the need for operator interaction. As a consequence, the range of operational scenarios where this technology is of benefit has been significantly expanded.

  5. Estimating negative binomial parameters from occurrence data with detection times.

    PubMed

    Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub

    2016-11-01

The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that, under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for provided that the mean and variance of misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples.
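For contrast with the occurrence-only setting of the paper: when complete counts are available, both negative binomial parameters are trivially estimable. A standard method-of-moments sketch, using invented toy counts rather than the authors' detection-time estimator, might look like:

```python
def nb_moment_estimates(counts):
    """Method-of-moments estimates of the negative binomial mean m and
    aggregation index k, via k = m^2 / (variance - m)."""
    n = len(counts)
    m = sum(counts) / n
    v = sum((c - m) ** 2 for c in counts) / (n - 1)  # sample variance
    if v <= m:
        raise ValueError("no overdispersion: NB not identifiable by moments")
    k = m * m / (v - m)
    return m, k

# toy quadrat counts (hypothetical data)
counts = [0, 0, 1, 3, 0, 7, 2, 0, 5, 1, 0, 4]
m, k = nb_moment_estimates(counts)
```

With occurrence-only data the count information is lost, which is exactly why the paper augments presence/absence with first detection times.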

  6. Parameter Estimation of Nonlinear Systems by Dynamic Cuckoo Search.

    PubMed

    Liao, Qixiang; Zhou, Shudao; Shi, Hanqing; Shi, Weilai

    2017-04-01

To address the shortcomings of the traditional and improved cuckoo search (CS) algorithms, we propose a dynamic adaptive cuckoo search with crossover operator (DACS-CO) algorithm. Normally, the parameters of the CS algorithm are kept constant or adapted by an empirical equation, which may decrease the efficiency of the algorithm. To solve this problem, a feedback control scheme for the algorithm parameters is adopted in cuckoo search; Rechenberg's 1/5 criterion, combined with a learning strategy, is used to evaluate the evolution process. In addition, the standard CS algorithm has no information exchange between individuals. To promote search progress and overcome premature convergence, a multiple-point random crossover operator is merged into the CS algorithm to exchange information between individuals and improve the diversification and intensification of the population. The performance of the proposed hybrid algorithm is investigated through different nonlinear systems, with the numerical results demonstrating that the method can estimate parameters accurately and efficiently. Finally, we compare the results with the standard CS algorithm, the orthogonal learning cuckoo search algorithm (OLCS), an adaptive and simulated annealing operation with the cuckoo search algorithm (ACS-SA), a genetic algorithm (GA), a particle swarm optimization algorithm (PSO), and a genetic simulated annealing algorithm (GA-SA). Our simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
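A minimal version of the baseline cuckoo search (Lévy flights around the best nest plus random abandonment) applied to a toy parameter-estimation objective is sketched below. This is the plain CS of Yang and Deb, not the DACS-CO variant, and all constants (nest count, step scale, abandonment fraction) are illustrative:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a Levy-distributed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)

def cuckoo_search(f, dim, lo, hi, n_nests=15, iters=300, pa=0.25, alpha=0.01):
    """Baseline cuckoo search: Levy flights around the best nest,
    greedy replacement, and random abandonment of a fraction pa."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    best = min(range(n_nests), key=fit.__getitem__)
    for _ in range(iters):
        for i in range(n_nests):
            # new egg via a Levy flight around the current best nest
            cand = [b + alpha * levy_step() for b in nests[best]]
            fc = f(cand)
            if fc < fit[i]:          # greedy replacement of nest i
                nests[i], fit[i] = cand, fc
        for i in range(n_nests):
            # a fraction pa of the (non-best) nests is abandoned and rebuilt
            if i != best and random.random() < pa:
                nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(nests[i])
        best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

random.seed(1)
# toy "parameter estimation": recover a known parameter vector
target = [0.8, -0.3]
loss = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
x_best, f_best = cuckoo_search(loss, dim=2, lo=-5.0, hi=5.0)
```

The DACS-CO modifications described in the abstract would replace the fixed `pa` and `alpha` with feedback-controlled values and add a crossover step between nests.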

  7. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

    NASA Astrophysics Data System (ADS)

    Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

    2017-01-01

    Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.

  8. Estimation of the poroelastic parameters of cortical bone.

    PubMed

    Smit, Theo H; Huyghe, Jacques M; Cowin, Stephen C

    2002-06-01

Cortical bone has two systems of interconnected channels. The largest of these is the vascular porosity consisting of Haversian and Volkmann's canals, with a diameter of about 50 microm, which contains, among other things, blood vessels and nerves. The smaller is the system consisting of the canaliculi and lacunae: the canaliculi are at the submicron level and house the protrusions of the osteocytes. When bone is differentially loaded, fluids within the solid matrix sustain a pressure gradient that drives a flow. It is generally assumed that the flow of extracellular fluid around osteocytes plays an important role not only in the nutrition of these cells, but also in the bone's mechanosensory system. The interaction between the deformation of the bone matrix and the flow of fluid can be modelled using Biot's theory of poroelasticity. However, due to the inhomogeneity of the bone matrix and the scale of the porosities, it is not possible to experimentally determine all the parameters that are needed for numerical implementation. The purpose of this paper is to derive these parameters using composite modelling and experimental data from literature. A full set of constants is estimated for a linear isotropic description of cortical bone as a two-level porous medium. Bone, however, has a wide variety of mechanical and structural properties; with the theoretical relationships described in this note, poroelastic parameters can be derived for other bone types using their specific experimental data sets.

  9. Parameter Estimation for a Hybrid Adaptive Flight Controller

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje

    2009-01-01

This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper will focus almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally, this paper will present results that, using a full nonlinear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique will be provided along with results from seven well-known modifications to the popular RLS algorithm.
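The core RLS recursion (gain, prediction error, covariance update with a forgetting factor) can be sketched as follows for a two-parameter noiseless plant; the plant coefficients and forgetting factor are illustrative, not taken from the paper:

```python
import random

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least squares step with forgetting factor lam."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [p / denom for p in Pphi]                        # gain vector
    err = y - sum(phi[i] * theta[i] for i in range(n))   # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]    # parameter update
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]                              # covariance update
    return theta, P

random.seed(0)
theta = [0.0, 0.0]                     # initial parameter estimate
P = [[1000.0, 0.0], [0.0, 1000.0]]     # large initial covariance
for _ in range(200):
    phi = [random.uniform(-1, 1), random.uniform(-1, 1)]  # regressors
    y = 2.0 * phi[0] - 1.0 * phi[1]                       # noiseless plant
    theta, P = rls_update(theta, P, phi, y)
```

The forgetting factor discounts old data so the estimator can track slowly varying dynamics; the seven modifications surveyed in the paper address known weaknesses of this basic recursion (e.g., covariance wind-up).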

  10. Excitations for Rapidly Estimating Flight-Control Parameters

    NASA Technical Reports Server (NTRS)

    Moes, Tim; Smith, Mark; Morelli, Gene

    2006-01-01

    A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the

  11. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhoods of each nodes, as well as recursive summaries of neighbors' features.
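The recursive-summary idea can be sketched as follows, using node degree as the only base feature and the neighbor mean as the only aggregate; ReFeX itself uses several local/egonet features and aggregates, plus feature pruning:

```python
def recursive_features(adj, rounds=2):
    """Start from a local feature (degree) and repeatedly append the
    mean of each node's neighbors' current feature vector."""
    feats = {v: [float(len(nbrs))] for v, nbrs in adj.items()}
    for _ in range(rounds):
        width = len(next(iter(feats.values())))
        new = {}
        for v, nbrs in adj.items():
            agg = [sum(feats[u][j] for u in nbrs) / max(len(nbrs), 1)
                   for j in range(width)]   # neighbor-mean of every feature
            new[v] = feats[v] + agg         # append recursive summaries
        feats = new
    return feats

# toy graph: a path 0-1-2-3, as an adjacency-list dict
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
f = recursive_features(adj)
```

Each round doubles the feature width here, mirroring how recursive aggregation lets purely local counts encode progressively larger neighborhoods.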

  12. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, R.J.; Helsel, D.R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores.
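A minimal sketch of that regression-on-order-statistics procedure, assuming a single detection limit and Blom-type plotting positions (the exact plotting-position formula and summary statistics in the paper may differ):

```python
import math
from statistics import NormalDist, mean

def ros_lognormal(detects, n_nondetects):
    """Log-probability regression: fit log concentration vs. normal score
    for the detected values, then impute the censored values from the
    fitted lognormal line (nondetects occupy the lowest ranks)."""
    nd = NormalDist()
    n = len(detects) + n_nondetects
    xs = sorted(detects)
    pos = lambda r: (r - 0.375) / (n + 0.25)   # Blom plotting position
    zs = [nd.inv_cdf(pos(n_nondetects + i + 1)) for i in range(len(xs))]
    ys = [math.log(x) for x in xs]
    # least-squares line y = a + b * z through the detected points
    zbar, ybar = mean(zs), mean(ys)
    b = (sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys))
         / sum((z - zbar) ** 2 for z in zs))
    a = ybar - b * zbar
    # impute the censored observations from the fitted line
    imputed = [math.exp(a + b * nd.inv_cdf(pos(i + 1)))
               for i in range(n_nondetects)]
    return imputed + xs

# hypothetical data: 5 detects above a detection limit of 1.0, 3 nondetects
filled = ros_lognormal(detects=[1.2, 2.0, 3.5, 5.1, 8.0], n_nondetects=3)
est_mean = mean(filled)
```

Summary statistics (mean, standard deviation, quantiles) are then computed from the combined imputed-plus-detected sample.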

  13. Joint data detection and parameter estimation: Fundamental limits and applications to optical fiber communications

    NASA Astrophysics Data System (ADS)

    Coskun, Orhan

For ≥10-Gbit/s bit rates that are transmitted over ≥100 km, it is essential that chromatic dispersion be compensated. The traditional method of sending a training signal to identify a channel, followed by data, may be viewed as a simple code for the unknown channel. Results in blind sequence detection suggest that performance similar to this traditional approach can be obtained without training. However, for short packets and/or time-recursive algorithms, significant error floors exist due to the existence of sequences that are indistinguishable without knowledge of the channel. In this work, we first reconsider training signal design in light of recent results in blind sequence detection. We design training codes which combine modulation and training. In order to design these codes, we find an expression for the pairwise error probability of the joint maximum likelihood (JML) channel and sequence estimator. This expression motivates a pairwise distance for the JML receiver based on principal angles between the range spaces of data matrices. The general code design problem (generalized sphere packing) is formulated as the clique problem associated with an unweighted, undirected graph. We provide optimal and heuristic algorithms for this clique problem. For short packets, we demonstrate that significant improvements are possible by jointly considering the design of the training, modulation, and receiver processing. As a practical blind data detection example, data reception in a fiber optical channel is investigated. To get the most out of the data detection methods, auxiliary algorithms such as sampling-phase adjustment and decision-threshold estimation are suggested. For the parallel implementation of detectors, a semiring structure is introduced for both the decision feedback equalizer (DFE) and maximum likelihood sequence detection (MLSD). Timing jitter is another parameter that affects the BER performance of the system. A data-aided clock recovery algorithm reduces the jitter of

  14. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    NASA Astrophysics Data System (ADS)

    El Gharamti, Mohamad; Valstar, Johan; Hoteit, Ibrahim

    2014-05-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Our results suggest that the proposed scheme allows a reduction of around 80% of the ensemble size as compared to the standard EnKF scheme.

  15. An adaptive hybrid EnKF-OI scheme for efficient state-parameter estimation of reactive contaminant transport models

    NASA Astrophysics Data System (ADS)

    Gharamti, M. E.; Valstar, J.; Hoteit, I.

    2014-09-01

    Reactive contaminant transport models are used by hydrologists to simulate and study the migration and fate of industrial waste in subsurface aquifers. Accurate transport modeling of such waste requires clear understanding of the system's parameters, such as sorption and biodegradation. In this study, we present an efficient sequential data assimilation scheme that computes accurate estimates of aquifer contamination and spatially variable sorption coefficients. This assimilation scheme is based on a hybrid formulation of the ensemble Kalman filter (EnKF) and optimal interpolation (OI) in which solute concentration measurements are assimilated via a recursive dual estimation of sorption coefficients and contaminant state variables. This hybrid EnKF-OI scheme is used to mitigate background covariance limitations due to ensemble under-sampling and neglected model errors. Numerical experiments are conducted with a two-dimensional synthetic aquifer in which cobalt-60, a radioactive contaminant, is leached in a saturated heterogeneous clayey sandstone zone. Assimilation experiments are investigated under different settings and sources of model and observational errors. Simulation results demonstrate that the proposed hybrid EnKF-OI scheme successfully recovers both the contaminant and the sorption rate and reduces their uncertainties. Sensitivity analyses also suggest that the adaptive hybrid scheme remains effective with small ensembles, allowing the ensemble size to be reduced by up to 80% relative to the standard EnKF scheme.
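
The core idea of a hybrid EnKF-OI analysis step can be sketched generically: blend the flow-dependent ensemble sample covariance with a static OI-style background covariance before computing the Kalman gain. The sketch below uses a fixed blending weight `alpha` and a perturbed-observation EnKF update; the paper's scheme tunes the weight adaptively, so this is an illustration of the hybrid-covariance principle, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_enkf_oi_update(ensemble, B_static, H, y, R, alpha=0.5):
    """One analysis step with a hybrid background covariance.

    The sample covariance of a (possibly small) ensemble is blended
    with a static OI-style covariance:
        P = alpha * P_ens + (1 - alpha) * B_static
    which mitigates under-sampling errors in P_ens.
    """
    n, m = ensemble.shape                 # state dim x ensemble size
    P_ens = np.cov(ensemble, bias=False)
    P = alpha * P_ens + (1.0 - alpha) * B_static
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    analysis = np.empty_like(ensemble)
    for j in range(m):                    # perturbed-observation update
        y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R)
        analysis[:, j] = ensemble[:, j] + K @ (y_pert - H @ ensemble[:, j])
    return analysis

# Toy example: 3-state system, 5 members, observing the first component.
ens = rng.normal(size=(3, 5))
B = np.eye(3)
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
y = np.array([1.0])
ana = hybrid_enkf_oi_update(ens, B, H, y, R)
```

With only 5 members the pure sample covariance is rank-deficient; the static term keeps the gain well defined, which is exactly the under-sampling mitigation the abstract describes.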

  16. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
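
The brute-force enumeration baseline mentioned above can be illustrated with a simplified, AR-only version: for each candidate order, fit the model and score it with AIC, then keep the minimiser. This sketch uses conditional least squares and a Gaussian conditional likelihood instead of the paper's exact Kalman-filter likelihood for full ARMA models.

```python
import numpy as np

def fit_ar(x, p):
    """Conditional least-squares fit of an AR(p) model.

    Returns (coefficients, innovation variance, conditional log-likelihood).
    """
    n = len(x)
    # Column k holds lag k+1: x[t-k-1] for t = p .. n-1.
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0)
    return coeffs, sigma2, loglik

def best_ar_order_by_aic(x, max_p=5):
    """Brute-force enumeration of AR orders, ranked by AIC = 2k - 2 log L."""
    best = None
    for p in range(1, max_p + 1):
        coeffs, _, ll = fit_ar(x, p)
        aic = 2 * (p + 1) - 2 * ll     # p coefficients + noise variance
        if best is None or aic < best[0]:
            best = (aic, p, coeffs)
    return best

# Simulated AR(2) series; enumeration should favour a low order.
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
aic, p, coeffs = best_ar_order_by_aic(x)
```

The MINLP solvers in the study search the same (order, coefficient) space, but treat the integer structure and real parameters jointly instead of enumerating.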

  17. Modal parameters estimation in the Z-domain

    NASA Astrophysics Data System (ADS)

    Fasana, Alessandro

    2009-01-01

    This paper aims to explain in a clear, plain and detailed way a modal parameter estimation method in the frequency domain, or similarly in the Z-domain, valid for multi degrees-of-freedom systems. The technique is based on the rational fraction polynomials (RFP) representation of the frequency-response function (FRF) of a single input single output (SISO) system but is simply extended to multi input multi output (MIMO) and output only problems. A least-squares approach is adopted to take into account the information of all the FRFs but, when large data sets are used, the solution of the resulting system of algebraic linear equations can be a long and difficult task. A procedure to drastically reduce the problem dimensions is then adopted and fully explained; some practical hints are also given in order to achieve well-conditioned matrices. The method is validated through numerical and experimental examples.
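
The RFP idea of fitting an FRF by a ratio of polynomials can be sketched in its basic linearized form: fixing the leading denominator coefficient to 1 turns N(s) - H·D(s) = 0 into a linear least-squares problem. This is the plain monomial-basis formulation; as the abstract notes, practical implementations need extra care (e.g. orthogonal polynomial bases) to keep the matrices well conditioned for large data sets.

```python
import numpy as np

def rfp_fit(freqs, H, n_num, n_den):
    """Linearized rational-fraction-polynomial fit of an FRF.

    H(s) ~ N(s)/D(s) with s = j*omega and the leading denominator
    coefficient fixed to 1, so N(s_i) - H_i * D(s_i) = 0 becomes linear
    in the remaining coefficients.
    """
    s = 1j * 2 * np.pi * freqs
    # Unknowns: numerator a_0..a_{n_num}, then denominator b_0..b_{n_den-1}.
    A_num = np.column_stack([s ** k for k in range(n_num + 1)])
    A_den = np.column_stack([-H * s ** k for k in range(n_den)])
    A = np.hstack([A_num, A_den])
    rhs = H * s ** n_den            # the b_{n_den} = 1 term, moved right
    # Stack real and imaginary parts to solve a real least-squares problem.
    A_r = np.vstack([A.real, A.imag])
    rhs_r = np.concatenate([rhs.real, rhs.imag])
    theta, *_ = np.linalg.lstsq(A_r, rhs_r, rcond=None)
    a = theta[:n_num + 1]
    b = np.append(theta[n_num + 1:], 1.0)
    return a, b

# Example: recover a 1-DOF FRF H(s) = 1 / (s^2 + 2*zeta*wn*s + wn^2).
freqs = np.linspace(0.5, 8.0, 200)
wn, zeta = 20.0, 0.05
s_ex = 1j * 2 * np.pi * freqs
H = 1.0 / (s_ex ** 2 + 2 * zeta * wn * s_ex + wn ** 2)
a, b = rfp_fit(freqs, H, n_num=0, n_den=2)   # expect b ~ [wn^2, 2*zeta*wn, 1]
```

Modal frequency and damping follow from the roots of the fitted denominator polynomial.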

  18. Enhancing parameter precision of optimal quantum estimation by quantum screening

    NASA Astrophysics Data System (ADS)

    Jiang, Huang; You-Neng, Guo; Qin, Xie

    2016-02-01

    We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system that inhibits the decoherence processes and resets the excited state to the ground state. Compared with the case without quantum screening, the results show that the quantum Fisher information maintains a larger value during the evolution when screening is applied. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).

  19. Simplified horn antenna parameter estimation using selective criteria

    SciTech Connect

    Ewing, P.D.

    1991-01-01

    An approximation can be used to avoid the complex mathematics and computation methods typically required for calculating the gain and radiation pattern of an electromagnetic horn antenna. Because of the curvature of the antenna wave front, calculations using conventional techniques involve solving the Fresnel integrals and using computer-aided numerical integration. With this model, linear approximations give a reasonable estimate of the gain and radiation pattern using simple trigonometric functions, thereby allowing a hand calculator to replace the computer. Applying selected criteria, the case of the E-plane horn antenna was used to evaluate this technique. Results showed that the gain approximation holds for an antenna flare angle of less than 10° for typical antenna dimensions, and the E-field radiation pattern approximation holds until the antenna's phase error approaches 60°, both within typical design parameters. This technique is a useful engineering tool. 4 refs., 11 figs.

  20. Optimal segmentation of pupillometric images for estimating pupil shape parameters.

    PubMed

    De Santis, A; Iacoviello, D

    2006-12-01

    The problem of determining the pupil morphological parameters from pupillometric data is considered. These characteristics are of great interest for non-invasive early diagnosis of the central nervous system response to environmental stimuli of different nature, in subjects suffering from diseases such as diabetes, Alzheimer disease, schizophrenia, and drug and alcohol addiction. Pupil geometrical features, such as diameter, area, and centroid coordinates, are estimated by a procedure based on an image segmentation algorithm. It exploits the level set formulation of the variational problem related to the segmentation. A discrete setup of this problem that admits a unique optimal solution is proposed: an arbitrary initial curve is evolved towards the optimal segmentation boundary by a difference equation; therefore no numerical approximation schemes are needed, as required in the equivalent continuum formulation usually adopted in the relevant literature.
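
Once a segmentation is available, the geometrical features the abstract lists follow from simple pixel statistics. The sketch below substitutes a crude intensity threshold for the paper's level-set evolution (the segmentation method itself is not reproduced here) and then computes area, centroid and circle-equivalent diameter.

```python
import numpy as np

def pupil_parameters(mask):
    """Area, centroid and equivalent diameter of a segmented pupil.

    `mask` is a boolean image in which True marks pupil pixels; the
    segmentation step (the paper's level-set evolution) is assumed done.
    """
    ys, xs = np.nonzero(mask)
    area = len(xs)                              # pixel count
    centroid = (xs.mean(), ys.mean())           # (x, y) in pixel units
    diameter = 2.0 * np.sqrt(area / np.pi)      # circle-equivalent diameter
    return area, centroid, diameter

# Synthetic image: dark disc (pupil) on a bright background.
h = w = 101
yy, xx = np.mgrid[:h, :w]
img = np.where((xx - 50) ** 2 + (yy - 40) ** 2 <= 15 ** 2, 20, 200)
mask = img < 128                                # crude threshold segmentation
area, (cx, cy), d = pupil_parameters(mask)      # expect centre ~(50, 40), d ~30
```

For real pupillometric frames the thresholding stand-in would fail under reflections and eyelid occlusion, which is precisely why the paper uses an optimal segmentation formulation.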

  1. Virtual parameter-estimation experiments in Bioprocess-Engineering education.

    PubMed

    Sessink, Olivier D T; Beeftink, Hendrik H; Hartog, Rob J M; Tramper, Johannes

    2006-05-01

    Cell growth kinetics and reactor concepts constitute essential knowledge for Bioprocess-Engineering students. Traditional learning of these concepts is supported by lectures, tutorials, and practicals: ICT offers opportunities for improvement. A virtual-experiment environment was developed that supports both model-related and experimenting-related learning objectives. Students have to design experiments to estimate model parameters: they choose initial conditions and 'measure' output variables. The results contain experimental error, which is an important constraint for experimental design. Students learn from these results and use the new knowledge to re-design their experiment. Within a couple of hours, students design and run many experiments that would take weeks in reality. Usage was evaluated in two courses with questionnaires and in the final exam. The faculties involved in the two courses are convinced that the experiment environment supports essential learning objectives well.

  2. Parameter Estimation in Ultrasonic Measurements on Trabecular Bone

    NASA Astrophysics Data System (ADS)

    Marutyan, Karen R.; Anderson, Christian C.; Wear, Keith A.; Holland, Mark R.; Miller, James G.; Bretthorst, G. Larry

    2007-11-01

    Ultrasonic tissue characterization has shown promise for clinical diagnosis of diseased bone (e.g., osteoporosis) by establishing correlations between bone ultrasonic characteristics and the state of disease. Porous (trabecular) bone supports propagation of two compressional modes, a fast wave and a slow wave, each of which is characterized by an approximately linear-with-frequency attenuation coefficient and a phase velocity that increases monotonically with frequency. Only a single wave, however, is generally apparent in the received signals. The ultrasonic parameters that govern propagation of this single wave appear to be causally inconsistent [1]. Specifically, the attenuation coefficient rises approximately linearly with frequency, but the phase velocity exhibits a decrease with frequency. These inconsistent results are obtained when the data are analyzed under the assumption that the received signal is composed of one wave. The inconsistency disappears if the data are analyzed under the assumption that the signal is composed of superposed fast and slow waves. In the current investigation, Bayesian probability theory is applied to estimate the ultrasonic characteristics underlying the propagation of the fast and slow waves from computer simulations. Our motivation is the assumption that identifying the intrinsic material properties of bone will provide more reliable estimates of bone quality and fracture risk than the apparent properties derived by analyzing the data using a one-mode model.

  3. Simultaneous Position, Velocity, Attitude, Angular Rates, and Surface Parameter Estimation Using Astrometric and Photometric Observations

    DTIC Science & Technology

    2013-07-01

    Simultaneous Position, Velocity, Attitude, Angular Rates, and Surface Parameter Estimation Using Astrometric and Photometric Observations. … estimation is extended to include the various surface parameters associated with the bidirectional reflectance distribution function (BRDF) … parameters are estimated simultaneously. Keywords: estimation; data fusion; BRDF. I. INTRODUCTION: Wetterer and Jah [1] first demonstrated how brightness …

  4. Forage quantity estimation from MERIS using band depth parameters

    NASA Astrophysics Data System (ADS)

    Ullah, Saleem; Yali, Si; Schlerf, Martin

    Forage quantity is an important factor influencing the feeding pattern and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for estimation of green biomass using MERIS data. Green biomass was best predicted by NBDI (normalized band depth index), which yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47% of the measured mean), compared to a much lower accuracy obtained by the soil-adjusted vegetation index SAVI (444.6 g/m2, 154% of the mean) and by other vegetation indices. This study will contribute to mapping and monitoring foliar biomass over the year at the regional scale, which in turn can aid the understanding of bird migration patterns. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters. Affiliation: Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, The Netherlands

  5. Smoothing of, and Parameter Estimation from, Noisy Biophysical Recordings

    PubMed Central

    Huys, Quentin J. M.; Paninski, Liam

    2009-01-01

    Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo (“particle filtering”) methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximisation. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing of noisy biophysical data and for inference of biophysical parameters in the face of recording noise. PMID:19424506
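
Sequential Monte Carlo smoothing of a noisy recording can be illustrated on a deliberately simple stand-in for the detailed biophysical model: a linear-Gaussian latent state observed in noise. The bootstrap (SIR) filter below propagates particles through the dynamics, weights them by the observation likelihood, and resamples; the weighted particle mean is the model-based estimate of the underlying signal.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_particle_filter(y, n_particles=500, a=0.95, q=0.1, r=0.5):
    """Bootstrap (SIR) particle filter for the toy model
    x_t = a*x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).

    A linear-Gaussian stand-in for detailed biophysical dynamics; the
    filtered mean acts as a model-based smoothing of the recording.
    """
    T = len(y)
    particles = rng.normal(0.0, 1.0, n_particles)
    means = np.empty(T)
    for t in range(T):
        particles = a * particles + rng.normal(0.0, q, n_particles)  # propagate
        logw = -0.5 * ((y[t] - particles) / r) ** 2                  # likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = w @ particles                                     # filtered mean
        idx = rng.choice(n_particles, n_particles, p=w)              # resample
        particles = particles[idx]
    return means

# Denoise a noisy latent trajectory.
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.95 * x[t - 1] + rng.normal(0, 0.1)
y = x + rng.normal(0, 0.5, T)
xhat = bootstrap_particle_filter(y)
```

In the paper the same machinery is wrapped in expectation-maximisation so that parameters such as channel densities are estimated alongside the smoothed states.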

  6. Tradeoffs among watershed model calibration targets for parameter estimation

    NASA Astrophysics Data System (ADS)

    Price, Katie; Purucker, S. Thomas; Kraemer, Stephen R.; Babendreier, Justin E.

    2012-10-01

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation fit, while modified Nash-Sutcliffe efficiency (MNS) emphasizes lower flows, and the ratio of the simulated to observed standard deviations (RSD) prioritizes flow variability. We investigated tradeoffs of calibrating streamflow on three standard objective functions (NSE, MNS, and RSD), as well as a multiobjective function aggregating these three targets to simultaneously address a range of flow conditions, for calibration of the Soil and Water Assessment Tool (SWAT) daily streamflow simulations in two watersheds. A suite of objective functions was explored to select a minimally redundant set of metrics addressing a range of flow characteristics. After each pass of 2001 simulations, an iterative informal likelihood procedure was used to subset parameter ranges. The ranges from each best-fit simulation set were used for model validation. Values for optimized parameters vary among calibrations using different objective functions, which underscores the importance of linking modeling objectives to calibration target selection. The simulation set approach yielded validated models of similar quality as seen with a single best-fit parameter set, with the added benefit of uncertainty estimations. Our approach represents a novel compromise between equifinality-based approaches and Pareto optimization. Combining the simulation set approach with the multiobjective function was demonstrated to be a practicable and flexible approach for model calibration, which can be readily modified to suit modeling goals, and is not model or location specific.
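
The three calibration targets can be written down directly. The sketch below implements NSE, a log-transformed "modified NSE" (one common low-flow-emphasising variant; the paper's exact MNS formula may differ), and the standard-deviation ratio RSD.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations (peak-weighted)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mns(obs, sim):
    """Modified NSE on log-transformed flows, emphasising low-flow fit.

    The log transform is one common 'modified NSE'; the study's exact
    variant may differ.
    """
    lo, ls = np.log(obs), np.log(sim)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

def rsd(obs, sim):
    """Ratio of simulated to observed standard deviation (flow variability)."""
    return sim.std() / obs.std()

# Synthetic daily flows with multiplicative simulation error.
rng = np.random.default_rng(3)
obs = np.exp(rng.normal(1.0, 0.8, 365))
sim = obs * np.exp(rng.normal(0.0, 0.2, 365))
scores = (nse(obs, sim), mns(obs, sim), rsd(obs, sim))
```

A multiobjective target of the kind the study aggregates could then, for example, combine distance of each score from its ideal value (1 for all three).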

  7. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    SciTech Connect

    Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
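
The parameters compiled above slot into a rate law of the following general shape. The sketch combines Michaelis-Menten substrate dependence, an Arrhenius temperature factor referenced to 20 °C, and a Gaussian (exponential-quadratic) pH modifier; the Gaussian form and all numbers are illustrative assumptions, since the study's exact pH function is not reproduced here.

```python
import numpy as np

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def enzyme_rate(S, Vmax_ref, Km, Ea, T, pH, pH_opt, pH_sen, T_ref=293.15):
    """Michaelis-Menten rate with Arrhenius temperature scaling and an
    exponential-quadratic pH modifier.

    Vmax_ref is referenced to T_ref = 20 C and pH_opt, matching the
    normalisation described in the study; the specific pH function used
    by the authors may differ from this assumed Gaussian form.
    """
    f_T = np.exp(-Ea / R_GAS * (1.0 / T - 1.0 / T_ref))     # Arrhenius factor
    f_pH = np.exp(-0.5 * ((pH - pH_opt) / pH_sen) ** 2)     # pH response
    return Vmax_ref * f_T * f_pH * S / (Km + S)

# At the reference temperature, optimal pH and S = Km, the rate is Vmax/2.
v = enzyme_rate(S=1.0, Vmax_ref=10.0, Km=1.0, Ea=40e3,
                T=293.15, pH=5.0, pH_opt=5.0, pH_sen=1.0)
```

Converting literature Vmax values to a common (T_ref, pHopt) reference, as the study does, amounts to dividing measured rates by the f_T and f_pH factors above.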

  8. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model.

  9. Recursivity in Lingua Cosmica

    NASA Astrophysics Data System (ADS)

    Ollongren, Alexander

    2011-02-01

    In a sequence of papers on the topic of message construction for interstellar communication by means of a cosmic language, the present author has discussed various significant requirements such a lingua should satisfy. The author's Lingua Cosmica is a (meta) system for annotating contents of possibly large-scale messages for ETI. LINCOS, based on formal constructive logic, was primarily designed for dealing with logic contents of messages but is also applicable for denoting structural properties of more general abstractions embedded in such messages. The present paper explains ways and means for achieving this for a special case: recursive entities. As usual two stages are involved: first the domain of discourse is enriched with suitable representations of the entities concerned, after which properties over them can be dealt with within the system itself. As a representative example the case of Russian dolls (Matryoshkas) is discussed in some detail, and relations with linguistic structures in natural languages are briefly explored.

  10. Quantiles, parametric-select density estimation, and bi-information parameter estimators

    NASA Technical Reports Server (NTRS)

    Parzen, E.

    1982-01-01

    A quantile-based approach to statistical analysis and probability modeling of data is presented which formulates statistical inference problems as functional inference problems in which the parameters to be estimated are density functions. Density estimators can be non-parametric (computed independently of model identified) or parametric-select (approximated by finite parametric models that can provide standard models whose fit can be tested). Exponential models and autoregressive models are approximating densities which can be justified as maximum entropy for respectively the entropy of a probability density and the entropy of a quantile density. Applications of these ideas are outlined to the problems of modeling: (1) univariate data; (2) bivariate data and tests for independence; and (3) two samples and likelihood ratios. It is proposed that bi-information estimation of a density function can be developed by analogy to the problem of identification of regression models.

  11. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
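
For context on what the MLE of ability computes, here is a minimal Newton-Raphson sketch for the two-parameter logistic (2PL) IRT model with item parameters treated as known constants; the cited paper's contribution is precisely to quantify the bias introduced when those parameters are themselves estimates measured with error. The item values below are illustrative, not from the paper.

```python
import numpy as np

def mle_ability(responses, a, b, n_iter=20):
    """Newton-Raphson MLE of ability theta under the 2PL IRT model.

    P(correct on item i) = 1 / (1 + exp(-a_i * (theta - b_i))), with
    discrimination a_i and difficulty b_i treated as known.
    """
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        grad = np.sum(a * (responses - p))          # d logL / d theta
        hess = -np.sum(a ** 2 * p * (1 - p))        # d2 logL / d theta2
        theta -= grad / hess
    return theta

# Illustrative 4-item test with a mixed response pattern.
a = np.array([1.0, 1.2, 0.8, 1.5])     # discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])    # difficulties
responses = np.array([1, 1, 0, 0])
theta_hat = mle_ability(responses, a, b)
```

Perturbing `a` and `b` before calling `mle_ability` shows directly how item-parameter uncertainty propagates into the ability estimate, the effect the asymptotic expansions characterise.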

  12. Genetic parameter estimation of reproductive traits of Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Tan, Jian; Kong, Jie; Cao, Baoxiang; Luo, Kun; Liu, Ning; Meng, Xianhong; Xu, Shengyu; Guo, Zhaojia; Chen, Guoliang; Luan, Sheng

    2017-02-01

    In this study, the heritability, repeatability, phenotypic correlation, and genetic correlation of the reproductive and growth traits of L. vannamei were investigated and estimated. Eight traits of 385 shrimps from forty-two families, including the number of eggs (EN), number of nauplii (NN), egg diameter (ED), spawning frequency (SF), spawning success (SS), female body weight (BW) and body length (BL) at insemination, and condition factor (K), were measured. A total of 519 spawning records, including repeated spawnings, and 91 non-spawning records were collected. The genetic parameters were estimated using an animal model, a multinomial logit model (for SF), and a sire-dam and probit model (for SS). Because there were repeated records, permanent environmental effects were included in the models. The heritability estimates for BW, BL, EN, NN, ED, SF, SS, and K were 0.49 ± 0.14, 0.51 ± 0.14, 0.12 ± 0.08, 0, 0.01 ± 0.04, 0.06 ± 0.06, 0.18 ± 0.07, and 0.10 ± 0.06, respectively. The genetic correlation was 0.99 ± 0.01 between BW and BL, 0.90 ± 0.19 between BW and EN, 0.22 ± 0.97 between BW and ED, -0.77 ± 1.14 between EN and ED, and -0.27 ± 0.36 between BW and K. The heritability of EN estimated without a covariate was 0.12 ± 0.08, and the genetic correlation was 0.90 ± 0.19 between BW and EN, indicating that improving BW may be used in selection programs to genetically improve the reproductive output of L. vannamei during breeding. For EN, the data were also analyzed using body weight as a covariate (EN-2). The heritability of EN-2 was 0.03 ± 0.05, indicating that it is difficult to improve the reproductive output by genetic improvement. Furthermore, excessive pursuit of this selection is often at the expense of growth speed. Therefore, the selection of high-performance spawners using BW and SS may be an important strategy to improve nauplii production.

  13. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    , and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of the each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.

  14. Use of Dual-wavelength Radar for Snow Parameter Estimates

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    2005-01-01

    Use of dual-wavelength radar, with properly chosen wavelengths, will significantly lessen the ambiguities in the retrieval of microphysical properties of hydrometeors. In this paper, a dual-wavelength algorithm is described to estimate the characteristic parameters of the snow size distributions. An analysis of the computational results, made at X and Ka bands (T-39 airborne radar) and at S and X bands (CP-2 ground-based radar), indicates that valid estimates of the median volume diameter of snow particles, D₀, should be possible if one of the two wavelengths of the radar operates in the non-Rayleigh scattering region. However, the accuracy may be affected to some extent if the shape factors of the Gamma function used for describing the particle distribution are chosen far from the true values or if cloud water attenuation is significant. To examine the validity and accuracy of the dual-wavelength radar algorithms, the algorithms are applied to the data taken from the Convective and Precipitation-Electrification Experiment (CaPE) in 1991, in which the dual-wavelength airborne radar was coordinated with in situ aircraft particle observations and ground-based radar measurements. Having carefully co-registered the data obtained from the different platforms, the airborne radar-derived size distributions are then compared with the in-situ measurements and ground-based radar. Good agreement is found for these comparisons despite the uncertainties resulting from mismatches of the sample volumes among the different sensors as well as spatial and temporal offsets.

  15. Developing population pharmacokinetic parameters for high-dose methotrexate therapy: implication of correlations among developed parameters for individual parameter estimation using the Bayesian least-squares method.

    PubMed

    Watanabe, Masahiro; Fukuoka, Noriyasu; Takeuchi, Toshiki; Yamaguchi, Kazunori; Motoki, Takahiro; Tanaka, Hiroaki; Kosaka, Shinji; Houchi, Hitoshi

    2014-01-01

    Bayesian estimation enables the individual pharmacokinetic parameters of an administered medication to be estimated using only a few blood concentrations. Due to wide inter-individual variability in the pharmacokinetics of methotrexate (MTX), the concentration of MTX needs to be frequently determined during high-dose MTX therapy in order to prevent toxic adverse events. To apply the benefits of Bayesian estimation to cases treated with this therapy, we attempted to develop an estimation method using the Bayesian least-squares method, which is commonly used for therapeutic monitoring in a clinical setting. Because this method hypothesizes independency among population pharmacokinetic parameters, we focused on correlations among the population pharmacokinetic parameters used to estimate individual parameters. A two-compartment model adequately described the observed concentration of MTX. The individual pharmacokinetic parameters of MTX were estimated in 57 cases using the maximum likelihood method. Among the available parameters accounting for a 2-compartment model, V1, k10, k12, and k21 were found to be the combination showing the weakest correlations, which indicated that this combination was best suited to the Bayesian least-squares method. Using this combination of population pharmacokinetic parameters, Bayesian estimation provided an accurate estimation of individual parameters. In addition, we demonstrated that the degree of correlation among the population pharmacokinetic parameters used in the estimation affected the precision of the estimates. This result highlights the necessity of assessing correlations among the population pharmacokinetic parameters used in the Bayesian least-squares method.
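
The Bayesian least-squares trade-off the abstract describes, fitting sparse concentrations while penalising deviation from the population parameters, can be sketched on a deliberately reduced model. The example below uses a 1-compartment IV-bolus model and a coarse grid search rather than the paper's 2-compartment MTX model; the dose, population means, and variances are illustrative values, not from the study.

```python
import numpy as np

def bayes_ls_objective(V, k, t, c_obs, pop, sd_prop=0.1):
    """Bayesian least-squares objective for a 1-compartment IV-bolus model.

    C(t) = Dose/V * exp(-k*t). The objective is the weighted data misfit
    plus the deviation of the log parameters from the population means,
    the same trade-off made by the Bayesian least-squares method, shown
    here on a simpler model than the paper's 2-compartment one.
    """
    dose, V_pop, k_pop, omega = pop
    c_pred = dose / V * np.exp(-k * t)
    data_term = np.sum(((c_obs - c_pred) / (sd_prop * c_obs)) ** 2)
    prior_term = (np.log(V / V_pop) / omega) ** 2 + (np.log(k / k_pop) / omega) ** 2
    return data_term + prior_term

# Grid-search MAP estimate from only two sampled concentrations.
pop = (100.0, 10.0, 0.2, 0.3)          # dose, V_pop (L), k_pop (1/h), omega
t = np.array([1.0, 6.0])               # sampling times (h)
c_obs = np.array([8.5, 3.1])           # observed concentrations (mg/L)
Vs = np.linspace(5, 20, 61)
ks = np.linspace(0.05, 0.5, 91)
grid = np.array([[bayes_ls_objective(V, k, t, c_obs, pop) for k in ks] for V in Vs])
iV, ik = np.unravel_index(grid.argmin(), grid.shape)
V_hat, k_hat = Vs[iV], ks[ik]
```

The prior term is where the method's independence hypothesis enters: it sums uncorrelated penalties per parameter, which is why the paper seeks the parameter combination with the weakest correlations.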

  16. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    NASA Astrophysics Data System (ADS)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico goes back to the 1960s and the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) that the project of seismic instrumentation assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun their seismic instrumentation program, or it is still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not properly recorded in some cities, such as Puebla and Oaxaca, that were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research we present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft-computing tool using a multi-layer feed-forward architecture, provide good estimations of the target parameters and have a good predictive capacity for strong ground motion duration and response spectra.

  17. Anaerobic biodegradability of fish remains: experimental investigation and parameter estimation.

    PubMed

    Donoso-Bravo, Andres; Bindels, Francoise; Gerin, Patrick A; Vande Wouwer, Alain

    2015-01-01

    The generation of organic waste associated with aquaculture fish processing has increased significantly in recent decades. The objective of this study is to evaluate the anaerobic biodegradability of several fish processing fractions, as well as water treatment sludge, for tilapia and sturgeon species cultured in recirculated aquaculture systems. After substrate characterization, the ultimate biodegradability and the hydrolytic rate were estimated by fitting a first-order kinetic model to the biogas production profiles. In general, the first-order model was able to reproduce the biogas profiles properly with a high correlation coefficient. In the case of tilapia, the skin/fin, viscera, head and flesh presented a high level of biodegradability, above 310 mL CH₄ gCOD⁻¹, whereas the head and bones showed a low hydrolytic rate. For sturgeon, the results for all fractions were quite similar in terms of both parameters, although viscera presented the lowest values. Both the substrate characterization and the kinetic analysis of the anaerobic degradation may be used as design criteria for implementing anaerobic digestion in a recirculating aquaculture system.
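
    The first-order kinetic fit described above can be sketched in Python. The model form B(t) = B0(1 - e^(-kt)), the time grid, and the parameter values are assumptions for demonstration, not the study's data; for a fixed rate k, the ultimate production B0 has a closed-form least-squares solution, so a simple grid search over k suffices:

```python
import math

def biogas(t, B0, k):
    # First-order accumulation model: B(t) = B0 * (1 - exp(-k t))
    return B0 * (1.0 - math.exp(-k * t))

# Synthetic "measured" profile with known parameters (no noise, for clarity).
B0_true, k_true = 320.0, 0.15          # mL CH4 gCOD^-1 and d^-1 (invented)
times = [1, 2, 4, 7, 10, 15, 20, 30]   # days
data = [biogas(t, B0_true, k_true) for t in times]

# Fit: for each trial k, solve B0 in closed form, keep the lowest SSE.
best = None
k = 0.01
while k <= 1.0:
    f = [1.0 - math.exp(-k * t) for t in times]
    B0 = sum(y * fi for y, fi in zip(data, f)) / sum(fi * fi for fi in f)
    sse = sum((y - B0 * fi) ** 2 for y, fi in zip(data, f))
    if best is None or sse < best[0]:
        best = (sse, k, B0)
    k += 0.001
sse, k_hat, B0_hat = best   # recovers k ~ 0.15, B0 ~ 320 on this clean data
```

    With noisy laboratory profiles one would replace the grid search with a proper nonlinear least-squares routine, but the structure of the fit is the same.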

  18. Comparative study on parameter estimation methods for attenuation relationships

    NASA Astrophysics Data System (ADS)

    Sedaghati, Farhad; Pezeshk, Shahram

    2016-12-01

    In this paper, the performance, advantages, and disadvantages of various regression methods used to derive the coefficients of an attenuation relationship are investigated. A database containing 350 records from 85 earthquakes with moment magnitudes of 5-7.6 and Joyner-Boore distances up to 100 km in Europe and the Middle East is considered. The functional form proposed by Ambraseys et al (2005 Bull. Earthq. Eng. 3 1-53) is selected to compare the chosen regression methods. Statistical tests reveal that although the estimated parameters differ for each method, the overall results are very similar. In essence, the weighted least squares method and one-stage maximum likelihood perform better than the other regression methods considered. Moreover, using a blind weighting matrix or a weighting matrix based on the number of records does not improve the performance of the results. Further, to obtain the true standard deviation, pure error analysis is necessary. Assuming that correlation exists between different records of a specific earthquake, one-stage maximum likelihood using the true variance obtained from pure error analysis is the preferred method for computing the coefficients of a ground motion prediction equation.
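
    A hedged sketch of the weighted least squares step for a simple attenuation-type functional form (much simpler than the Ambraseys et al. form; coefficients and records below are synthetic) solves the weighted normal equations directly:

```python
import math

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, rhs)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Synthetic records obeying log10(Y) = a + b*M + c*log10(R); coefficients invented.
a, b, c = -1.0, 0.35, -1.2
records = [(M, R) for M in (5.0, 5.5, 6.0, 6.5, 7.0) for R in (10, 30, 60, 100)]
y = [a + b * M + c * math.log10(R) for M, R in records]
w = [1.0 for _ in records]                      # record weights (uniform here)

# Weighted normal equations: (X^T W X) beta = X^T W y
X = [[1.0, M, math.log10(R)] for M, R in records]
XtWX = [[sum(wi * Xi[r] * Xi[s] for wi, Xi in zip(w, X)) for s in range(3)]
        for r in range(3)]
XtWy = [sum(wi * Xi[r] * yi for wi, Xi, yi in zip(w, X, y)) for r in range(3)]
beta = solve(XtWX, XtWy)   # recovers [a, b, c] on this noise-free data
```

    The choice of the weight vector `w` is exactly where the methods compared in the paper differ; the algebra above is common to all of them.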

  19. Cosmological parameter estimation with large scale structure observations

    SciTech Connect

    Dio, Enea Di; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, C{sub ℓ}(z{sub 1},z{sub 2}), calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard P(k) analysis with the new C{sub ℓ}(z{sub 1},z{sub 2}) method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the P(k) analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, C{sub 0}(z{sub 1},z{sub 2}).

  20. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in the values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples a stochastic atmosphere and a slowly varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of estimated model states in different components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with its chaotic nature is the major source of inaccuracy in the estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for parameters in the air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline for when real observations are used to optimize model parameters in a coupled general circulation model to improve climate analyses and predictions.

  1. A Normalized Direct Approach for Estimating the Parameters of the Normal Ogive Three-Parameter Model for Ability Tests.

    ERIC Educational Resources Information Center

    Gugel, John F.

    A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…

  2. Effect of Adjusting Pseudo-Guessing Parameter Estimates on Test Scaling When Item Parameter Drift Is Present

    ERIC Educational Resources Information Center

    Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K.

    2015-01-01

    In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…

  3. Recursive adaptive frame integration limited

    NASA Astrophysics Data System (ADS)

    Rafailov, Michael K.

    2006-05-01

    Recursive Frame Integration Limited was proposed as a way to improve frame-integration performance and mitigate issues related to the high data rate needed for conventional frame integration. The technique applies two thresholds - one tuned for optimum probability of detection, the other to manage the required false alarm rate - and allows a non-linear integration process that, along with Signal-to-Noise Ratio (SNR) gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability. However, Recursive Frame Integration Limited may have performance issues when single-frame SNR is very low. Recursive Adaptive Frame Integration Limited is proposed as a means to improve limited-integration performance at very low single-frame SNR. It combines the benefits of non-linear recursive limited frame integration and adaptive thresholds with a form of conventional frame integration.
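
    A hypothetical sketch of a two-threshold recursive integration scheme (not the author's exact algorithm) conveys the idea: a low threshold gates which samples contribute to the recursive accumulator, and a high threshold declares detection. The thresholds, decay factor, and per-frame values below are invented:

```python
def integrate(frames, t_low, t_high, decay=0.9):
    """Two-threshold recursive integration for one pixel (illustrative only).
    t_low  gates per-frame contributions (limits false-alarm accumulation);
    t_high is the detection threshold on the recursive accumulator."""
    acc = 0.0
    for sample in frames:
        contribution = sample if sample > t_low else 0.0   # limited integration
        acc = decay * acc + contribution                    # recursive update
        if acc > t_high:
            return True, acc
    return False, acc

# A weak, persistent target: every single frame is below t_high,
# yet the recursive accumulation crosses it after a few frames.
frames = [0.6, 0.7, 0.65, 0.7, 0.68, 0.72]
detected, value = integrate(frames, t_low=0.5, t_high=2.0)
```

    The key design point is that only one accumulator value per pixel is stored, rather than a buffer of past frames, which is what relaxes the data rate and memory requirements.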

  4. Sample size planning for longitudinal models: accuracy in parameter estimation for polynomial change parameters.

    PubMed

    Kelley, Ken; Rausch, Joseph R

    2011-12-01

    Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals underscore the importance of obtaining sufficiently accurate estimates of group differences in change. We derived expressions that allow researchers to plan sample size to achieve the desired confidence interval width for group differences in change for orthogonal polynomial change parameters. The approaches developed ensure that the expected confidence interval width is sufficiently narrow, with an extension that allows some specified degree of assurance (e.g., 99%) that the confidence interval will be sufficiently narrow. We make computer routines freely available so that the methods developed can be used by researchers immediately.
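
    For the simpler textbook case of a difference between two group means with a known common standard deviation (an assumption made here for illustration; the paper treats polynomial change parameters), the sample-size-for-CI-width logic can be sketched as:

```python
import math

def n_for_ci_width(sigma, omega, z=1.96):
    """Smallest per-group n so the expected 95% CI for the difference of two
    group means (common SD sigma) is no wider than omega.
    Width = 2 * z * sigma * sqrt(2/n)  =>  n >= 8 * (z * sigma / omega)**2."""
    return math.ceil(8 * (z * sigma / omega) ** 2)

# Per-group sample size for a CI no wider than half a standard deviation.
n = n_for_ci_width(sigma=1.0, omega=0.5)   # n = 123 per group
```

    The "assurance" extension mentioned in the abstract inflates this n further so that the *realized* (not just expected) width is sufficiently narrow with, say, 99% probability.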

  5. Recursive implementations of temporal filters for image motion computation.

    PubMed

    Clifford, C W; Langley, K

    2000-05-01

    Efficient algorithms for image motion computation are important for computer vision applications and the modelling of biological vision systems. Intensity-based image motion computation proceeds in two stages: the convolution of linear spatiotemporal filter kernels with the image sequence, followed by the non-linear combination of the filter outputs. If the spatiotemporal extent of the filter kernels is large, then the convolution stage can be very intensive computationally. One effective means of reducing the storage required and computation involved in implementing the temporal convolutions is the introduction of recursive filtering. Non-recursive methods require the number of frames of the image sequence stored at any given time to be equal to the temporal extent of the slowest temporal filter. In contrast, recursive methods encode recent stimulus history implicitly in the values of a small number of variables updated through a series of feedback equations. Recursive filtering reduces the number of values stored in memory during convolution and the number of mathematical operations involved in computing the filters' outputs. This paper extends previous recursive implementations of gradient- and correlation-based motion analysis algorithms [Fleet DJ, Langley K (1995) IEEE PAMI 17: 61-67; Clifford CWG, Ibbotson MR, Langley K (1997) Vis Neurosci 14: 741-749], describing a recursive implementation of causal band-pass temporal filters suitable for use in energy- and phase-based algorithms for image motion computation. It is shown that the filters' temporal frequency tuning curves fit psychophysical estimates of the temporal properties of human visual filters.
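
    The storage saving of recursive temporal filtering can be illustrated with a first-order low-pass filter: the recursive form keeps a single state value, yet produces exactly the same output as explicit convolution with a causal exponential kernel. A minimal Python sketch (a first-order example, not the paper's band-pass filters):

```python
import math

def recursive_lowpass(x, tau):
    """First-order recursive (IIR) low-pass: y[n] = a*y[n-1] + (1-a)*x[n],
    with a = exp(-1/tau). Only one state value is stored."""
    a = math.exp(-1.0 / tau)
    y, state = [], 0.0
    for xn in x:
        state = a * state + (1.0 - a) * xn
        y.append(state)
    return y

def kernel_lowpass(x, tau):
    """The same filter by explicit convolution with the exponential kernel
    h[k] = (1-a)*a^k -- this needs the whole stimulus history."""
    a = math.exp(-1.0 / tau)
    return [sum((1.0 - a) * a ** k * x[n - k] for k in range(n + 1))
            for n in range(len(x))]

signal = [0.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0]
y1 = recursive_lowpass(signal, tau=2.0)
y2 = kernel_lowpass(signal, tau=2.0)   # identical output, O(n) more storage
```

    Band-pass temporal filters of the kind used in motion energy models can be built from cascades and differences of such first-order stages, keeping the same constant-memory property.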

  6. Hopf algebras and topological recursion

    NASA Astrophysics Data System (ADS)

    Esteves, João N.

    2015-11-01

    We consider a model for topological recursion based on the Hopf algebra of planar binary trees defined by Loday and Ronco (1998 Adv. Math. 139 293-309). We show that by extending this Hopf algebra by identifying pairs of nearest-neighbor leaves, and thus producing graphs with loops, we obtain the full recursion formula discovered by Eynard and Orantin (2007 Commun. Number Theory Phys. 1 347-452).

  7. Recursive Mahalanobis separability measure for gene subset selection.

    PubMed

    Mao, K Z; Tang, Wenyin

    2011-01-01

    Mahalanobis class separability measure provides an effective evaluation of the discriminative power of a feature subset, and is widely used in feature selection. However, this measure is computationally intensive or even prohibitive when it is applied to gene expression data. In this study, a recursive approach to Mahalanobis measure evaluation is proposed, with the goal of reducing computational overhead. Instead of evaluating Mahalanobis measure directly in high-dimensional space, the recursive approach evaluates the measure through successive evaluations in 2D space. Because of its recursive nature, this approach is extremely efficient when it is combined with a forward search procedure. In addition, it is noted that gene subsets selected by Mahalanobis measure tend to overfit training data and generalize unsatisfactorily on unseen test data, due to small sample size in gene expression problems. To alleviate the overfitting problem, a regularized recursive Mahalanobis measure is proposed in this study, and guidelines on determination of regularization parameters are provided. Experimental studies on five gene expression problems show that the regularized recursive Mahalanobis measure substantially outperforms the nonregularized Mahalanobis measures and the benchmark recursive feature elimination (RFE) algorithm in all five problems.
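
    The underlying two-class Mahalanobis separability measure, evaluated directly for a two-feature subset, can be sketched as follows. The class means and pooled covariance are illustrative values; this direct evaluation (with an explicit matrix inverse) is exactly what becomes prohibitive in high dimensions and what the recursive scheme avoids:

```python
def mahalanobis_separability(mu1, mu2, cov):
    """Two-class Mahalanobis separability for a 2-feature subset:
    (mu1 - mu2)^T cov^-1 (mu1 - mu2), with pooled covariance cov."""
    d = [m1 - m2 for m1, m2 in zip(mu1, mu2)]
    (a, b), (c, e) = cov
    det = a * e - b * c                      # 2x2 inverse in closed form
    inv = [[e / det, -b / det], [-c / det, a / det]]
    return sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))

# Illustrative class means and pooled covariance for a hypothetical 2-gene subset.
sep = mahalanobis_separability([1.0, 2.0], [0.0, 0.0],
                               [[1.0, 0.0], [0.0, 4.0]])   # = 1 + 4/4 = 2
```

    A forward search would evaluate this measure for many candidate subsets; the paper's contribution is to compute each new value from the previous one via 2D evaluations instead of re-inverting a growing covariance matrix.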

  8. Estimation of uranium migration parameters in sandstone aquifers.

    PubMed

    Malov, A I

    2016-03-01

    The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L(-1), and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The (14)C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the (234)U:(238)U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and (234)U:(238)U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a(-1)) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The (14)C age of the water was estimated to be between modern and >35,000 years. The (234)U-(238)U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10(-5) to n × 10(-7) a(-1). This finding is observed because the TDS increases in that direction from 0.2 to 9 g L(-1), and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼ 24, i.e., sorption processes are

  9. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K(sub 1) correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  10. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  11. Multiple-hit parameter estimation in monolithic detectors

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Miyaoka, Robert S.; Lewellen, Tom K.

    2012-01-01

    We examine a maximum-a-posteriori (MAP) method for estimating the primary interaction position of gamma rays with multiple-interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square LSO block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a conventional ML estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator. PMID:23238325

  12. Comparison of Estimation Techniques for the Four Parameter Beta Distribution.

    DTIC Science & Technology

    1981-12-01

    estimators. Mendenhall and Scheaffer define an estimator as "a rule that tells us how to calculate an estimate based on the measurements contained..." Dynamics Laboratory, October 1976. 19. Mendenhall, William and Richard L. Scheaffer. Mathematical Statistics with Applications. North Scituate

  13. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merits of various inverse modelling strategies (MCMC, EnKF, 4DVAR) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.

  14. A Novel Parameter Estimation Method for Boltzmann Machines.

    PubMed

    Takenouchi, Takashi

    2015-11-01

    We propose a novel estimator for a specific class of probabilistic models on discrete spaces such as the Boltzmann machine. The proposed estimator is derived from minimization of a convex risk function and can be constructed without calculating the normalization constant, whose computational cost is exponential order. We investigate statistical properties of the proposed estimator such as consistency and asymptotic normality in the framework of the estimating function. Small experiments show that the proposed estimator can attain comparable performance to the maximum likelihood expectation at a much lower computational cost and is applicable to high-dimensional data.

  15. Alternative parameterizations of the multiple-trait random regression model for milk yield and somatic cell score via recursive links between phenotypes.

    PubMed

    Jamrozik, J; Schaeffer, L R

    2011-08-01

    Multiple-trait random regression models with a recursive phenotypic link from somatic cell score (SCS) to milk yield on the same test day and with different restrictions on covariances between these traits were fitted to the first-lactation Canadian Holstein data. Bayesian methods with Gibbs sampling were used to derive inferences about parameters for all models. The Bayes factor indicated that the recursive model with uncorrelated environmental effects between traits was the most plausible specification for describing the data. Goodness of fit in terms of a within-trait weighted mean square error and the correlation between observed and predicted data was the same for all parameterizations. All recursive models estimated similar negative causal effects from SCS to milk yield (up to -0.4 at 46-115 days in milk in lactation). Estimates of heritabilities and of genetic and environmental correlations for the first two regression coefficients (overall level of a trait and lactation persistency) within both traits were similar among models. Genetic correlations between milk and SCS depended on the restrictions on genetic covariances for these traits. The recursive model with uncorrelated system genetic effects between milk and SCS gave estimates of genetic correlations of the opposite sign compared with a regular multiple-trait model. Phenotypic recursion between milk and SCS seemed, however, to be the only source of environmental correlations between these two traits. Rankings of sires for total milk yield in lactation, average daily SCS and persistency for both traits were similar among models. A multiple-trait model with recursive links between milk and SCS and uncorrelated random environmental effects could be an attractive alternative to a regular multiple-trait model in terms of model parsimony and accuracy.

  16. Multiple-Hit Parameter Estimation in Monolithic Detectors

    PubMed Central

    Barrett, Harrison H.; Lewellen, Tom K.; Miyaoka, Robert S.

    2014-01-01

    We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied. PMID:23193231

  17. Estimating atmospheric parameters and reducing noise for multispectral imaging

    DOEpatents

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
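
    The first-phase correction amounts to inverting the standard at-sensor radiance model for each band. A minimal sketch under the assumption L_obs = t·L_surf + L_path (the radiance and transmittance values below are invented, and the patent's actual estimation of those terms is not reproduced):

```python
def correct_band(observed, radiance, transmittance):
    """Invert the at-sensor model L_obs = t * L_surf + L_path
    to estimate surface radiance for one spectral band."""
    return [(L - radiance) / transmittance for L in observed]

# One band of a toy 3-pixel image, with assumed atmospheric terms for the band.
observed = [12.0, 14.0, 13.0]
surface = correct_band(observed, radiance=2.0, transmittance=0.8)
```

    The second phase described above would then operate on `surface` values across all bands, smoothing out noise in the derived temperature deviations.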

  18. Obtaining and estimating kinetic parameters from the literature.

    PubMed

    Neves, Susana R

    2011-09-13

    This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on strategies for the development of mathematical models. Many biological processes can be represented mathematically as systems of ordinary differential equations (ODEs). Simulations with these mathematical models can provide mechanistic insight into the underlying biology of the system. A prerequisite for running simulations, however, is the identification of kinetic parameters that correspond closely with the biological reality. This lecture presents an overview of the steps required for the development of kinetic ODE models and describes experimental methods that can yield kinetic parameters and concentrations of reactants, which are essential for the development of kinetic models. Strategies are provided to extract necessary parameters from published data. The homework assignment requires students to find parameters appropriate for a well-studied biological regulatory system, convert these parameters into appropriate units, and interpret how different values of these parameters may lead to different biological behaviors.
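
    A minimal example of the kind of kinetic ODE model the lecture builds toward: Euler integration of a single first-order reaction, where the rate constant k is exactly the sort of kinetic parameter one would extract from published data (the values here are chosen arbitrarily for illustration):

```python
def simulate_decay(a0, k, dt, steps):
    """Euler integration of the single-reaction ODE dA/dt = -k*A.
    a0: initial concentration; k: rate constant (1/time); dt: step size."""
    a, trajectory = a0, [a0]
    for _ in range(steps):
        a += dt * (-k * a)      # explicit Euler update
        trajectory.append(a)
    return trajectory

# Simulate 2 time units with k = 0.5; the endpoint approaches exp(-1) ~ 0.368.
traj = simulate_decay(a0=1.0, k=0.5, dt=0.01, steps=200)
```

    Changing k (or the initial concentration) and re-running the simulation is the basic exercise for interpreting how different parameter values lead to different biological behaviors.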

  19. A systematic review of lumped-parameter equivalent circuit models for real-time estimation of lithium-ion battery states

    NASA Astrophysics Data System (ADS)

    Nejad, S.; Gladwin, D. T.; Stone, D. A.

    2016-06-01

    This paper presents a systematic review of the most commonly used lumped-parameter equivalent circuit model structures in lithium-ion battery energy storage applications. These models include the Combined model, Rint model, two hysteresis models, Randles' model, a modified Randles' model and two resistor-capacitor (RC) network models with and without hysteresis included. Two variations of lithium-ion cell chemistry, namely lithium iron phosphate (LiFePO4) and lithium nickel-manganese-cobalt oxide (LiNMC), are used for testing purposes. The model parameters and states are recursively estimated using a nonlinear system identification technique based on the dual extended Kalman filter (dual-EKF) algorithm. The dynamic performance of the model structures is verified using the results obtained from a self-designed pulsed-current test and an electric vehicle (EV) drive cycle based on the New European Drive Cycle (NEDC) profile over a range of operating temperatures. Analyses of the ten model structures are conducted with respect to state-of-charge (SOC) and state-of-power (SOP) estimation with erroneous initial conditions. Comparatively, both RC model structures provide the best dynamic performance, with outstanding SOC estimation accuracy. For cell chemistries with large inherent hysteresis (e.g. LiFePO4), the RC model with only one time constant is combined with a dynamic hysteresis model to further enhance the performance of the SOC estimator.
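
    A much-reduced sketch of the state-estimation half of this idea: a scalar Kalman filter that corrects coulomb-counting SOC drift using a hypothetical linear OCV model. The full dual-EKF runs a second, coupled filter over the model parameters as well; all model forms and numbers below are invented for illustration:

```python
def soc_filter(soc0, currents, voltages, cap_as, dt,
               v0=3.0, alpha=1.0, R=0.05, q=1e-7, r=1e-4):
    """Scalar Kalman filter: coulomb-counting prediction, voltage correction.
    Assumed linear measurement model: V = v0 + alpha*SOC - R*I."""
    soc, P = soc0, 0.01
    for I, V in zip(currents, voltages):
        soc -= I * dt / cap_as                    # predict (discharge I > 0)
        P += q                                    # process-noise inflation
        y = V - (v0 + alpha * soc - R * I)        # innovation
        K = P * alpha / (alpha * alpha * P + r)   # Kalman gain
        soc += K * y                              # correct
        P *= 1 - K * alpha
    return soc

# Noise-free simulated 1 A discharge of a 1 Ah cell, with a deliberately
# wrong initial SOC guess (0.8 instead of 0.9); the filter recovers.
dt, cap = 1.0, 3600.0
true_soc = [0.9 - (n + 1) * dt / cap for n in range(100)]
currents = [1.0] * 100
voltages = [3.0 + s - 0.05 * 1.0 for s in true_soc]
est = soc_filter(0.8, currents, voltages, cap, dt)
```

    Real OCV curves are nonlinear (hence the *extended* Kalman filter in the paper), and the dual arrangement feeds the state estimate into a parallel parameter filter and vice versa.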

  20. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    SciTech Connect

    Meliopoulos, Sakis; Cokkinides, George; Fardanesh, Bruce; Hedrington, Clinton

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, two of which are very important: (a) high-fidelity estimation of generating unit parameters and (b) energy-function-based transient stability monitoring of a wide area electric power system with predictive capability. The dynamic distributed state estimation results are also stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and "play back" of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of playing back at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the "grid visibility" question. The generator parameter identification method fills an important and practical need of the industry. The "energy function" based

  1. Estimating winter wheat phenological parameters: Implications for crop modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  2. Astrophysical Prior Information and Gravitational-wave Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Pankow, Chris; Sampson, Laura; Perri, Leah; Chase, Eve; Coughlin, Scott; Zevin, Michael; Kalogera, Vassiliki

    2017-01-01

    The detection of electromagnetic counterparts to gravitational waves (GWs) has great promise for the investigation of many scientific questions. While it is well known that certain orientation parameters can reduce uncertainty in other related parameters, it was also hoped that the detection of an electromagnetic signal in conjunction with a GW could augment the measurement precision of the mass and spin from the gravitational signal itself. That is, knowledge of the sky location, inclination, and redshift of a binary could break degeneracies between these extrinsic, coordinate-dependent parameters and the physical parameters that are intrinsic to the binary. In this paper, we investigate this issue by assuming perfect knowledge of extrinsic parameters, and assessing the maximal impact of this knowledge on our ability to extract intrinsic parameters. We recover similar gains in extrinsic recovery to earlier work; however, we find only modest improvements in a few intrinsic parameters—namely the primary component’s spin. We thus conclude that, even in the best case, the use of additional information from electromagnetic observations does not improve the measurement of the intrinsic parameters significantly.

  3. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
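The component-by-component idea above can be illustrated on the Rössler system itself: since the time series of all components are assumed known, the parameter in a single equation (here a, from dy/dt = x + a·y) can be fit by least squares against finite-difference derivatives. A minimal sketch, not the paper's EA scheme; the integration step, run length and the least-squares shortcut are my own choices:

```python
import numpy as np

def simulate_rossler(a=0.2, b=0.2, c=5.7, dt=0.001, n=200000):
    # Forward-Euler integration of the Rossler system.
    x, y, z = 1.0, 1.0, 1.0
    traj = np.empty((n, 3))
    for i in range(n):
        traj[i] = (x, y, z)
        dx, dy, dz = -y - z, x + a * y, b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return traj

def estimate_a(traj, dt):
    # dy/dt = x + a*y, so a is the least-squares fit of (dy/dt - x)
    # against y, using centred finite differences for dy/dt.
    x, y = traj[:, 0], traj[:, 1]
    dydt = (y[2:] - y[:-2]) / (2 * dt)
    resid = dydt - x[1:-1]
    return float(np.dot(y[1:-1], resid) / np.dot(y[1:-1], y[1:-1]))

traj = simulate_rossler()
a_hat = estimate_a(traj, 0.001)
```

The same one-equation-at-a-time fit applies to b and c through the third equation, which is the stage-by-stage pattern the abstract describes.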

  4. The estimation of time-invariant parameters of noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Khalil, Mohammad; Sarkar, Abhijit; Adhikari, Sondipon; Poirel, Dominique

    2015-05-01

    The inverse problem of estimating time-invariant (static) parameters of a nonlinear system exhibiting noisy oscillation is considered in this paper. Firstly, a Markov Chain Monte Carlo (MCMC) simulation is used for the time-invariant parameter estimation which exploits a non-Gaussian filter, namely the Ensemble Kalman Filter (EnKF) for state estimation required to compute the likelihood function. Secondly, a recently proposed Particle Filter (PF) (that uses the EnKF for its proposal density for the state estimation) has been adapted for combined state and parameter estimation. Numerical illustrations highlight the strengths and limitations of the MCMC, EnKF and PF algorithms for time-invariant parameter estimation. For low measurement noise and dense measurement data, the performances of the MCMC, EnKF and PF algorithms are comparable. For high measurement noise and sparse observational data, the EnKF fails to provide accurate parameter estimates. Hence the adapted PF algorithm becomes necessary in order to obtain parameter estimates comparable in accuracy to the MCMC simulation with EnKF. It highlights the fact that the augmented state space model for the combined state and parameter estimation contains stronger nonlinearity than the original state space model. Hence the EnKF effectively handles the state estimation of the original state space model, but it fails for the combined state and parameter estimation using the augmented system. The effectiveness of the EnKF for the state estimation is therefore leveraged in the MCMC simulation for the time-invariant parameter estimation. In order to obtain accurate parameter estimates using the augmented system, the adapted PF becomes necessary to match the parameter estimates obtained using the MCMC simulation complemented by EnKF for likelihood function computation.
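A toy version of the augmented-state idea discussed above: the unknown stiffness k of a noisily observed oscillator is appended to the state vector and estimated with a stochastic ensemble Kalman filter. This is only a sketch under assumed dynamics and noise levels, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, k_true, n_steps, n_ens = 0.01, 4.0, 2000, 200

# Truth run: oscillator x'' = -k*x, position observed with noise.
x, v = 1.0, 0.0
obs = []
for _ in range(n_steps):
    x, v = x + dt * v, v - dt * k_true * x
    obs.append(x + 0.05 * rng.standard_normal())

# Augmented ensemble [x, v, k]; k is carried along as a constant.
ens = np.column_stack([
    rng.normal(1.0, 0.5, n_ens),
    rng.normal(0.0, 0.5, n_ens),
    rng.normal(3.0, 1.5, n_ens),   # prior guess for k
])
R = 0.05 ** 2
for y in obs:
    # Forecast: propagate each member with its own k.
    ens[:, 0], ens[:, 1] = (ens[:, 0] + dt * ens[:, 1],
                            ens[:, 1] - dt * ens[:, 2] * ens[:, 0])
    # Analysis: Kalman update from ensemble covariances, H = [1, 0, 0].
    anomalies = ens - ens.mean(axis=0)
    P_xy = anomalies[:, 0] @ anomalies / (n_ens - 1)   # cov(obs, state)
    gain = P_xy / (P_xy[0] + R)
    perturbed = y + np.sqrt(R) * rng.standard_normal(n_ens)
    ens += np.outer(perturbed - ens[:, 0], gain)

k_est = ens[:, 2].mean()
```

Because k only interacts with the data through the sample correlations built up during the forecast step, the augmented system is more strongly nonlinear than the state-only problem, which is the difficulty the abstract highlights.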

  5. Comparison of Forest Parameter Estimation Techniques Using SAR Data

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Zyl, J. van

    2001-01-01

    It is important to monitor forests in order to understand the impacts of global climate changes on terrestrial ecosystems. To characterize the forest changes, it is useful to parameterize a forest using several parameters.

  6. Estimation of beech pyrolysis kinetic parameters by Shuffled Complex Evolution.

    PubMed

    Ding, Yanming; Wang, Changjian; Chaos, Marcos; Chen, Ruiyu; Lu, Shouxiang

    2016-01-01

The pyrolysis kinetics of a typical biomass energy feedstock, beech, was investigated based on thermogravimetric analysis over a wide heating rate range from 5 K/min to 80 K/min. A three-component (corresponding to hemicellulose, cellulose and lignin) parallel decomposition reaction scheme was applied to describe the experimental data. The resulting kinetic reaction model was coupled to an evolutionary optimization algorithm (Shuffled Complex Evolution, SCE) to obtain model parameters. To the authors' knowledge, this is the first study in which SCE has been used in the context of thermogravimetry. The kinetic parameters were simultaneously optimized against data for 10, 20 and 60 K/min heating rates, providing excellent fits to the experimental data. Furthermore, it was shown that the optimized parameters were applicable to heating rates (5 and 80 K/min) beyond those used to generate them. Finally, the predicted results based on the optimized parameters were contrasted with those based on the literature.

  7. State and parameter estimation for canonic models of neural oscillators.

    PubMed

    Tyukin, Ivan; Steur, Erik; Nijmeijer, Henk; Fairhurst, David; Song, Inseon; Semyanov, Alexey; Van Leeuwen, Cees

    2010-06-01

    We consider the problem of how to recover the state and parameter values of typical model neurons, such as Hindmarsh-Rose, FitzHugh-Nagumo, Morris-Lecar, from in-vitro measurements of membrane potentials. In control theory, in terms of observer design, model neurons qualify as locally observable. However, unlike most models traditionally addressed in control theory, no parameter-independent diffeomorphism exists, such that the original model equations can be transformed into adaptive canonic observer form. For a large class of model neurons, however, state and parameter reconstruction is possible nevertheless. We propose a method which, subject to mild conditions on the richness of the measured signal, allows model parameters and state variables to be reconstructed up to an equivalence class.

  8. Force Field Parameter Estimation of Functional Perfluoropolyether Lubricants

    SciTech Connect

    Smith, R.; Chung, P.S.; Steckel, J; Jhon, M.S.; Biegler, L.T.

    2011-01-01

    The head disk interface in a hard disk drive can be considered to be one of the hierarchical multiscale systems, which require the hybridization of multiscale modeling methods with coarse-graining procedure. However, the fundamental force field parameters are required to enable the coarse-graining procedure from atomistic/molecular scale to mesoscale models. In this paper, we investigate beyond molecular level and perform ab initio calculations to obtain the force field parameters. Intramolecular force field parameters for Zdol and Ztetraol were evaluated with truncated PFPE molecules to allow for feasible quantum calculations while still maintaining the characteristic chemical structure of the end groups. Using the harmonic approximation to the bond and angle potentials, the parameters were derived from the Hessian matrix, and the dihedral force constants are fit to the torsional energy profiles generated by a series of constrained molecular geometry optimization.

  9. Force Field Parameter Estimation of Functional Perfluoropolyether Lubricants

    SciTech Connect

    Smith, R; Chung, P S; Steckel, J A; Jhon, M S; Biegler, L T

    2011-01-01

The head disk interface in a hard disk drive can be considered one of the hierarchical multiscale systems, which require the hybridization of multiscale modeling methods with a coarse-graining procedure. However, the fundamental force field parameters are required to enable the coarse-graining procedure from atomistic/molecular scale to mesoscale models. In this paper, we investigate beyond the molecular level and perform ab initio calculations to obtain the force field parameters. Intramolecular force field parameters for Zdol and Ztetraol were evaluated with truncated PFPE molecules to allow for feasible quantum calculations while still maintaining the characteristic chemical structure of the end groups. Using the harmonic approximation to the bond and angle potentials, the parameters were derived from the Hessian matrix, and the dihedral force constants are fit to the torsional energy profiles generated by a series of constrained molecular geometry optimization.

  10. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  11. The Problem of Bias in Person Parameter Estimation in Adaptive Testing

    ERIC Educational Resources Information Center

    Doebler, Anna

    2012-01-01

    It is shown that deviations of estimated from true values of item difficulty parameters, caused for example by item calibration errors, the neglect of randomness of item difficulty parameters, testlet effects, or rule-based item generation, can lead to systematic bias in point estimation of person parameters in the context of adaptive testing.…

  12. Potential Improvements for HEC-HMS Automated Parameter Estimation

    DTIC Science & Technology

    2006-08-01

dendritic watershed system subject to meteorological forcing. The graphical user interface enables seamless movement between the database, model...multiple dimensions. It is sometimes referred to as an "amoeba" method because it works by setting up rules that allow a cloud of points in parameter...parameter movement, and/or increasing changes in model outputs on account of this movement). An additional factor that contributes to the success

  13. Evaluation of Personnel Parameters in Software Cost Estimating Models

    DTIC Science & Technology

    2007-11-02

ACAP, 1.42; all other parameters would be set to the nominal value of one. The effort multiplier will be a fixed value if the model uses linear...data. The calculated multiplier values were the...Table 8. COSTAR Trials for Multiplier Calculation: Run, ACAP, PCAP, PCON, APEX, PLEX, LTEX, Effort...impact. Table 9. COCOMO II Personnel Parameters, Effort Multipliers: Analyst Capability (ACAP), Lowest 1.42, Nominal 1.00, Highest 0.71

  14. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast number of events, due to model parameters being kept fixed during the test. Moreover, the absence from the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.

  15. Estimating Building Simulation Parameters via Bayesian Structure Learning

    SciTech Connect

    Edwards, Richard E; New, Joshua Ryan; Parker, Lynne Edwards

    2013-01-01

Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have several major problems. The two main issues are 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real-world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating the simulation engine costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 gigabytes of E+ simulation data; future extensions will use 200 terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with ground-truth building parameters. Our results indicate that the model accurately represents parameter means, with some deviation from the means, but does not support inferring parameter values that lie on the distribution's tail.

  16. Stellar atmospheric parameter estimation using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
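Gaussian process regression itself is compact enough to sketch: with an RBF kernel, the predictive mean is a linear solve against the kernel matrix. A minimal numpy version, with hyperparameters fixed rather than optimized as a full GPR pipeline would do:

```python
import numpy as np

def gpr_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    # Gaussian process regression predictive mean with an RBF kernel.
    def rbf(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d2 / length_scale**2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)       # weights on training targets
    return rbf(X_test, X_train) @ alpha

# Toy usage: recover a smooth 1-D function from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
pred = gpr_predict(X, y, np.array([[0.5]]))
```

In practice the spectra would first be projected onto principal components, as the abstract recommends, and the inputs here would be those PCA coefficients rather than raw wavelengths.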

  17. Stochastic Wireless Channel Modeling, Estimation and Identification from Measurements

    SciTech Connect

    Olama, Mohammed M; Djouadi, Seddik M; Li, Yanyan

    2008-07-01

    This paper is concerned with stochastic modeling of wireless fading channels, parameter estimation, and system identification from measurement data. Wireless channels are represented by stochastic state-space form, whose parameters and state variables are estimated using the expectation maximization algorithm and Kalman filtering, respectively. The latter are carried out solely from received signal measurements. These algorithms estimate the channel inphase and quadrature components and identify the channel parameters recursively. The proposed algorithm is tested using measurement data, and the results are presented.
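The recursive structure mentioned in the abstract can be illustrated with the simplest case: a scalar Kalman filter tracking an AR(1) channel gain from noisy measurements. A sketch with assumed model coefficients; the paper additionally estimates those coefficients themselves via expectation maximization:

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r, n = 0.99, 0.01, 0.1, 500   # AR(1) coeff, process and measurement noise

# Simulate a slowly varying channel gain and noisy observations of it.
h = np.zeros(n)
for t in range(1, n):
    h[t] = a * h[t - 1] + np.sqrt(q) * rng.standard_normal()
z = h + np.sqrt(r) * rng.standard_normal(n)

# Scalar Kalman filter: recursive estimate of h[t] from z[0..t].
h_hat, P = 0.0, 1.0
est = np.empty(n)
for t in range(n):
    # Predict step
    h_hat, P = a * h_hat, a * a * P + q
    # Update step
    K = P / (P + r)
    h_hat += K * (z[t] - h_hat)
    P *= (1 - K)
    est[t] = h_hat

rmse_filter = np.sqrt(np.mean((est - h) ** 2))
rmse_raw = np.sqrt(np.mean((z - h) ** 2))
```

The filtered track should beat the raw measurements; in the full algorithm this filter runs once per EM iteration while the parameters (a, q, r here) are re-estimated.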

  18. Phase noise effects on turbulent weather radar spectrum parameter estimation

    NASA Technical Reports Server (NTRS)

    Lee, Jonggil; Baxa, Ernest G., Jr.

    1990-01-01

Accurate weather spectrum moment estimation is important in the use of weather radar for hazardous windshear detection. The effect of stable local oscillator (STALO) instability (jitter) on the spectrum moment estimation algorithm is investigated. Uncertainty in the STALO will affect both the transmitted signal and the received signal since the STALO provides the transmitted and reference carriers. The proposed approach models STALO phase jitter as it affects the complex autocorrelation of the radar return. The results can therefore be interpreted in terms of any source of system phase jitter for which the model is appropriate and, in particular, may be considered as a cumulative effect of all radar system sources.
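The pulse-pair principle underlying such spectrum moment estimators is easy to sketch: the mean Doppler frequency is the argument of the lag-one autocorrelation, and independent phase jitter shrinks that autocorrelation's magnitude (which biases spectrum-width estimates) while leaving its argument unbiased. A toy illustration with assumed signal and jitter levels, not the paper's radar model:

```python
import numpy as np

rng = np.random.default_rng(3)
n, f_norm, jitter_std = 4096, 0.05, 0.3   # samples, normalized Doppler, rad

# Complex return with a Doppler shift plus receiver noise; then add
# per-pulse STALO phase jitter.
t = np.arange(n)
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
clean = np.exp(2j * np.pi * f_norm * t) + noise
jittered = clean * np.exp(1j * jitter_std * rng.standard_normal(n))

def lag1(x):
    # Lag-one complex autocorrelation estimate.
    return np.mean(x[1:] * np.conj(x[:-1]))

r_clean, r_jit = lag1(clean), lag1(jittered)
f_clean = np.angle(r_clean) / (2 * np.pi)   # pulse-pair mean frequency
f_jit = np.angle(r_jit) / (2 * np.pi)
mag_clean, mag_jit = abs(r_clean), abs(r_jit)
```

Both frequency estimates stay near the true 0.05, but the jittered autocorrelation magnitude is reduced by roughly exp(-sigma^2), the decorrelation effect the paper models.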

  19. Estimation of Stiffness Parameter on the Common Carotid Artery

    NASA Astrophysics Data System (ADS)

    Koya, Yoshiharu; Mizoshiri, Isao; Matsui, Kiyoaki; Nakamura, Takashi

Arteriosclerosis is on the increase with the aging of the population and changes in our living environment. For that reason, diagnosis of the common carotid artery using echocardiograms is performed as a precaution against cerebrovascular disease. Up to the present, several methods to measure the stiffness parameter of the carotid artery have been proposed. However, they analyze only a single point of the common carotid artery. In this paper, we propose a method of analysis that extends over a wide area of the common carotid artery. In order to measure the stiffness parameter of the common carotid artery from an echocardiogram, it is required to detect two border curves which are the boundaries between the vessel wall and the blood. The method is composed of two steps. The first step is the detection of the border curves, and the second step is the calculation of the stiffness parameter using the diameter of the common carotid artery. Experimental results show the validity of the proposed method.
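The stiffness parameter in question is commonly defined as β = ln(Ps/Pd) / ((Ds − Dd)/Dd), from the systolic/diastolic blood pressures and the corresponding vessel diameters; the paper's contribution is measuring the diameters along a wide stretch of the vessel rather than at one point. A sketch with hypothetical values:

```python
import math

def stiffness_beta(p_sys, p_dia, d_sys, d_dia):
    # Stiffness parameter: beta = ln(Ps/Pd) / ((Ds - Dd)/Dd).
    return math.log(p_sys / p_dia) / ((d_sys - d_dia) / d_dia)

# Hypothetical values: pressures in mmHg, diameters in mm.
beta = stiffness_beta(120.0, 80.0, 7.0, 6.5)
```

Evaluating β at each position along the detected border curves would give the wide-area stiffness profile the paper describes.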

  20. Bayesian methods for parameter estimation in effective field theories

    SciTech Connect

Schindler, M.R.; Phillips, D.R.

    2009-03-15

We demonstrate and explicate Bayesian methods for fitting the parameters that encode the impact of short-distance physics on observables in effective field theories (EFTs). We use Bayes' theorem together with the principle of maximum entropy to account for the prior information that these parameters should be natural, i.e., O(1) in appropriate units. Marginalization can then be employed to integrate the resulting probability density function (pdf) over the EFT parameters that are not of specific interest in the fit. We also explore marginalization over the order of the EFT calculation, M, and over the variable, R, that encodes the inherent ambiguity in the notion that these parameters are O(1). This results in a very general formula for the pdf of the EFT parameters of interest given a data set, D. We use this formula and the simpler 'augmented χ²' in a toy problem for which we generate pseudo-data. These Bayesian methods, when used in combination with the 'naturalness prior', facilitate reliable extractions of EFT parameters in cases where χ² methods are ambiguous at best. We also examine the problem of extracting the nucleon mass in the chiral limit, M₀, and the nucleon sigma term, from pseudo-data on the nucleon mass as a function of the pion mass. We find that Bayesian techniques can provide reliable information on M₀, even if some of the data points used for the extraction lie outside the region of applicability of the EFT.
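For a linear model, an augmented chi-squared of the kind described above reduces to ridge-style least squares: a naturalness prior of width R adds a penalty a_i²/R² for each coefficient. A sketch on toy pseudo-data; the model and numbers are illustrative, not the paper's EFT:

```python
import numpy as np

def augmented_chi2_fit(A, y, sigma, R=1.0):
    # Minimize chi^2 + sum_i (a_i / R)^2: weighted least squares with a
    # Gaussian "naturalness" prior of width R on each coefficient.
    Aw, yw = A / sigma, y / sigma
    lhs = Aw.T @ Aw + np.eye(A.shape[1]) / R**2
    return np.linalg.solve(lhs, Aw.T @ yw)

# Toy fit: y = a0 + a1*x + a2*x^2 with natural-sized (O(1)) coefficients.
rng = np.random.default_rng(4)
x = np.linspace(0, 0.5, 20)
truth = np.array([0.8, -0.6, 1.2])
A = np.vander(x, 3, increasing=True)
y = A @ truth + 0.01 * rng.standard_normal(x.size)
a_hat = augmented_chi2_fit(A, y, sigma=0.01)
```

When the data constrain the coefficients well the prior barely matters, but in degenerate fits it keeps the extracted parameters from running to unnaturally large values, which is the point of the method.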

  1. Proper estimation of hydrological parameters from flood forecasting aspects

    NASA Astrophysics Data System (ADS)

    Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Yanami, Hitoshi; Anai, Hirokazu

    2016-04-01

The hydrological parameters of a flood forecasting model are normally calibrated against entire hydrographs of past flood events by means of an error assessment function such as mean square error or relative error. However, specific parts of a hydrograph, i.e., the maximum discharge and the rising limb, are particularly important for practical flood forecasting, in the sense that underestimation may lead to a more dangerous situation due to delays in flood prevention and evacuation activities. We conducted numerical experiments to find the most proper parameter set for practical flood forecasting without underestimation, in order to develop an error assessment method for calibration appropriate for flood forecasting. A distributed hydrological model developed at the Public Works Research Institute (PWRI) in Japan was applied to fifteen past floods in the Gokase River basin of 1,820 km² in Japan. The model, with gridded two-layer tanks for the entire target river basin, included hydrological parameters such as hydraulic conductivity, surface roughness and runoff coefficient, which were set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and the Digital Soil Map of the World (DSMW), were employed as input data for elevation, land use and soil type. The values of fourteen parameters were evenly sampled with 10,001 patterns of parameter sets determined by Latin Hypercube Sampling within the search range of each parameter. Although the best reproduced case showed a high Nash-Sutcliffe Efficiency of 0.9 for all flood events, the maximum discharge was underestimated in many flood cases. Therefore, two conditions, non-underestimation of the maximum discharge and of the rising limb of the hydrograph, were added in calibration as flood forecasting aptitudes. The cases with non-underestimation of the maximum discharge and rising limb of the hydrograph also showed a high Nash-Sutcliffe Efficiency of 0.9 except in two flood cases
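The screening step described above is easy to sketch: among candidate parameter sets, keep only those whose simulated hydrograph does not underestimate the observed peak, then rank the survivors by Nash-Sutcliffe Efficiency. Toy numbers, not the Gokase River data:

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def flood_safe(obs, sim, peak_tol=0.0):
    # Screening rule sketched from the abstract: accept a parameter set
    # only if its simulated peak does not underestimate the observed one.
    return max(sim) >= max(obs) - peak_tol

obs = [10, 40, 120, 300, 220, 90, 30]
sim_a = [12, 45, 110, 310, 200, 95, 28]   # slight peak overestimate
sim_b = [11, 38, 115, 260, 210, 88, 29]   # underestimates the peak

# Best flood-safe candidate by NSE.
best = max(
    (s for s in (sim_a, sim_b) if flood_safe(obs, s)),
    key=lambda s: nse(obs, s),
)
```

Here sim_b has a respectable NSE but is rejected for underestimating the peak, mirroring the paper's finding that a high NSE alone does not guarantee a forecast-safe parameter set.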

  2. Earth-Moon system: Dynamics and parameter estimation

    NASA Technical Reports Server (NTRS)

    Breedlove, W. J., Jr.

    1979-01-01

The following topics are discussed: (1) the Unified Model of Lunar Translation/Rotation (UMLTR); (2) the effect of figure-figure interactions on lunar physical librations; (3) the effect of translational-rotational coupling on the lunar orbit; and (4) an error analysis for estimating lunar inertias from LURE (Lunar Laser Ranging Experiment) data.

  3. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

    EPA Science Inventory

Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

  4. Estimating the Parameters of the Beta-Binomial Distribution.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1979-01-01

For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissible…
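For context, a common starting point for Newton-Raphson iterations toward the beta-binomial MLE is the method-of-moments estimate, using Var = n·p·(1−p)·(1 + (n−1)ρ) with ρ = 1/(α+β+1). A sketch on simulated scores (the simulation setup is my own, not from the paper):

```python
import numpy as np

def beta_binomial_mom(x, n):
    # Method-of-moments estimates for Beta-Binomial(n, alpha, beta),
    # often used to seed a Newton-Raphson refinement of the MLE.
    x = np.asarray(x, float)
    p = x.mean() / n
    # Var = n p (1-p) (1 + (n-1) rho), with rho = 1/(alpha+beta+1).
    rho = (x.var() / (n * p * (1 - p)) - 1) / (n - 1)
    m = 1.0 / rho - 1.0            # alpha + beta
    return p * m, (1 - p) * m

# Simulate scores: per-examinee success probability drawn from Beta(2, 3).
rng = np.random.default_rng(5)
n = 20
x = rng.binomial(n, rng.beta(2.0, 3.0, 20000))
alpha_hat, beta_hat = beta_binomial_mom(x, n)
```

These closed-form estimates are typically close enough to the MLE that one or two Newton-Raphson steps on the log-likelihood suffice, and they make inadmissible iterates easy to detect.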

  5. A Simplified Estimation of Latent State--Trait Parameters

    ERIC Educational Resources Information Center

    Hagemann, Dirk; Meyerhoff, David

    2008-01-01

    The latent state-trait (LST) theory is an extension of the classical test theory that allows one to decompose a test score into a true trait, a true state residual, and an error component. For practical applications, the variances of these latent variables may be estimated with standard methods of structural equation modeling (SEM). These…

  6. Estimation of Item Parameters and the GEM Algorithm.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.

    The models and procedures discussed in this paper are related to those presented in Bock and Aitkin (1981), where they considered the 2-parameter probit model and approximated a normally distributed prior distribution of abilities by a finite and discrete distribution. One purpose of this paper is to clarify the nature of the general EM (GEM)…

  7. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  8. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…

  9. Improving Estimates Of Phase Parameters When Amplitude Fluctuates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1989-01-01

    Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.

  10. Estimation of Launch Vehicle Performance Parameters from an Orbiting Sensor

    DTIC Science & Technology

    1981-12-01

This document has been approved for public release. ESTIMATION OF LAUNCH VEHICLE...Eight-State Filter Model...Derivation of Equations...Filter Checkout and Performance...Appendix B: Derivation of H Matrix...Appendix C: Description of Preliminary Truth Model for Data Generation

  11. Modified cross-validation as a method for estimating parameter

    NASA Astrophysics Data System (ADS)

    Shi, Chye Rou; Adnan, Robiah

    2014-12-01

Best subsets regression is an effective approach for identifying models that attain the objectives with as few predictors as is prudent. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The question of how to pick the subset size λ depends on the bias and variance. There are various methods to pick the subset size λ; a common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Since data sets are often small, repeated K-fold cross-validation is the most broadly utilized method to estimate prediction error and select the model; the data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of the repeated K-fold cross-validation method always picks the most parsimonious model. The objective of this research is to modify the existing cross-validation method to avoid overfitting and underfitting models; a modified cross-validation method is proposed. This paper compares the existing cross-validation and the modified cross-validation. Our results show that the modified cross-validation method is better at submodel selection and evaluation than the other methods.
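The "one-standard-error" rule discussed above is simple to state in code: pick the smallest model whose mean CV error is within one standard error of the minimum. A sketch with hypothetical fold errors, not the paper's data:

```python
import numpy as np

def one_standard_error_rule(cv_errors):
    # cv_errors: shape (n_models, K) of per-fold CV errors, with models
    # ordered from fewest to most predictors. Return the index of the
    # smallest model whose mean error is within one standard error of
    # the best model's mean error.
    cv_errors = np.asarray(cv_errors, float)
    means = cv_errors.mean(axis=1)
    k = cv_errors.shape[1]
    best = means.argmin()
    se_best = cv_errors[best].std(ddof=1) / np.sqrt(k)
    candidates = np.flatnonzero(means <= means[best] + se_best)
    return int(candidates[0])   # smallest qualifying model

# Hypothetical CV table: model 1 is nearly as good as the best model 2.
errors = np.array([
    [5.0, 5.2, 4.8, 5.1, 4.9],     # model 0
    [3.1, 3.0, 3.1, 2.9, 3.15],    # model 1
    [3.0, 2.9, 3.1, 2.8, 3.2],     # model 2
    [3.2, 3.1, 3.4, 3.0, 3.3],     # model 3
])
choice = one_standard_error_rule(errors)
```

The rule passes over the minimizing model 2 in favor of the smaller model 1, illustrating the bias toward parsimony that the abstract criticizes.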

  12. Marker-Based Estimation of Genetic Parameters in Genomics

    PubMed Central

    Hu, Zhiqiu; Yang, Rong-Cai

    2014-01-01

    Linear mixed model (LMM) analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS) as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing ‘big’ genomic data sets. PMID:25025305

  13. How Learning Logic Programming Affects Recursion Comprehension

    ERIC Educational Resources Information Center

    Haberman, Bruria

    2004-01-01

    Recursion is a central concept in computer science, yet it is difficult for beginners to comprehend. Israeli high-school students learn recursion in the framework of a special modular program in computer science (Gal-Ezer & Harel, 1999). Some of them are introduced to the concept of recursion in two different paradigms: the procedural…

  14. Well-poised generation of Apery-like recursions

    NASA Astrophysics Data System (ADS)

    Zudilin, Wadim

    2005-06-01

    The idea of using classical hypergeometric series, and in particular well-poised hypergeometric series, in diophantine problems on the values of polylogarithms has led to several novelties in number theory and neighbouring areas of mathematics. Here, we present a systematic approach to deriving second-order polynomial recursions for approximations to some values of the Lerch zeta function, depending on a fixed (but not necessarily real) parameter α satisfying a condition on Re(α). Substituting α = 0 into the resulting recurrence equations produces the famous recursions for rational approximations to ζ(2) and ζ(3) due to Apéry, as well as the known recursion for rational approximations to ζ(4). Multiple integral representations for solutions of the constructed recurrences are also given.
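
    For the α = 0 case, Apéry's recursion n³uₙ = (34n³ − 51n² + 27n − 5)uₙ₋₁ − (n − 1)³uₙ₋₂ can be checked numerically: with starting values a₀ = 1, a₁ = 5 and b₀ = 0, b₁ = 6, the ratios bₙ/aₙ converge rapidly to ζ(3). A short verification in exact rational arithmetic:

```python
from fractions import Fraction

def apery_approximations(n_terms):
    """Iterate Apery's second-order recursion
        n^3*u_n = (34n^3 - 51n^2 + 27n - 5)*u_{n-1} - (n-1)^3*u_{n-2},
    satisfied by both the integer sequence a_n and the rational sequence b_n;
    the ratios b_n/a_n are rational approximations to zeta(3)."""
    a = [Fraction(1), Fraction(5)]
    b = [Fraction(0), Fraction(6)]
    for n in range(2, n_terms):
        p = 34 * n**3 - 51 * n**2 + 27 * n - 5
        a.append((p * a[-1] - (n - 1) ** 3 * a[-2]) / n**3)
        b.append((p * b[-1] - (n - 1) ** 3 * b[-2]) / n**3)
    return [bn / an for an, bn in zip(a, b)]

approx = apery_approximations(10)
print(float(approx[-1]))  # ~ 1.2020569..., i.e. zeta(3)
```

    The denominators a₁, a₂, a₃, … come out as the Apéry numbers 1, 5, 73, 1445, …, and already the tenth ratio agrees with ζ(3) to machine precision.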

  15. The Least-Squares Estimation of Latent Trait Variables.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi

    This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by maximum likelihood method…

  16. Estimability of geodetic parameters from space VLBI observables

    NASA Technical Reports Server (NTRS)

    Adam, Jozsef

    1990-01-01

    The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic applications is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the Earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from the coordinate system definition and critical configurations are studied. Linear dependencies between partials are derived analytically. The mathematical models for ground-space baseline VLBI observables were tested with simulated data in the frame of numerical experiments. Singularity due to datum defect is confirmed.

  17. Kappa (κ): estimates, origins, and correlation to site characterisation parameters

    NASA Astrophysics Data System (ADS)

    Ktenidou, O. J.; Cotton, F.; Drouet, S.; Theodoulidis, N.; Chaljub, E. O.

    2012-12-01

    Knowledge of the acceleration spectral shape is important for various applications in engineering seismology. At high frequencies the spectral amplitude drops rapidly. Anderson and Hough (1984) modelled this drop with the spectral decay factor κ, observing that, above a certain frequency, the acceleration spectrum decreases linearly in lin-log space. Thirty years later, and although the debate over its source, path, and site components is still ongoing, κ constitutes a basic input parameter for the generation of stochastic ground motion and the calibration and adjustment of GMPEs. We study κ at the EUROSEISTEST site (http://euroseis.civil.auth.gr): a geologically complex site in Northern Greece, with a permanent strong-motion array including surface and downhole stations. Site effects are of great importance here, and records are available from a variety of conditions ranging from soft soil to hard rock. We derive the site-related component of κ (κ0) at 16 stations following two approaches: 1. directly, measuring κ on individual S-wave spectra and regressing to zero distance as per Anderson and Hough (1984), following the procedure proposed by Ktenidou et al. (2012); 2. indirectly, deriving station-specific κ0 values from the high-frequency part of the station transfer functions, which come from a source-path-site inversion procedure proposed by Drouet et al. (2008). The agreement in κ0 is good. This supports the notion that κ0 is primarily a site effect, since in the second approach source and path effects are accounted for separately. The two approaches also yield similar results for anelastic attenuation within the frequency range studied: both show low regional Q, comparable to the results of crustal Q studies in Greece. We focus on κ0 values, which range from 0.02 s to 0.08 s depending on site type. As expected, κ0 increases for soft sites, but so does the scatter. Because κ0 is considered a site effect proxy, we examine its correlation with local site
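
    The direct measurement in approach 1 rests on the Anderson and Hough (1984) model A(f) = A0·exp(−πκf) above a corner frequency, so κ follows from the slope of the log-amplitude spectrum against frequency. A minimal sketch of that fit; the frequency band, noise level, and κ value below are hypothetical, not values from the EUROSEISTEST data:

```python
import numpy as np

# Anderson & Hough (1984): above a frequency f_E, the S-wave acceleration
# spectrum behaves as A(f) = A0 * exp(-pi * kappa * f), so ln A is linear
# in f with slope -pi*kappa.
rng = np.random.default_rng(0)
kappa_true = 0.04                      # s (illustrative)
f = np.linspace(5.0, 25.0, 200)        # Hz, band above the corner frequency
ln_A = np.log(1e-2) - np.pi * kappa_true * f + rng.normal(0, 0.05, f.size)

slope, intercept = np.polyfit(f, ln_A, 1)  # straight-line fit in lin-log space
kappa_est = -slope / np.pi
print(f"kappa ~ {kappa_est:.4f} s")
```

    In practice the band is picked per record above the corner frequency, and the per-record κ values are then regressed against distance to isolate the zero-distance site term κ0.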

  18. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters when searching for optimal parameter sets. Little work has been done to study the impacts of realistic versus global ranges, as well as model complexities, on calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic-type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted, and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes at most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distributions of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has the more significant impact on parameter and uncertainty estimation.

  19. Estimation of distributional parameters for censored trace level water quality data. 2. Verification and applications

    USGS Publications Warehouse

    Helsel, D.R.; Gilliom, R.J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters.
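
    For intuition, the log probability regression idea can be sketched as a simplified regression-on-order-statistics procedure: regress the logs of the detected values against normal scores of their plotting positions, then impute the censored observations from the fitted line. This is only a stylized sketch under strong assumptions (a single detection limit below all detects, lognormal data); the detection limit and sample below are hypothetical, and the published LR method differs in detail:

```python
import math, random
from statistics import NormalDist

def lr_estimate(uncensored, n_censored):
    """Simplified log-probability-regression (ROS-style) sketch: regress
    log(detects) on normal scores of their plotting positions, impute the
    censored observations from the fitted line, then compute the mean and
    standard deviation of the completed sample."""
    n = len(uncensored) + n_censored
    nd = NormalDist()
    detects = sorted(uncensored)
    # Weibull plotting positions i/(n+1); censored values take the lowest ranks
    z = [nd.inv_cdf((n_censored + i + 1) / (n + 1)) for i in range(len(detects))]
    y = [math.log(v) for v in detects]
    # least-squares line y = a + b*z
    zbar, ybar = sum(z) / len(z), sum(y) / len(y)
    b = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y)) \
        / sum((zi - zbar) ** 2 for zi in z)
    a = ybar - b * zbar
    imputed = [math.exp(a + b * nd.inv_cdf((j + 1) / (n + 1)))
               for j in range(n_censored)]
    full = imputed + detects
    mean = sum(full) / n
    var = sum((x - mean) ** 2 for x in full) / (n - 1)
    return mean, math.sqrt(var)

random.seed(1)
sample = [math.exp(random.gauss(0.0, 1.0)) for _ in range(200)]
dl = 0.5                                   # hypothetical detection limit
detects = [x for x in sample if x >= dl]
mean, sd = lr_estimate(detects, len(sample) - len(detects))
print(mean, sd)   # close to the moments of the full, uncensored sample
```

    The appeal of the approach is exactly what the verification study reports: the censored lower tail is reconstructed from the distributional fit rather than substituted with an arbitrary fraction of the detection limit.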

  20. Continuous simulation for flood estimation in ungauged mesoscale catchments of Switzerland - Part II: Parameter regionalisation and flood estimation results

    NASA Astrophysics Data System (ADS)

    Viviroli, Daniel; Mittelbach, Heidi; Gurtz, Joachim; Weingartner, Rolf

    2009-10-01

    Flood estimations for ungauged mesoscale catchments are as important as they are difficult. So far, mainly empirical and stochastic methods have been used for this purpose. Experience shows, however, that these procedures entail major errors. In order to make further progress in flood estimation, a continuous precipitation-runoff modelling approach has been developed for practical application in Switzerland using the process-oriented hydrological modelling system PREVAH (Precipitation-Runoff-EVApotranspiration-HRU related model). The main goal of this approach is to obtain discharge hydrographs for any Swiss mesoscale catchment without measurements of discharge. Subsequently, the relevant flood estimations are derived from these hydrographs. On the basis of 140 calibrated catchments (Viviroli et al., 2009b), a parameter regionalisation scheme has been developed to estimate PREVAH's tuneable parameters where calibration is not possible. The scheme is based on three individual parameter estimation approaches, namely Nearest Neighbours (parameter transfer from catchments similar in attribute space), Kriging (parameter interpolation in physical space) and Regression (parameter estimation from relations to catchment attributes). The most favourable results were achieved when the simulations using these three individual regionalisations were combined by computing their median. It will be demonstrated that the framework introduced here yields plausible flood estimations for ungauged Swiss catchments. Comparing a flood with a return period of 100 years to the reference value derived from the observed record, the median error over 49 representative catchments is only -7%, while the error for half of these catchments ranges between -30% and +8%. Additionally, our estimate lies within the statistical 90% confidence interval of the reference value in more than half of these catchments. The average quality of these flood estimations compares well with present

  1. Estimation of genetic parameters for wool fiber diameter measures.

    PubMed

    Iman, N Y; Johnson, C L; Russell, W C; Stobart, R H

    1992-04-01

    Genetic and phenotypic correlations and heritability estimates of side, britch, and core diameters; side and britch CV; side and britch diameter difference; and clean fleece weight were investigated using 385 western white-faced ewes produced by 50 sires and maintained at two locations on a selection study. Data were analyzed using analysis of variance procedures, and effects in the final model included breed of sire-selection line combination, sire within breed-selection line, and location. Heritabilities were estimated by paternal half-sib analysis. Sires within breed-selection line represented a significant source of variation for all traits studied. Location had a significant effect on side diameter, side and britch diameter difference, and clean fleece weight. Age of ewe only affected clean fleece weight. Phenotypic and genetic correlations among side, britch, and core diameter measures were high and positive. Phenotypic correlations ranged from .68 to .75 and genetic correlations ranged from .74 to .89. The genetic correlations between side and britch diameter difference and side diameter or core diameter were small (-.16 and .28, respectively). However, there was a stronger genetic correlation between side and britch diameter difference and britch diameter (.55). Heritability of the difference between side and britch diameter was high (.46 +/- .16) and similar to heritability estimates reported for other wool traits. Results of this study indicate that relatively rapid genetic progress through selection for fiber diameter should be possible. In addition, increased uniformity in fiber diameter should be possible through selection for either side and britch diameter difference or side or britch CV.
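
    The paternal half-sib heritability estimate used above comes from a one-way sire ANOVA: since half-sibs share one quarter of the additive genetic variance, h² = 4σ²s/(σ²s + σ²w). A sketch for a balanced design; the simulated data below (50 sires, 10 offspring each, true h² of 0.4) are hypothetical, not the wool data:

```python
import random

def half_sib_heritability(groups):
    """Paternal half-sib heritability from a balanced one-way sire ANOVA:
    sigma2_s = (MS_between - MS_within) / n_per_sire, and since half-sibs
    share 1/4 of the additive variance, h2 = 4*sigma2_s/(sigma2_s + sigma2_w)."""
    s = len(groups)                 # number of sires
    n = len(groups[0])              # offspring per sire (balanced)
    grand = sum(sum(g) for g in groups) / (s * n)
    ms_b = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (s - 1)
    ms_w = sum((x - sum(g) / n) ** 2 for g in groups for x in g) / (s * (n - 1))
    sigma2_s = (ms_b - ms_w) / n
    return 4 * sigma2_s / (sigma2_s + ms_w)

# hypothetical simulated data: total phenotypic variance 1, true h2 = 0.4
random.seed(0)
sigma2_s = 0.4 / 4
groups = []
for _ in range(50):
    sire = random.gauss(0, sigma2_s ** 0.5)
    groups.append([sire + random.gauss(0, (1 - sigma2_s) ** 0.5)
                   for _ in range(10)])
h2_sim = half_sib_heritability(groups)
print(round(h2_sim, 2))   # near 0.4, within sampling error
```

    With only 50 sires the sampling error of h² is substantial (on the order of ±0.15 here), which is why the abstract reports its estimate with a standard error (.46 +/- .16).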

  2. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.

  3. Parameter estimates in binary black hole collisions using neural networks

    NASA Astrophysics Data System (ADS)

    Carrillo, M.; Gracia-Linares, M.; González, J. A.; Guzmán, F. S.

    2016-10-01

    We present an algorithm based on artificial neural networks (ANNs) that estimates the mass ratio in a binary black hole collision from given gravitational wave (GW) strains. In this analysis, the ANN is trained with a sample of GW signals generated with numerical simulations. The effectiveness of the algorithm is evaluated with GWs, also generated with simulations, for mass ratios unknown to the ANN. We measure the accuracy of the algorithm in the interpolation and extrapolation regimes. We present results for noise-free signals and signals contaminated with Gaussian noise, in order to assess the dependence of the method's accuracy on the signal-to-noise ratio.

  4. Parameter estimation method for improper fractional models and its application to molecular biological systems.

    PubMed

    Tian, Li-Ping; Liu, Lizhi; Wu, Fang-Xiang

    2010-01-01

    Derived from biochemical principles, molecular biological systems can be described by a group of differential equations. Generally these differential equations contain fractional functions plus polynomials (which we call an improper fractional model) as reaction rates. As a result, molecular biological systems are nonlinear in both parameters and states. It is well known that estimating parameters that enter a model nonlinearly is challenging. However, in the fractional functions both the denominator and numerator are linear in the parameters, and the polynomials are also linear in the parameters. Based on this observation, we develop an iterative linear least squares method for estimating parameters in biological systems modeled by improper fractional functions. The basic idea is to transform the optimization of a nonlinear least squares objective function into iteratively solving a sequence of linear least squares problems. The developed method is applied to the estimation of parameters in a metabolism system. The simulation results show the superior performance of the proposed method for estimating parameters in such molecular biological systems.
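
    As a stylized illustration of the idea (using a single Michaelis-Menten-type rate rather than the paper's full models): multiplying the rate v = Vmax·x/(Km + x) through by its denominator gives Vmax·x − Km·v = v·x, which is linear in (Vmax, Km). Solving that, then re-solving with weights from the current denominator estimate, turns the nonlinear fit into a sequence of linear least-squares problems. The data and noise level below are hypothetical:

```python
import numpy as np

# Synthetic observations of a fractional (Michaelis-Menten-type) rate law
rng = np.random.default_rng(3)
Vmax_true, Km_true = 2.0, 0.5
x = np.linspace(0.1, 5.0, 40)
v = Vmax_true * x / (Km_true + x) + rng.normal(0, 0.01, x.size)

w = np.ones_like(x)
for _ in range(10):
    # linear system: Vmax*x - Km*v = v*x, each row scaled by the weight w
    A = np.column_stack([x * w, -v * w])
    rhs = v * x * w
    (Vmax, Km), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    w = 1.0 / (Km + x)        # reweight by the estimated denominator

print(Vmax, Km)               # close to 2.0 and 0.5
```

    The reweighting step matters: without it, the multiplied-through residuals over-weight observations with large denominators, which is the classic objection to one-shot linearization.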

  5. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    PubMed Central

    Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks in order to overcome difficulties associated with lack of experimental data and large numbers of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability for the Escherichia coli K-12 W3110 dynamic model was investigated; the model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since the model could be fitted to most of the measured metabolites even when important measurements of intracellular metabolites and good initial estimates of the parameters are not available. PMID:25654103

  6. Estimation of kinetic model parameters in fluorescence optical diffusion tomography.

    PubMed

    Milstein, Adam B; Webb, Kevin J; Bouman, Charles A

    2005-07-01

    We present a technique for reconstructing the spatially dependent dynamics of a fluorescent contrast agent in turbid media. The dynamic behavior is described by linear and nonlinear parameters of a compartmental model or some other model with a deterministic functional form. The method extends our previous work in fluorescence optical diffusion tomography by parametrically reconstructing the time-dependent fluorescent yield. The reconstruction uses a Bayesian framework and parametric iterative coordinate descent optimization, which is closely related to Gauss-Seidel methods. We demonstrate the method with a simulation study.

  7. A New Method For Cosmological Parameter Estimation From SNIa Data

    NASA Astrophysics Data System (ADS)

    March, Marisa; Trotta, R.; Berkes, P.; Starkman, G. D.; Vaudrevange, P. M.

    2011-01-01

    We present a new methodology to extract constraints on cosmological parameters from SNIa data obtained with the SALT2 lightcurve fitter. The power of our Bayesian method lies in its full exploitation of relevant prior information, which is ignored by the usual chi-square approach. Using realistic simulated data sets we demonstrate that our method outperforms the usual chi-square approach 2/3 of the time while achieving better long-term coverage properties. A further benefit of our methodology is its ability to produce a posterior probability distribution for the intrinsic dispersion of SNe. This feature can also be used to detect hidden systematics in the data.

  8. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
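
    A sketch of the hybrid idea on a toy two-parameter objective, with plain finite-difference gradient descent standing in for the truncated-Newton step. Everything below (the objective, bounds, and GA settings) is hypothetical, not the groundwater model:

```python
import random
random.seed(42)

# Toy objective standing in for the groundwater misfit: a smooth
# two-parameter function with its minimum at (3.0, 0.5).
def f(p):
    a, b = p
    return (a - 3.0) ** 2 + 10 * (b - 0.5) ** 2 + 0.5 * (a - 3.0) * (b - 0.5)

def ga_search(f, bounds, pop=40, gens=60):
    """Plain GA: tournament selection, blend crossover, Gaussian mutation."""
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            p1 = min(random.sample(P, 3), key=f)   # tournament of 3
            p2 = min(random.sample(P, 3), key=f)
            child = [random.uniform(min(a, b), max(a, b)) for a, b in zip(p1, p2)]
            if random.random() < 0.2:              # occasional mutation
                i = random.randrange(len(child))
                child[i] += random.gauss(0, 0.1)
            nxt.append(child)
        P = nxt
    return min(P, key=f)

def local_refine(f, p, h=1e-5, step=0.05, iters=500):
    """Finite-difference gradient descent as a stand-in local polish."""
    p = list(p)
    for _ in range(iters):
        g = [(f(p[:i] + [p[i] + h] + p[i + 1:]) - f(p)) / h
             for i in range(len(p))]
        p = [pi - step * gi for pi, gi in zip(p, g)]
    return p

start = ga_search(f, [(0, 10), (0, 2)])   # global search
best = local_refine(f, start)             # local refinement
print(best)   # near [3.0, 0.5]
```

    The division of labour mirrors the paper: the GA explores the parameter space globally (where initial values matter little), and the gradient-based step supplies the final local accuracy.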

  9. Laboratory experiments for estimating chemical osmotic parameters of mudstones

    NASA Astrophysics Data System (ADS)

    Miyoshi, S.; Tokunaga, T.; Mogi, K.; Ito, K.; Takeda, M.

    2010-12-01

    Recent studies have quantitatively shown that mudstone can act as a semi-permeable membrane and can generate abnormally high pore pressure in sedimentary basins. The reflection coefficient is one of the important properties that affect the chemical osmotic behavior of mudstones. However, few quantitative studies on the reflection coefficient of mudstones have been done. We have developed a laboratory apparatus to observe chemical osmotic behavior, and a numerical simulation technique to estimate the reflection coefficient and other related properties of mudstones. A core sample of siliceous mudstone, obtained from the drilled core at Horonobe, Japan, was set into the apparatus and saturated with a 0.1 mol/L sodium chloride solution. Then, the solution in the up-side reservoir was replaced with a 0.05 mol/L sodium chloride solution, and temporal changes of both pressure and concentration of the solution in the up-side and bottom-side reservoirs were measured. Using the data obtained from the experiment, we estimated the reflection coefficient, effective diffusion coefficient, hydraulic conductivity, and specific storage of the sample by fitting the numerical simulation results to the observed ones. A preliminary numerical simulation of groundwater flow and solute migration was conducted in the area where the core sample was obtained, using the reflection coefficient and other properties obtained from this study. The result suggested that the abnormal pore pressure observed in the region can be explained by chemical osmosis.

  10. Uncertainties associated with parameter estimation in atmospheric infrasound arrays

    NASA Astrophysics Data System (ADS)

    Szuberla, Curt A. L.; Olson, John V.

    2004-01-01

    This study describes a method for determining the statistical confidence in estimates of direction-of-arrival and trace velocity stemming from signals present in atmospheric infrasound data. It is assumed that the signal source is far enough removed from the infrasound sensor array that a plane-wave approximation holds, and that multipath and multiple source effects are not present. Propagation path and medium inhomogeneities are assumed not to be known at the time of signal detection, but the ensemble of time delays of signal arrivals between array sensor pairs is estimable and corrupted by uncorrelated Gaussian noise. The method results in a set of practical uncertainties that lend themselves to a geometric interpretation. Although quite general, this method is intended for use by analysts interpreting data from atmospheric acoustic arrays, or those interested in designing and deploying them. The method is applied to infrasound arrays typical of those deployed as a part of the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization.
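
    Under the plane-wave approximation, each pair delay is the projection of the sensor separation onto the slowness vector, τᵢⱼ = (rⱼ − rᵢ)·s, so the ensemble of delays determines direction-of-arrival and trace velocity by least squares. A sketch with a hypothetical four-sensor geometry (not an IMS array) and Gaussian delay noise, as the abstract assumes:

```python
import numpy as np

# Hypothetical planar array geometry (km) and a synthetic plane wave
r = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [-0.6, 0.4]])
v_trace, az = 0.34, 60.0                 # km/s, degrees (east of north)
s = (1.0 / v_trace) * np.array([np.sin(np.radians(az)),
                                np.cos(np.radians(az))])  # slowness, s/km

# Delay for each sensor pair: tau_ij = (r_j - r_i) . s, plus Gaussian noise
pairs = [(i, j) for i in range(len(r)) for j in range(i + 1, len(r))]
D = np.array([r[j] - r[i] for i, j in pairs])
rng = np.random.default_rng(7)
tau = D @ s + rng.normal(0, 1e-3, len(pairs))

# Least-squares slowness estimate, then azimuth and trace velocity
s_hat, *_ = np.linalg.lstsq(D, tau, rcond=None)
az_hat = np.degrees(np.arctan2(s_hat[0], s_hat[1])) % 360.0
v_hat = 1.0 / np.linalg.norm(s_hat)
print(az_hat, v_hat)   # near 60 deg and 0.34 km/s
```

    Propagating the delay-noise covariance through this least-squares solve is what yields the geometric confidence regions for azimuth and trace velocity that the study describes.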

  11. Experimental approach for thermal parameters estimation during glass forming process

    NASA Astrophysics Data System (ADS)

    Abdulhay, B.; Bourouga, B.; Alzetto, F.; Challita, C.

    2016-10-01

    In this paper, an experimental device designed and developed to estimate thermal conditions at the glass/piston contact interface is presented. This device is made of two parts: the upper part contains the piston, made of metal, and a heating device to raise the temperature of the piston up to 500 °C. The lower part is composed of a lead crucible and a glass sample. The assembly is provided with a heating system, an induction furnace of 6 kW, for heating the glass up to 950 °C. The developed experimental procedure permitted, in a previously published study, estimation of the thermal contact resistance (TCR) using the inverse technique developed by Beck [1]. The semi-transparent character of the glass has been taken into account by an additional radiative heat flux and an equivalent thermal conductivity. After the set-up tests, reproducibility experiments for a specific contact pressure were carried out with a maximum dispersion that does not exceed 6%. Then, experiments under different conditions for a specific glass forming process regarding the application (packaging, buildings and automobile) were carried out. The objective is to determine experimentally, for each application, the typical conditions capable of minimizing the glass temperature loss during the glass forming process.

  12. Estimating canopy fuel parameters for Atlantic Coastal Plain forest types.

    SciTech Connect

    Parresol, Bernard, R.

    2007-01-15

    It is necessary to quantify forest canopy characteristics to assess crown fire hazard, prioritize treatment areas, and design treatments to reduce crown fire potential. A number of fire behavior models such as FARSITE, FIRETEC, and NEXUS require as input four particular canopy fuel parameters: 1) canopy cover, 2) stand height, 3) crown base height, and 4) canopy bulk density. These canopy characteristics must be mapped across the landscape at high spatial resolution to accurately simulate crown fire. Currently no models exist to forecast these four canopy parameters for forests of the Atlantic Coastal Plain, a region that supports millions of acres of loblolly, longleaf, and slash pine forests as well as pine-broadleaf forests and mixed-species broadleaf forests. Many forest cover types are recognized, too many to model efficiently. For expediency, forests of the Savannah River Site are categorized as belonging to 1 of 7 broad forest type groups, based on composition: 1) loblolly pine, 2) longleaf pine, 3) slash pine, 4) pine-hardwood, 5) hardwood-pine, 6) hardwoods, and 7) cypress-tupelo. These 7 broad forest types typify forests of the Atlantic Coastal Plain region, from Maryland to Florida.

  13. Recursive adjustment approach for the inversion of the Euler-Liouville Equation

    NASA Astrophysics Data System (ADS)

    Kirschner, S.; Seitz, F.

    2012-04-01

    Earth rotation is physically described by the Euler-Liouville equation, which is based on the balance of angular momentum in the Earth system. The Earth orientation parameters (EOP), polar motion and length of day, have been observed with high precision by geodetic methods over many decades. A sensitivity analysis showed that some weakly determined Earth parameters have a great influence on the numerical forward modeling of the EOP. Therefore we concentrate on the inversion of the Euler-Liouville equation in order to estimate and improve such parameters. A recursive adjustment approach makes the inversion of the Euler-Liouville equation efficient. Here we concentrate on estimating parameters related to the period and damping of the free rotation of the Earth (the Chandler oscillation). Before applying the approach to the complex Earth system, we demonstrate its concept on the simplified example of a spring-mass-damper system. The spring-mass-damper system is analogous to the damped Chandler oscillation, and the results can be transferred directly. The differential equation describing the motion of the spring also has the same structure as the Euler-Liouville equation. The spring constant and damping coefficient describing the anelastic behavior of the system correspond to the real and imaginary parts of the Earth's pole tide Love number. The simplified model is therefore ideal for studying various aspects, e.g. the influence of sampling rate, overall time frame, and number of observations on the numerical results. It is shown that the recursive adjustment approach is an adequate method for estimating the spring parameters and therewith the parameters describing the Earth's rheology. The study is carried out in the frame of the German research unit on Earth Rotation and Global Dynamic Processes.
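
    The spring-mass-damper analogue lends itself to a compact illustration: writing the oscillator as x'' = −(k/m)x − (c/m)x', the two coefficients can be recovered recursively as observations stream in. The sketch below uses standard recursive least squares with illustrative parameter values; the actual recursive adjustment of the Euler-Liouville equation is more involved:

```python
import numpy as np

# Damped oscillator x'' = -(k/m) x - (c/m) v, the simplified analogue of the
# Chandler oscillation.  Values of k/m and c/m are illustrative.
km_true, cm_true = 4.0, 0.2
dt, n = 0.01, 4000
x, v = 1.0, 0.0
theta = np.zeros(2)                  # running estimates of (k/m, c/m)
P = np.eye(2) * 1e6                  # large initial covariance (vague prior)
for _ in range(n):
    a = -km_true * x - cm_true * v   # "observed" acceleration at this step
    phi = np.array([-x, -v])         # regressor: a = phi . theta
    # standard recursive least-squares update
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (a - phi @ theta)
    P = P - np.outer(K, phi @ P)
    # advance the oscillator (semi-implicit Euler)
    v += a * dt
    x += v * dt

print(theta)   # near [4.0, 0.2]
```

    The recursive form is what makes the inversion efficient: each new observation updates the estimate and its covariance directly, without re-solving the full adjustment from scratch.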

  14. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    SciTech Connect

    Tuffner, Francis K.; Williams, Tess L.; Schneider, Kevin P.; Elizondo, Marcelo A.; Sun, Yannan; Liu, Chen-Ching; Xu, Yin; Gourisetti, Sri Nikhil Gup

    2015-09-30

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and improving insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected the majority of the time, and implementing and operating a microgrid is much different when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using

  15. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problem of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  16. Being surveyed can change later behavior and related parameter estimates

    PubMed Central

    Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit

    2011-01-01

    Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314

  17. Parameter estimation and analysis model selections in fluorescence correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Dong, Shiqing; Zhou, Jie; Ding, Xuemei; Wang, Yuhua; Xie, Shusen; Yang, Hongqin

    2016-10-01

    Fluorescence correlation spectroscopy (FCS) is a powerful technique that provides high temporal resolution for detecting the diffusion of biomolecules at extremely low concentrations. The accuracy of this approach depends primarily on the experimental conditions and on the data analysis model. In this study, we set up a confocal-based FCS system and used a Rhodamine 6G solution to calibrate it and obtain the related parameters. An experimental measurement was carried out on a one-component solution to evaluate the relationship between the measured number of molecules and concentration. The results showed that the FCS system we built was stable and valid. Finally, a two-component solution experiment was carried out to show the importance of analysis model selection. It is a promising method for single-molecule diffusion studies in living cells.
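    The analysis-model fitting described above can be sketched with the standard one-component 3D diffusion FCS model, G(tau) = (1/N) * (1 + tau/tau_D)^-1 * (1 + tau/(kappa^2 * tau_D))^-1/2. The numerical values and the brute-force grid fit below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def fcs_model(tau, N, tau_d, kappa=5.0):
    """One-component 3D diffusion FCS autocorrelation model."""
    return (1.0 / N) / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (kappa**2 * tau_d)))

# Synthetic "measured" curve (hypothetical values: N = 5 molecules, tau_d = 0.1 ms)
tau = np.logspace(-6, 0, 200)            # lag times in seconds
g_meas = fcs_model(tau, 5.0, 1e-4)

# Brute-force least-squares fit over a parameter grid
best = None
for N in np.linspace(1, 10, 91):
    for tau_d in np.logspace(-5, -3, 81):
        err = np.sum((fcs_model(tau, N, tau_d) - g_meas) ** 2)
        if best is None or err < best[0]:
            best = (err, N, tau_d)

_, N_fit, tau_d_fit = best               # recovers N ~ 5, tau_d ~ 1e-4
```

    In practice a nonlinear least-squares routine would replace the grid search, but the grid makes the shape of the one-component fitting problem explicit.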

  18. Tumor parameter estimation considering the body geometry by thermography.

    PubMed

    Hossain, Shazzat; Mohammadi, Farah A

    2016-09-01

    Implementation of non-invasive, non-contact, radiation-free thermal diagnostic tools requires an accurate correlation between surface temperature and interior physiology derived from living bio-heat phenomena. Such associations in the chest, forearm, and natural and deformed breasts have been investigated using finite element analysis (FEA), where the geometry and heterogeneity of an organ are accounted for by creating anatomically accurate FEA models. The quantitative links are involved in the proposed evolutionary methodology for predicting unknown physio-thermo-biological parameters, including the depth, size and metabolic rate of the underlying nodule. A custom genetic algorithm (GA) is tailored to parameterize a tumor by minimizing a fitness function. The study employed the finite element method to develop simulated data sets and the gradient matrix. Furthermore, simulated thermograms are obtained by enveloping the data sets with ±10% random noise.
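    A minimal sketch of the GA idea: evolve candidate (depth, radius, metabolic rate) triples to minimize the mismatch with a measured thermal feature. The forward model here is a made-up stand-in for the paper's FEA model, and a single scalar feature leaves the inverse problem ill-posed (the paper fits full thermograms); the sketch only shows the selection/crossover/mutation loop:

```python
import random

random.seed(0)

# Hypothetical forward model: peak surface temperature rise from a spherical
# source of radius r and metabolic rate q buried at depth d. Illustrative only.
def surface_temp(d, r, q):
    return q * r**3 / (d + 0.5) ** 2

TARGET = surface_temp(1.5, 0.8, 0.05)    # "measured" thermogram feature

def fitness(ind):
    d, r, q = ind
    return (surface_temp(d, r, q) - TARGET) ** 2

def random_ind():
    return [random.uniform(0.5, 3.0),    # depth (cm)
            random.uniform(0.2, 1.5),    # radius (cm)
            random.uniform(0.01, 0.1)]   # metabolic rate (W/cm^3)

pop = [random_ind() for _ in range(40)]
init_best = min(fitness(p) for p in pop)

for gen in range(60):
    pop.sort(key=fitness)
    elite = pop[:8]                      # elitism: always keep the best found
    children = []
    while len(children) < 32:
        a, b = random.sample(elite, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
        i = random.randrange(3)
        child[i] *= 1 + random.gauss(0, 0.1)          # multiplicative mutation
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)             # fitness never worse than init_best
```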

  19. Modal parameter estimation via shaker vs speaker excitation

    SciTech Connect

    Weaver, H.J.; Burdick, R.B.

    1984-05-01

    When dynamically testing delicate laser components (e.g. an elliptical glass laser disc) it is often impossible to provide a direct contact excitation source such as an impact hammer or shaker. This is because of the delicate and/or brittle nature of the material from which the components are constructed. The alternate approach that is often used in a test of this type is to excite the component with an acoustic speaker. In this paper we describe a small series of tests in which we compare the modal parameters obtained by using a speaker as an excitation source with those obtained on the same object when the excitation was provided by a shaker.

  20. Estimation of Parameters in Latent Class Models with Constraints on the Parameters.

    DTIC Science & Technology

    1986-06-01

    the item parameters. Let us briefly review the elements of latent class models. The reader desiring a thorough introduction can consult Lazarsfeld and... parameters, including most of the models which have been proposed to date. The latent distance model of Lazarsfeld and Henry (1968) and the quasi... Psychometrika, 1964, 29, 115-129. Lazarsfeld, P.F., and Henry, N.W. Latent structure analysis. Boston: Houghton-Mifflin, 1968.

  1. Assessing the Effect of Model-Data Misfit on the Invariance Property of IRT Parameter Estimates.

    ERIC Educational Resources Information Center

    Fan, Xitao; Ping, Yin

    This study empirically investigated the potential negative effect of item response theory (IRT) model-data misfit on the degree of invariance of: (1) IRT item parameter estimates (item difficulty and discrimination); and (2) IRT person ability parameter estimates. A large-scale statewide assessment program test database was used, for which the…

  2. Evaluating the Robustness of Graded Response Model and Classical Test Theory Parameter Estimates to Deviant Items.

    ERIC Educational Resources Information Center

    Sinar, Evan F.; Zickar, Michael J.

    2002-01-01

    Examined the influence of deviant scale items on item parameter estimates of focal scale items and person parameter estimates through a comparison of item response theory (IRT) and classical test theory (CTT) models. Used Monte Carlo methods to explore results from a pilot investigation of job attitude data. Discusses implications for researchers…

  3. Parameter estimation technique for boundary value problems by spline collocation method

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1988-01-01

    A parameter-estimation technique for boundary-integral equations of the second kind is developed. The output least-squares identification technique using the spline collocation method is considered. The convergence analysis for the numerical method is discussed. The results are applied to boundary parameter estimations for two-dimensional Laplace and Helmholtz equations.

  4. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  5. Approximation techniques for parameter estimation and feedback control for distributed models of large flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rosen, I. G.

    1984-01-01

    Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.

  6. Estimation of teleported and gained parameters in a non-inertial frame

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2017-04-01

    Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of which is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the gained parameters during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the used single mode approximation (within/beyond), the type of encoded information (classic/quantum) in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.

  7. Parameter estimation based synchronization for an epidemic model with application to tuberculosis in Cameroon

    NASA Astrophysics Data System (ADS)

    Bowong, Samuel; Kurths, Jurgen

    2010-10-01

    We propose a method based on synchronization to identify the parameters and to estimate the underlying variables of an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach, with an effective guidance parameter in the update rule designed only from real data. In order to validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Center region of Cameroon are performed to estimate the parameters and variables. This study shows that tools from the synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.
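    The observer-plus-adaptation idea can be shown on a toy scalar system rather than the TB model: the observer synchronizes with the measured state while a Lyapunov-based update law drives the parameter estimate toward the true value. All gains and the system itself are illustrative assumptions:

```python
import math

# True system:  x' = -theta*x + u(t), theta unknown; x is measured.
# Observer:     xh' = -theta_h*x + u(t) + k*(x - xh)
# Adaptation:   theta_h' = -gamma*(x - xh)*x   (Lyapunov-based update law)
theta = 0.8            # "unknown" true parameter
k, gamma = 2.0, 5.0    # observer and adaptation gains (tuning assumptions)
dt, T = 1e-3, 50.0

x, xh, theta_h = 1.0, 0.0, 0.0
t = 0.0
while t < T:
    u = math.sin(t)                      # persistently exciting input
    e = x - xh                           # synchronization error
    dx = -theta * x + u
    dxh = -theta_h * x + u + k * e
    dth = -gamma * e * x
    x += dt * dx                         # forward-Euler integration
    xh += dt * dxh
    theta_h += dt * dth
    t += dt

print(theta_h)    # converges toward theta = 0.8
```

    With V = e^2/2 + (theta - theta_h)^2/(2*gamma), this update law gives dV/dt = -k*e^2 <= 0, which is the standard argument behind such synchronization-based estimators.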

  8. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
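    A sketch of the approach under stated assumptions: a three-element Windkessel (C*dPc/dt = Q - Pc/R, P = Pc + Zc*Q) generates a reference pressure, and random (R, C) samples with Zc held fixed are scored against it. The flow waveform, parameter values, and bounds are all hypothetical, not the dog data:

```python
import random
import math

random.seed(1)

def simulate_pressure(Zc, R, C, flow, dt):
    """Three-element Windkessel: C*dPc/dt = Q - Pc/R, P = Pc + Zc*Q."""
    Pc, out = 60.0, []
    for Q in flow:
        Pc += dt * (Q - Pc / R) / C
        out.append(Pc + Zc * Q)
    return out

dt = 0.01
# Hypothetical half-sine ejection (0.3 s) in a 0.8 s beat, repeated 5 beats
flow = [max(0.0, 300 * math.sin(math.pi * (t % 0.8) / 0.3)) if (t % 0.8) < 0.3
        else 0.0 for t in (i * dt for i in range(400))]

Zc_true, R_true, C_true = 0.05, 1.0, 1.5
p_ref = simulate_pressure(Zc_true, R_true, C_true, flow, dt)  # "measured" pressure

def rmse(p):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, p_ref)) / len(p_ref))

# Monte Carlo search over (R, C) with the characteristic resistance fixed
best = (float("inf"), None, None)
for _ in range(5000):
    R = random.uniform(0.5, 2.0)
    C = random.uniform(0.5, 3.0)
    err = rmse(simulate_pressure(Zc_true, R, C, flow, dt))
    if err < best[0]:
        best = (err, R, C)

err_best, R_fit, C_fit = best            # lands near (R_true, C_true)
```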

  9. Recursive bias estimation and L2 boosting

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm, and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally, we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
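    The recursion itself is short: with smoother matrix S, each step refits the current residual and adds the correction, m_k = m_{k-1} + S(y - m_{k-1}), so the residual evolves as (I - S)^k y. A minimal sketch with a Nadaraya-Watson smoother on synthetic data (bandwidth and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Nadaraya-Watson smoother matrix S (Gaussian kernel, bandwidth h)
h = 0.1
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
S = K / K.sum(axis=1, keepdims=True)     # rows sum to 1

# L2 boosting: m_k = m_{k-1} + S @ (y - m_{k-1})
m = np.zeros(n)
res_norms = []
for _ in range(25):
    m = m + S @ (y - m)                  # refit the residual, add correction
    res_norms.append(np.linalg.norm(y - m))
```

    The training residual shrinks with each iteration here because the eigenvalues of this S lie in the unit disk; a stopping rule (early stopping) is what prevents the recursion from eventually overfitting, which matches the paper's point about divergent sequences for some smoothers.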

  10. Frequency-dependent core shifts and parameter estimation in Blazars

    NASA Astrophysics Data System (ADS)

    Agarwal, Aditi

    2016-07-01

    We study the core shift effect in the parsec-scale jet of blazars using the 4.8-36.8 GHz radio light curves obtained from four decades of continuous monitoring. From a piecewise Gaussian fit to each flare, time lags between the observation frequencies and spectral indices (α) based on peak amplitudes (A) are determined. Index k is calculated and found to be ˜1, indicating equipartition between the magnetic field energy density and the particle energy density. A mean magnetic field strength at 1 pc (B1) and at the core (Bcore) are inferred which are found to be consistent with previous estimates. The measure of core position offset is also performed by averaging over all frequency pairs. Based on the statistical trend shown by the measured core radius as a function of frequency, we infer that the synchrotron opacity model may not be valid for all cases. A Fourier periodogram analysis yields power-law slopes in the range -1.6 to -3.5 describing the power spectral density shape and gives bend timescales. This result, and both positive and negative spectral indices, indicate that the flares originate from multiple shocks in a small region. Important objectives met in our study include: the demonstration of the computational efficiency and statistical basis of the piecewise Gaussian fit; consistency with previously reported results; evidence for the core shift dependence on observation frequency and its utility in jet diagnostics in the region close to the resolving limit of very long baseline interferometry observations.
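    The time-lag step described above (a Gaussian fit per flare per frequency, with the lag taken between fitted peak times) can be sketched on synthetic light curves. A clean way to fit a single Gaussian flare is to note that its logarithm is a parabola; the frequencies, amplitudes, and the 0.3 yr lag below are invented for illustration:

```python
import numpy as np

# Synthetic single flare seen at two frequencies; the higher frequency
# peaks earlier (hypothetical lag of 0.3 yr)
t = np.linspace(0, 5, 500)

def gaussian(t, A, t0, w):
    return A * np.exp(-0.5 * ((t - t0) / w) ** 2)

flux_hi = gaussian(t, 2.0, 2.0, 0.5)     # e.g. the 36.8 GHz band
flux_lo = gaussian(t, 1.2, 2.3, 0.6)     # e.g. the 8 GHz band

def fit_peak(t, f):
    """Fit a Gaussian by least squares on log-flux (a parabola)."""
    mask = f > f.max() * 0.05            # avoid log of near-zero flux
    c = np.polyfit(t[mask], np.log(f[mask]), 2)
    return -c[1] / (2 * c[0])            # vertex of the parabola = peak time

lag = fit_peak(t, flux_lo) - fit_peak(t, flux_hi)
print(lag)    # ≈ 0.3
```

    On real data, overlapping flares require the piecewise (multi-component) fit the abstract describes; this sketch shows only the single-flare building block.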

  11. Estimation of cauliflower mass transfer parameters during convective drying

    NASA Astrophysics Data System (ADS)

    Sahin, Medine; Doymaz, İbrahim

    2017-02-01

    The study was conducted to evaluate the effect of pre-treatments such as citric acid and hot water blanching and air temperature on drying and rehydration characteristics of cauliflower slices. Experiments were carried out at four different drying air temperatures of 50, 60, 70 and 80 °C with the air velocity of 2.0 m/s. It was observed that drying and rehydration characteristics of cauliflower slices were greatly influenced by air temperature and pre-treatment. Six commonly used mathematical models were evaluated to predict the drying kinetics of cauliflower slices. The Midilli et al. model described the drying behaviour of cauliflower slices at all temperatures better than other models. The values of effective moisture diffusivities (Deff) were determined using Fick's law of diffusion and were between 4.09 × 10-9 and 1.88 × 10-8 m2/s. Activation energy was estimated by an Arrhenius type equation and was 23.40, 29.09 and 26.39 kJ/mol for citric acid, blanch and control samples, respectively.
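    The Arrhenius step is a linear regression: since Deff = D0 * exp(-Ea/(R*T)), ln(Deff) is linear in 1/T and Ea comes from the slope. A self-contained sketch with illustrative values (not the paper's measured diffusivities):

```python
import math

R = 8.314                                # gas constant, J/(mol*K)
T = [323.15, 333.15, 343.15, 353.15]     # 50-80 °C in kelvin
Ea_true, D0 = 25000.0, 1e-4              # hypothetical activation energy, J/mol
Deff = [D0 * math.exp(-Ea_true / (R * Tk)) for Tk in T]

# Least-squares line through (1/T, ln(Deff)); Ea = -slope * R
xs = [1.0 / Tk for Tk in T]
ys = [math.log(D) for D in Deff]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys)) / \
        sum((xi - xbar) ** 2 for xi in xs)
Ea_fit = -slope * R
print(Ea_fit)    # ≈ 25000 J/mol
```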

  12. Estimation of fatigue damage parameters using guided wave technique

    NASA Astrophysics Data System (ADS)

    Rathod, V. T.; Roy Mahapatra, D.

    2014-03-01

    In the present work we have considered the problem of monitoring a fatigue crack growth in a thin plate specimen. The problem is first solved analytically by modeling the structure with a cyclic plastic zone around the crack. The damaged region is modeled as a visco-elastic zone and other regions are modeled as elastic zones. Using the one-dimensional guided wave model, the reflected and transmitted energies of the guided waves from the fatigue crack and plastic zone are studied. Experimental study of the reflected and transmitted energies is done using guided waves generated and received by piezoelectric wafers. The reflected and transmitted energies are derived at various cycles of fatigue loading till the failure of the structure. Validation of the results from the analytical model is done by comparing the results obtained from the experiments. The reflected and transmitted energy is related to the size of crack size or the magnitude of loading. Using crack size and the nature of loading, a method is proposed to estimate the fatigue life using fracture mechanics approach.

  14. Estimating stellar wind parameters from low-resolution magnetograms

    NASA Astrophysics Data System (ADS)

    Jardine, M.; Vidotto, A. A.; See, V.

    2017-02-01

    Stellar winds govern the angular momentum evolution of solar-like stars throughout their main-sequence lifetime. The efficiency of this process depends on the geometry of the star's magnetic field. There has been a rapid increase recently in the number of stars for which this geometry can be determined through spectropolarimetry. We present a computationally efficient method to determine the 3D geometry of the stellar wind and to estimate the mass-loss rate and angular momentum loss rate based on these observations. Using solar magnetograms as examples, we quantify the extent to which the values obtained are affected by the limited spatial resolution of stellar observations. We find that for a typical stellar surface resolution of 20°-30°, predicted wind speeds are within 5 per cent of the value at full resolution. Mass-loss rates and angular momentum loss rates are within 5-20 per cent. In contrast, the predicted X-ray emission measures can be underestimated by one to two orders of magnitude, and their rotational modulations by 10-20 per cent.

  15. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
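    Ranking a model hierarchy by Akaike indices reduces to computing AIC = 2k - 2 ln L for each candidate and preferring smaller values. A toy sketch with nested polynomial models and Gaussian errors stands in for the paper's ODE/DDE family (data and models are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.2, n)   # true model: quadratic

def aic_poly(deg):
    """AIC = 2k - 2 ln L for a degree-`deg` polynomial with Gaussian errors."""
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    sigma2 = np.mean(resid**2)           # MLE of the noise variance
    k = deg + 2                          # coefficients plus the variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

aics = {deg: aic_poly(deg) for deg in (1, 2, 3, 4)}
best_deg = min(aics, key=aics.get)       # the hierarchy's preferred model
```

    The penalty term 2k is what keeps the richer models from winning on likelihood alone, which is the point of arranging the parametrized models in an AIC hierarchy.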

  16. Improvement in Recursive Hierarchical Segmentation of Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2006-01-01

    A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC- 14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides for a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent-region and spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.

  17. Synchronization-based approach for estimating all model parameters of chaotic systems.

    PubMed

    Konnur, Rahul

    2003-02-01

    The problem of dynamic estimation of all parameters of a model representing chaotic and hyperchaotic systems using information from a scalar measured output is solved. The variational calculus based method is robust in the presence of noise, enables online estimation of the parameters and is also able to rapidly track changes in operating parameters of the experimental system. The method is demonstrated using the Lorenz, Rossler chaos, and hyperchaos models. Its possible application in decoding communications using chaos is discussed.

  19. Bayesian estimation of regularization parameters for deformable surface models

    SciTech Connect

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-02-20

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strength of the priors, or hyperparameters, is determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.

  20. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

    The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line of sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least squares fitted to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table are required input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  1. Experimental estimation of one-parameter qubit gates in the presence of phase diffusion

    SciTech Connect

    Brivio, Davide; Cialdi, Simone; Vezzoli, Stefano; Gebrehiwot, Berihu Teklu; Genoni, Marco G.; Olivares, Stefano; Paris, Matteo G. A.

    2010-01-15

    We address estimation of one-parameter qubit gates in the presence of phase diffusion. We evaluate the ultimate quantum limits to precision, seek optimal probes and measurements, and demonstrate an optimal estimation scheme for polarization encoded optical qubits. An adaptive method to achieve optimal estimation in any working regime is also analyzed in detail and experimentally implemented.

  2. Bias-compensation-based least-squares estimation with a forgetting factor for output error models with white noise

    NASA Astrophysics Data System (ADS)

    Wu, A. G.; Chen, S.; Jia, D. L.

    2016-05-01

    In this paper, the bias-compensation-based recursive least-squares (LS) estimation algorithm with a forgetting factor is proposed for output error models. First, for the unknown white noise, the so-called weighted average variance is introduced. With this weighted average variance, a bias-compensation term is formulated to achieve bias-eliminated estimates of the system parameters. Then, the weighted average variance is estimated. Finally, the final estimation algorithm is obtained by combining the estimation of the weighted average variance with the recursive LS estimation algorithm with a forgetting factor. The effectiveness of the proposed identification algorithm is verified by a numerical example.
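    The underlying recursion is standard recursive least squares with forgetting factor lam: a gain vector, an estimate update driven by the prediction error, and a covariance update divided by lam. The sketch below is the generic textbook RLS on a noise-free example, not the paper's bias-compensated variant:

```python
import numpy as np

rng = np.random.default_rng(7)

def rls(phis, ys, dim, lam=0.98):
    """Recursive least squares with forgetting factor for y = phi^T theta."""
    theta = np.zeros(dim)
    P = 1e4 * np.eye(dim)                # large initial covariance (weak prior)
    for phi, y in zip(phis, ys):
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)            # gain vector
        theta = theta + k * (y - phi @ theta)    # correct with prediction error
        P = (P - np.outer(k, Pphi)) / lam        # discount old information
    return theta

theta_true = np.array([1.5, -0.7])
phis = rng.normal(size=(300, 2))         # regressor vectors
ys = phis @ theta_true                   # noise-free outputs
theta_hat = rls(phis, ys, 2)             # converges to theta_true
```

    The forgetting factor lam < 1 down-weights old data geometrically, which is what lets the recursion track slowly varying parameters; the paper's contribution is the bias-compensation term added on top of this recursion for the output-error noise structure.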

  3. Parameter estimation using carbon-14 ages: Lessons from the Danube-Tisza interfluvial region of Hungary

    USGS Publications Warehouse

    Sanford, W.E.; Deak, J.; Revesz, K.

    2002-01-01

    Parameter estimation was conducted on a groundwater model of the Danube-Tisza interfluvial region of Hungary. The model was calibrated using 300 water levels and 48 14C ages. The model provided a test of regression methods for a system with a large number of observations. Up to 103 parameters representing horizontal and vertical hydraulic conductivities and boundary conductances were assigned using point values and bilinear interpolation between points. The lowest errors were obtained using an iterative approach with groups of parameters, rather than estimating all of the parameters simultaneously. The model with 48 parameters yielded the lowest standard error of regression.

  4. Hydrological Parameter Estimation (HYPE) System for Bayesian Exploration of Parameter Sensitivities in an Arctic Watershed

    NASA Astrophysics Data System (ADS)

    Morton, D.; Bolton, W. R.; Endalamaw, A. M.; Young, J. M.; Hinzman, L. D.

    2014-12-01

    As part of a study on how vegetation water use and permafrost dynamics impact stream flow in the boreal forest discontinuous permafrost zone, a Bayesian modeling framework has been developed to assess the effect of parameter uncertainties in an integrated vegetation water use and simple, first-order, non-linear hydrological model. Composed of a front-end Bayes driver and a backend interactive hydrological model, the system is meant to facilitate rapid execution of seasonal simulations driven by hundreds to thousands of parameter variations to analyze the sensitivity of the system to a varying parameter space in order to derive more effective parameterizations for larger-scale simulations. The backend modeling component provides an Application Programming Interface (API) for introducing parameters in the form of constant or time-varying scalars or spatially distributed grids. In this work, we describe the basic structure of the flexible, object-oriented modeling system and test its performance against collected basin data from headwater catchments of varying permafrost extent and ecosystem structure (deciduous versus coniferous vegetation). We will also analyze model and sub-model (evaporation, transpiration, precipitation and streamflow) sensitivity to parameters through application of the system to two catchment basins of the Caribou-Poker Creeks Research Watershed (CPCRW) located in Interior Alaska. The C2 basin is a mostly permafrost-free, south facing catchment dominated by deciduous vegetation. The C3 basin is underlain by more than 50% permafrost and is dominated by coniferous vegetation. The ultimate goal of the modeling system is to improve parameterizations in mesoscale hydrologic models, and application of the HYPE system to the well-instrumented CPCRW provides a valuable opportunity for experimentation.

  5. A new constant memory recursion for hidden Markov models.

    PubMed

    Bartolucci, Francesco; Pandolfi, Silvia

    2014-02-01

    We develop the recursion for hidden Markov (HM) models proposed by Bartolucci and Besag (2002), and we show how it may be used to implement an estimation algorithm for these models that requires an amount of memory not depending on the length of the observed series of data. This recursion allows us to obtain the conditional distribution of the latent state at every occasion, given the previous state and the observed data. With respect to the estimation algorithm based on the well-known Baum-Welch recursions, which requires an amount of memory that increases with the sample size, the proposed algorithm also has the advantage of not requiring dummy renormalizations to avoid numerical problems. Moreover, it directly allows us to perform global decoding of the latent sequence of states, without the need of a Viterbi method and with a consistent reduction of the memory requirement with respect to the latter. The proposed approach is compared, in terms of computing time and memory requirement, with the algorithm based on the Baum-Welch recursions and with the so-called linear memory algorithm of Churbanov and Winters-Hilt. The comparison is also based on a series of simulations involving an HM model for continuous time-series data.
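    For contrast with the Baum-Welch recursions discussed above, the basic constant-memory building block is a scaled filtering recursion that carries only the current state distribution forward, so storage does not grow with the series length. This is a generic normalized forward recursion, not the authors' exact algorithm:

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # state transition matrix
B = np.array([[0.7, 0.3],
              [0.1, 0.9]])               # emission probs for symbols {0, 1}
pi = np.array([0.5, 0.5])                # initial state distribution

obs = [0, 0, 1, 1, 1, 0, 1]              # observed symbol sequence

alpha = pi * B[:, obs[0]]
alpha /= alpha.sum()                     # normalize: p(state_1 | y_1)
for y in obs[1:]:
    alpha = (alpha @ A) * B[:, y]        # predict with A, correct with B
    alpha /= alpha.sum()                 # p(state_t | y_1..t)

print(alpha)                             # filtered distribution at the last time
```

    The per-step normalization plays the role the paper attributes to its recursion: it keeps the quantities on a probability scale without the dummy renormalizations needed to stabilize the unscaled Baum-Welch forward variables.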

  6. Correction of biased climate simulated by biased physics through parameter estimation in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun

    2016-09-01

    Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.

  7. Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter

    PubMed Central

    Reddy, Chinthala P.; Rathi, Yogesh

    2016-01-01

    Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956

  8. Core Recursive Hierarchical Image Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James

    2011-01-01

    The Recursive Hierarchical Image Segmentation (RHSEG) software has been repackaged to provide a version of the RHSEG software that is not subject to patent restrictions and that can be released to the general public through NASA GSFC's Open Source release process. Like the Core HSEG Software Package, this Core RHSEG Software Package also includes a visualization program called HSEGViewer along with a utility program HSEGReader. It also includes an additional utility program called HSEGExtract. The unique feature of the Core RHSEG package is that it is a repackaging of the RHSEG technology designed specifically to exclude certain patented software technology. Unlike the Core HSEG package, it includes the recursive portions of the technology, but does not include the processing-window artifact elimination technology.

  9. Online Vegetation Parameter Estimation in Passive Microwave Regime for Soil Moisture Estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Remote sensing observations in the passive microwave regime can be used to estimate surface soil moisture over land at global and regional scales. Soil moisture is important to applications such as weather forecasting, climate and agriculture. One approach to estimating soil moisture from remote sen...

  10. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  11. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or aileron inputs individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  12. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  13. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach, and a chemical master equation based technique with the least squares approach for parameter estimation in ordinary differential equation (ODE) models. As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
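    A toy version of this comparison setting can be sketched as follows: a single Gillespie realization of the immigration-death model is fitted by naive least squares against the deterministic mean trajectory. The grid-search fitting and all rate values are illustrative assumptions, not the paper's MSS or Bayesian machinery:

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie_imm_death(k1, k2, x0, t_end):
    """Exact simulation of an immigration-death process:
    immigration at rate k1, death at rate k2 per individual."""
    t, x = 0.0, x0
    ts, xs = [0.0], [x0]
    while True:
        a0 = k1 + k2 * x                      # total event rate
        t += rng.exponential(1.0 / a0)
        if t >= t_end:
            break
        x += 1 if rng.random() < k1 / a0 else -1
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

def mean_traj(k1, k2, x0, t):
    """Mean of the process, i.e. the deterministic ODE solution."""
    return k1 / k2 + (x0 - k1 / k2) * np.exp(-k2 * t)

ts, xs = gillespie_imm_death(10.0, 0.5, 0, 20.0)

# naive least squares: grid search of the misfit over (k1, k2)
k1s = np.linspace(5.0, 15.0, 41)
k2s = np.linspace(0.1, 1.0, 46)
sse = [((xs - mean_traj(k1, k2, 0, ts)) ** 2).sum()
       for k1 in k1s for k2 in k2s]
best = int(np.argmin(sse))
k1_hat, k2_hat = k1s[best // 46], k2s[best % 46]
print(k1_hat, k2_hat)
```

As the paper's comparison suggests, this deterministic-mean fit ignores the intrinsic fluctuations of the realization, which is exactly what the specialized methods account for.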

  14. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.

  15. Parameter estimation for chaotic systems based on improved boundary chicken swarm optimization

    NASA Astrophysics Data System (ADS)

    Chen, Shaolong; Yan, Renhuan

    2016-10-01

    Estimating unknown parameters of a chaotic system is a key problem in the field of chaos control and synchronization. By constructing an appropriate fitness function, parameter estimation of a chaotic system can be converted into a multidimensional optimization problem. In this paper, a new method based on an improved boundary chicken swarm optimization (IBCSO) algorithm is proposed for solving the parameter estimation problem in chaotic systems. To the best of our knowledge, there is no previously published work on chicken swarm optimization for parameter estimation of chaotic systems. Computer simulations based on the Lorenz system, and comparisons with chicken swarm optimization, particle swarm optimization, and a genetic algorithm, show the effectiveness and feasibility of the proposed method.

  16. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

    NASA Astrophysics Data System (ADS)

    Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.

    2016-03-01

    A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional optimization problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO-ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO-ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO-ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms show that PSO-ACO is a very powerful tool for parameter estimation with high accuracy and low deviations.
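    Neither PSO-ACO nor IBCSO is reproduced here, but the underlying formulation can be illustrated in a deliberately simplified setting: with a fully observed, noise-free Lorenz trajectory, each parameter enters its equation linearly, so the optimization problem collapses to least squares on finite-difference derivatives (an assumption-laden shortcut; the swarm methods exist precisely because real settings are not this benign):

```python
import numpy as np

def lorenz_rhs(s, sigma, rho, beta):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# generate a trajectory (explicit Euler with a small step)
sigma_t, rho_t, beta_t, dt = 10.0, 28.0, 8.0 / 3.0, 1e-3
traj = np.empty((5001, 3))
traj[0] = [1.0, 1.0, 1.0]
for k in range(5000):
    traj[k + 1] = traj[k] + dt * lorenz_rhs(traj[k], sigma_t, rho_t, beta_t)

d = np.diff(traj, axis=0) / dt          # finite-difference derivatives
x, y, z = traj[:-1].T

# each equation is linear in its parameter, so least squares suffices:
sigma_hat = (d[:, 0] @ (y - x)) / ((y - x) @ (y - x))   # dx/dt = sigma*(y-x)
rho_hat = ((d[:, 1] + x * z + y) @ x) / (x @ x)         # dy/dt = rho*x - xz - y
beta_hat = ((x * y - d[:, 2]) @ z) / (z @ z)            # dz/dt = xy - beta*z
print(sigma_hat, rho_hat, beta_hat)
```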

  17. Kinetic parameters estimation in an anaerobic digestion process using successive quadratic programming.

    PubMed

    Aceves-Lara, C A; Aguilar-Garnica, E; Alcaraz-González, V; González-Reynoso, O; Steyer, J P; Dominguez-Beltran, J L; González-Alvarez, V

    2005-01-01

    In this work, an optimization method is implemented in an anaerobic digestion model to estimate its kinetic parameters and yield coefficients. This method combines the use of advanced state estimation schemes and powerful nonlinear programming techniques to yield fast and accurate estimates of the aforementioned parameters. In this method, we first implement an asymptotic observer to provide estimates of the non-measured variables (such as biomass concentration) and good guesses for the initial conditions of the parameter estimation algorithm. These results are then used by the successive quadratic programming (SQP) technique to calculate the kinetic parameters and yield coefficients of the anaerobic digestion process. The model, provided with the estimated parameters, is tested with experimental data from a pilot-scale fixed bed reactor treating raw industrial wine distillery wastewater. It is shown that SQP reaches a fast and accurate estimation of the kinetic parameters despite highly noise-corrupted experimental data and time-varying input variables. A statistical analysis is also performed to validate the combined estimation method. Finally, a comparison between the proposed method and the traditional Marquardt technique shows that both yield similar results; however, the calculation time of the traditional technique is considerably higher than that of the proposed method.

  18. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameter estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  19. A Cramer-Rao Type Lower Bound for Essentially Unbiased Parameter Estimation

    DTIC Science & Technology

    1992-01-03

    should apply to the entire class of estimators with acceptably small bias. In this report a new CR-type lower bound is derived which takes into account a...unbiased CR bound. If an upper bound on the bias gradient of the estimator is specified, our lower bound on estimator variance can subsequently be applied ...multiple parameters. Finally, Section 8 applies the new bound to covariance estimation for a pair of HD Gaussian sequences.

  20. Algebraic parameters identification of DC motors: methodology and analysis

    NASA Astrophysics Data System (ADS)

    Becedas, J.; Mamani, G.; Feliu, V.

    2010-10-01

    A fast, non-asymptotic, algebraic parameter identification method is applied to an uncertain DC motor to estimate its uncertain parameters: the viscous friction coefficient and the inertia. In this work, the methodology is developed and its convergence analysed; a comparative study between the traditional recursive least squares method and the algebraic identification method is carried out; and the behaviour of the estimator in a noisy system is examined. Computer simulations were carried out to validate the suitability of the identification algorithm.
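    For reference, the traditional recursive least squares baseline mentioned above can be sketched as follows (the first-order motor model and all symbols are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def rls(Phi, y, lam=1.0):
    """Recursive least squares with forgetting factor lam."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = 1e6 * np.eye(n)                          # large initial covariance
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain
        theta = theta + k * (yk - phi @ theta)   # correct by prediction error
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta

# discretized first-order motor model: y[k] = a*y[k-1] + b*u[k-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
a_true, b_true = 0.9, 0.5
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

Phi = np.column_stack([y[:-1], u[:-1]])
print(rls(Phi, y[1:]))
```

With noise-free data the estimate converges to the true (a, b); the algebraic method's selling point is its non-asymptotic behaviour on short, noisy records, which this recursion does not share.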

  1. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy-to-implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
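    A minimal sketch of the kind of easy-to-implement search technique discussed, here a (1+1) evolution strategy minimizing a toy stand-in for the model-observation misfit (the cost function and all settings are illustrative, not the paper's methods):

```python
import numpy as np

def cost(p, target=np.array([0.3, 0.7, 1.2])):
    """Toy stand-in for a model-observation misfit; in the real
    application each evaluation is an expensive 3-D simulation."""
    return float(np.sum((p - target) ** 2))

rng = np.random.default_rng(7)
p = rng.uniform(0.0, 2.0, 3)            # random initial parameter vector
best = cost(p)
sigma = 0.3                              # mutation step size
for _ in range(300):
    cand = p + sigma * rng.standard_normal(3)
    c = cost(cand)
    if c < best:                         # keep only improvements
        p, best = cand, c
    sigma *= 0.99                        # anneal the step size
print(best)
```

The budget of a few hundred evaluations mirrors the ∼100-simulation budget in the abstract; with expensive simulations, each `cost` call is the dominant expense.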

  2. NASA Dryden's experience in parameter estimation and its uses in flight test

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1982-01-01

    An explanation of the parameter estimation method used at the Dryden Flight Research Facility is presented, and an overview is provided of experience related to the employment of this method, taking into account the utilization of this experience in flight tests. According to a definition of the aircraft parameter estimation problem, the system investigated is assumed to be modeled by a set of dynamic equations containing unknown parameters. To determine the values of the unknown parameters, the system is excited by a suitable input, and the input and actual system response are measured. The values of the unknown parameters are then inferred, based on the requirement that the model response to the given input match the actual system response. Examples of parameter estimation in flight test are discussed, giving attention to the F-14 fighter, the HiMAT (highly maneuverable aircraft technology) vehicle, and the Space Shuttle.

  3. On-line Parameter Estimation of Time-varying Systems by Radial Basis Function Networks

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yasuhide; Tanaka, Shinichi; Okita, Tsuyoshi

    This paper proposes a new on-line parameter estimation method with radial basis function networks for time-varying linear discrete-time systems. The time-varying parameters of the system are expressed with radial basis function networks. These parameters are estimated by a nonlinear optimization technique, and rules for setting the initial values in the optimization are proposed. The system parameters are usually unknown because they change with operating conditions. It is therefore reasonable to adjust the structures of the radial basis function networks according to the change of parameters. The minimum description length criterion studied in coding theory is applied to select the network structures. It is demonstrated in digital simulations that the proposed on-line estimation method substantially reduces the computation time for time-varying parameter systems.
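    A simplified, off-line variant of the idea can be sketched: if the radial basis function centers and widths are fixed, the network weights enter the model linearly and can be fitted by ordinary least squares (the paper instead uses nonlinear optimization with on-line updating; all settings below are illustrative):

```python
import numpy as np

# time-varying gain a(t) in y[k] = a(t_k) * u[k] + noise
rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 400)
a_true = 1.0 + 0.5 * np.sin(2 * np.pi * t)
u = rng.standard_normal(400)
y = a_true * u + 0.01 * rng.standard_normal(400)

# Gaussian radial basis functions with fixed centers and width
centers = np.linspace(0.0, 1.0, 10)
width = 0.1
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# with centers fixed, the weights enter linearly: fit by least squares
w, *_ = np.linalg.lstsq(Phi * u[:, None], y, rcond=None)
a_hat = Phi @ w                          # reconstructed parameter trajectory
print(np.max(np.abs(a_hat - a_true)))
```

Choosing the number of centers is the structure-selection problem that the paper addresses with the minimum description length criterion.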

  4. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A. J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
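    The state-augmentation idea underlying such filters can be illustrated on a scalar toy problem (an extended Kalman filter on the augmented state, not the SPEKF algorithm itself, which uses exact moment formulas; all settings are assumptions):

```python
import numpy as np

def augmented_ekf(ys, q=0.01, r=0.01):
    """EKF on the augmented state z = [x, a] for the model
    x' = a*x + w, y = x + v; the unknown coefficient a is given a
    small random-walk variance so the filter can track it."""
    z = np.array([0.0, 0.0])
    P = np.diag([1.0, 1.0])
    Q = np.diag([q, 1e-5])               # tiny process noise on a
    H = np.array([[1.0, 0.0]])
    for y in ys:
        F = np.array([[z[1], z[0]],      # Jacobian of [a*x, a]
                      [0.0, 1.0]])
        z = np.array([z[1] * z[0], z[1]])
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + r      # innovation variance
        K = (P @ H.T).ravel() / S        # Kalman gain
        z = z + K * (y - z[0])
        P = P - np.outer(K, H @ P)
    return z[1]

rng = np.random.default_rng(2)
a_true, x = 0.8, 0.0
ys = []
for _ in range(2000):
    x = a_true * x + 0.1 * rng.standard_normal()
    ys.append(x + 0.1 * rng.standard_normal())
print(augmented_ekf(ys))
```

The linearization step is where model error enters; SPEKF-type constructions avoid it by propagating the joint mean and covariance exactly for their test models.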

  5. Simultaneous estimation of land surface scheme states and parameters using the ensemble Kalman filter: identical twin experiments

    NASA Astrophysics Data System (ADS)

    Nie, S.; Zhu, J.; Luo, Y.

    2011-08-01

    The performance of the ensemble Kalman filter (EnKF) in soil moisture assimilation applications is investigated in the context of simultaneous state-parameter estimation in the presence of uncertainties from model parameters, soil moisture initial condition and atmospheric forcing. A physically based land surface model is used for this purpose. Using a series of identical twin experiments in two kinds of initial parameter distribution (IPD) scenarios, the narrow IPD (NIPD) scenario and the wide IPD (WIPD) scenario, model-generated near surface soil moisture observations are assimilated to estimate soil moisture state and three hydraulic parameters (the saturated hydraulic conductivity, the saturated soil moisture suction and a soil texture empirical parameter) in the model. The estimation of single imperfect parameter is successful with the ensemble mean value of all three estimated parameters converging to their true values respectively in both NIPD and WIPD scenarios. Increasing the number of imperfect parameters leads to a decline in the estimation performance. A wide initial distribution of estimated parameters can produce improved simultaneous multi-parameter estimation performances compared to that of the NIPD scenario. However, when the number of estimated parameters increased to three, not all parameters were estimated successfully for both NIPD and WIPD scenarios. By introducing constraints between estimated hydraulic parameters, the performance of the constrained three-parameter estimation was successful, even if temporally sparse observations were available for assimilation. The constrained estimation method can reduce RMSE much more in soil moisture forecasting compared to the non-constrained estimation method and traditional non-parameter-estimation assimilation method. The benefit of this method in estimating all imperfect parameters simultaneously can be fully demonstrated when the corresponding non-constrained estimation method displays a relatively
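    The joint state-parameter EnKF mechanism can be sketched on a scalar toy model (perturbed-observation analysis with a jittered parameter to limit ensemble collapse; all settings are illustrative assumptions, not the land surface setup):

```python
import numpy as np

rng = np.random.default_rng(5)
a_true, q, r, N = 0.9, 0.05, 0.1, 100

# ensemble of augmented vectors [x, a]; wide prior on the parameter
ens = np.column_stack([rng.standard_normal(N),
                       0.5 + 0.2 * rng.standard_normal(N)])
x_t = 0.0
for _ in range(500):
    # truth and noisy observation of the state only
    x_t = a_true * x_t + np.sqrt(q) * rng.standard_normal()
    y = x_t + np.sqrt(r) * rng.standard_normal()
    # forecast: each member propagates with its own parameter;
    # a small jitter on a keeps the parameter ensemble from collapsing
    ens[:, 0] = ens[:, 1] * ens[:, 0] + np.sqrt(q) * rng.standard_normal(N)
    ens[:, 1] += 0.01 * rng.standard_normal(N)
    # analysis: Kalman update from the sample covariance (perturbed obs)
    C = np.cov(ens.T)                    # 2x2 sample covariance of [x, a]
    K = C[:, 0] / (C[0, 0] + r)          # gain for observing x only
    innov = y + np.sqrt(r) * rng.standard_normal(N) - ens[:, 0]
    ens += innov[:, None] * K[None, :]
print(ens[:, 1].mean())                  # ensemble-mean parameter estimate
```

The parameter is updated purely through its sample cross-covariance with the observed state, which is the mechanism the identical twin experiments in the abstract exercise.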

  6. A subspace-based parameter estimation algorithm for Nakagami-m fading channels

    NASA Astrophysics Data System (ADS)

    Dianat, Sohail; Rao, Raghuveer

    2010-04-01

    Estimation of channel fading parameters is an important task in the design of communication links, such as those employing maximum ratio combining (MRC); the MRC weights are directly related to the fading channel coefficients. In this paper, we propose a subspace-based algorithm for estimating the parameters of Nakagami-m fading channels in the presence of additive white Gaussian noise. Comparisons of our proposed approach are made with other techniques available in the literature. The performance of the algorithm with respect to the Cramer-Rao bound (CRB) is investigated. Computer simulation results for different signal-to-noise ratios (SNRs) are presented.
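    As a simple baseline (not the subspace method of the paper), the Nakagami-m parameter can be estimated from sample moments of the instantaneous power:

```python
import numpy as np

def nakagami_m_moments(x):
    """Inverse-normalized-variance estimator: m = E[P]^2 / Var(P),
    with P = X^2 the instantaneous power of the fading amplitude."""
    p = x ** 2
    return p.mean() ** 2 / p.var()

rng = np.random.default_rng(1)
m_true, omega = 2.0, 1.0
# Nakagami-m amplitudes via the gamma relation: X = sqrt(G), G ~ Gamma(m, omega/m)
x = np.sqrt(rng.gamma(m_true, omega / m_true, size=200_000))
print(nakagami_m_moments(x))
```

This moment estimator assumes noise-free amplitude samples; handling the additive Gaussian noise case is what motivates the subspace approach.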

  7. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
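    The gamma-prior case has a closed form: the gamma distribution is conjugate to the Poisson likelihood, so the Bayes estimator of the intensity under squared-error loss is the posterior mean (the hyperparameters a, b below are assumed, not taken from the paper):

```python
import numpy as np

def bayes_poisson_mean(x, a=1.0, b=1.0):
    """Posterior mean of the Poisson intensity under a Gamma(a, b)
    prior (shape a, rate b): the posterior is Gamma(a + sum x, b + n)."""
    x = np.asarray(x)
    return (a + x.sum()) / (b + len(x))

x = [3, 5, 4, 6, 2]
print(bayes_poisson_mean(x))   # shrunk from the MLE (the sample mean) toward a/b
```

The shrinkage toward the prior mean is the source of the smaller mean-squared error that the Monte Carlo comparison in the abstract reports.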

  8. Parameters estimation of sandwich beam model with rigid polyurethane foam core

    NASA Astrophysics Data System (ADS)

    Barbieri, Nilson; Barbieri, Renato; Winikes, Luiz Carlos

    2010-02-01

    In this work, the physical parameters of sandwich beams made by combining hot-rolled steel, rigid polyurethane foam and high-impact polystyrene, used in the assembly of household refrigerators and food freezers, are estimated using measured and numerical frequency response functions (FRFs). The mathematical models are obtained using the finite element method (FEM) and the Timoshenko beam theory. The physical parameters are estimated using the amplitude correlation coefficient and a genetic algorithm (GA). The experimental data are obtained using an impact hammer and four accelerometers placed along the sample (a cantilevered beam). The parameters estimated are the Young's modulus and the loss factor of the rigid polyurethane foam and the high-impact polystyrene.

  9. Estimability and dependency analysis of model parameters based on delay coordinates

    NASA Astrophysics Data System (ADS)

    Schumann-Bischoff, J.; Luther, S.; Parlitz, U.

    2016-09-01

    In data-driven system identification, values of parameters and not observed variables of a given model of a dynamical system are estimated from measured time series. We address the question of estimability and redundancy of parameters and variables, that is, whether unique results can be expected for the estimates or whether, for example, different combinations of parameter values would provide the same measured output. This question is answered by analyzing the null space of the linearized delay coordinates map. Examples with zero-dimensional, one-dimensional, and two-dimensional null spaces are presented employing the Hindmarsh-Rose model, the Colpitts oscillator, and the Rössler system.

  10. GEODYN system description, volume 1. [computer program for estimation of orbit and geodetic parameters

    NASA Technical Reports Server (NTRS)

    Chin, M. M.; Goad, C. C.; Martin, T. V.

    1972-01-01

    A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.

  11. Modeling of Aircraft Unsteady Aerodynamic Characteristics/Part 3 - Parameters Estimated from Flight Data. Part 3; Parameters Estimated from Flight Data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Noderer, Keith D.

    1996-01-01

    A nonlinear least squares algorithm for aircraft parameter estimation from flight data was developed. The postulated model for the analysis represented the longitudinal, short-period motion of an aircraft. The corresponding aerodynamic model equations included indicial functions (unsteady terms) and conventional stability and control derivatives. The indicial functions were modeled as simple exponential functions. The estimation procedure was applied in five examples. Four of the examples used simulated and flight data from small-amplitude maneuvers of the F-18 HARV and X-31A aircraft. In the fifth example a rapid, large-amplitude maneuver of the X-31 drop model was analyzed. From the analysis of the small-amplitude maneuvers it was found that the model with conventional stability and control derivatives was adequate. Also, parameter estimation from a rapid, large-amplitude maneuver did not reveal any noticeable presence of unsteady aerodynamics.

  12. The Precision of Parameter Estimation for Dephasing Model Under Squeezed Reservoir

    NASA Astrophysics Data System (ADS)

    Wu, Shao-xiong; Yu, Chang-shui

    2017-04-01

    We study the precision of parameter estimation for a dephasing model under a squeezed environment. We analytically calculate the dephasing factor γ(t) and obtain the analytic quantum Fisher information (QFI) for the amplitude parameter α and the phase parameter ϕ. It is shown that the QFI for the amplitude parameter α is invariant throughout the whole process, while the QFI for the phase parameter ϕ strongly depends on the reservoir squeezing; in particular, the QFI can be enhanced for appropriate squeezing parameters r and θ. Finally, we also investigate the effects of temperature on the QFI.

  13. Parameter estimation by fixed point of function of information processing intensity

    NASA Astrophysics Data System (ADS)

    Jankowski, Robert; Makowski, Marcin; Piotrowski, Edward W.

    2014-12-01

    We present a new method of estimating the dispersion of a distribution which is based on the surprising property of a function that measures information processing intensity. It turns out that this function has a maximum at its fixed point. The fixed-point equation is used to estimate the parameter of the distribution that is of interest to us. The main result consists in showing that only part of the available experimental data is relevant for the parameter estimation process. We illustrate the estimation method using the example of an exponential distribution.

  14. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method that uses a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models, with the goal of improving on the smoothing-based two-stage pseudo-least-squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that the new estimator is clearly better than the pseudo-least-squares estimator in estimation accuracy, at a small additional computational cost. An application to immune cell kinetics and trafficking during influenza infection further illustrates the benefits of the proposed method.
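    As background, the two-stage pseudo-least-squares baseline that the paper improves on can be sketched as follows: smooth the data and estimate derivatives (here a Savitzky-Golay filter stands in for local polynomial regression), then fit the ODE parameters by least squares on the estimated derivatives. The ODE x' = -θx and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

# Stage 1: smooth noisy observations of x(t) and estimate dx/dt
# via local polynomial (Savitzky-Golay) fitting.
rng = np.random.default_rng(1)
theta_true = 0.5                       # assumed true parameter of x' = -theta*x
t = np.linspace(0.0, 4.0, 201)
dt = t[1] - t[0]
x_obs = np.exp(-theta_true * t) + 0.005 * rng.standard_normal(t.size)

x_hat = savgol_filter(x_obs, window_length=21, polyorder=3)
dx_hat = savgol_filter(x_obs, window_length=21, polyorder=3, deriv=1, delta=dt)

# Stage 2: plug the smoothed state and derivative into the ODE and
# solve the linear least-squares problem dx_hat ≈ -theta * x_hat.
theta_hat = -np.sum(dx_hat * x_hat) / np.sum(x_hat * x_hat)
print(theta_hat)
```

    The paper's contribution is to constrain stage 1 with the ODE itself rather than smoothing and fitting in two independent steps.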

  15. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter-smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples using both flight and simulated data.
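    Step (1), state estimation with a filter-smoother, can be sketched for a scalar random-walk state observed in noise: a forward Kalman filter followed by a backward Rauch-Tung-Striebel smoothing pass. The model and noise levels are illustrative assumptions, not the rotorcraft formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, r = 100, 0.01, 0.25              # steps, process and measurement noise variances
x = np.cumsum(np.sqrt(q) * rng.standard_normal(n))   # random-walk state
z = x + np.sqrt(r) * rng.standard_normal(n)          # noisy measurements

# Forward Kalman filter (state transition is identity for a random walk).
xf = np.zeros(n); Pf = np.zeros(n)     # filtered mean and variance
xp = np.zeros(n); Pp = np.zeros(n)     # one-step predictions
x_est, P = 0.0, 1.0
for k in range(n):
    xp[k], Pp[k] = x_est, P + q        # predict
    K = Pp[k] / (Pp[k] + r)            # Kalman gain
    x_est = xp[k] + K * (z[k] - xp[k]) # measurement update
    P = (1.0 - K) * Pp[k]
    xf[k], Pf[k] = x_est, P

# Backward Rauch-Tung-Striebel smoother.
xs = xf.copy(); Ps = Pf.copy()
for k in range(n - 2, -1, -1):
    C = Pf[k] / Pp[k + 1]              # smoother gain
    xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + C**2 * (Ps[k + 1] - Pp[k + 1])
print(Ps[0], Pf[0])                    # smoothed variance never exceeds filtered
```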

  16. Estimating effective model parameters for heterogeneous unsaturated flow using error models for bias correction

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Huisman, J. A.

    2012-06-01

    Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models is investigated using data from three virtual reality experiments and one real-world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models of the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters with low predictive power for the average soil moisture state if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for the bias introduced by the soil structure not taken into account in the first estimation. Three different error models, requiring different amounts of prior knowledge about the heterogeneous structure, were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.

  17. Effect of Medium Symmetries on Limiting the Number of Parameters Estimated with Polarimetric SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1999-01-01

    The addition of interferometric backscattering pairs to the conventional polarimetric synthetic aperture radar (SAR) data over forests and other vegetated areas increases the dimensionality of the data space, in principle enabling the estimation of a larger number of parameters.

  18. Likelihood parameter estimation for calibrating a soil moisture using radar backscatter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assimilating soil moisture information contained in synthetic aperture radar imagery into land surface model predictions can be done using a calibration, or parameter estimation, approach. The presence of speckle, however, necessitates aggregating backscatter measurements over large land areas in or...

  19. Distributed parameter estimation for NASA Mini-Mast truss through displacement measurements

    NASA Technical Reports Server (NTRS)

    Huang, Jen-Kuang; Shen, Ji-Yao; Taylor, Lawrence W., Jr.

    1991-01-01

    Most methods of system identification for large flexible structures have so far been based on the lumped-parameter approach. Because of the considerable computational burden caused by the large number of unknown parameters, the distributed-parameter approach, which greatly decreases the number of unknowns, has been investigated. In this paper a distributed-parameter model for estimating the modal characteristics of the NASA Mini-Mast truss is formulated. Both Bernoulli-Euler and Timoshenko beam equations are used to characterize the lateral bending vibrations of the truss. The measurement of the lateral displacement at the tip of the truss is provided to a maximum likelihood estimator. Closed-form solutions of the partial differential equations and closed-form expressions for the sensitivity functions are derived so that the estimation algorithm is highly efficient. The estimates obtained from test data using the Timoshenko beam model are found to be comparable to those derived from finite element analysis.
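    As a small illustration of the closed-form modal analysis such a distributed-parameter model builds on, the sketch below computes the first natural frequencies of a uniform Bernoulli-Euler cantilever from the characteristic equation cos(βL)cosh(βL) = -1; the beam properties are illustrative assumptions, not those of the Mini-Mast truss.

```python
import numpy as np
from scipy.optimize import brentq

def cantilever_beta_l(n_modes):
    """Roots of cos(x)*cosh(x) + 1 = 0 (clamped-free Bernoulli-Euler beam)."""
    f = lambda x: np.cos(x) * np.cosh(x) + 1.0
    roots = []
    # The n-th root lies near (2n - 1)*pi/2; bracket around that guess.
    for n in range(1, n_modes + 1):
        guess = (2 * n - 1) * np.pi / 2
        roots.append(brentq(f, guess - 1.0, guess + 1.0))
    return np.array(roots)

# Illustrative uniform beam properties (SI units).
E, I = 70e9, 1e-8               # Young's modulus, area moment of inertia
rho, A, L = 2700.0, 1e-4, 2.0   # density, cross-section area, length

bl = cantilever_beta_l(3)                            # ≈ [1.8751, 4.6941, 7.8548]
omega = (bl / L) ** 2 * np.sqrt(E * I / (rho * A))   # natural frequencies, rad/s
print(bl, omega / (2 * np.pi))                       # also shown in Hz
```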

  20. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from the time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
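    The relaxation structure is analogous to the classical output-error method, which can be sketched for a scalar linear model x' = a·x + b·u: simulate the model output for candidate parameters and minimize the sum of squared output residuals. The dynamics, input, and numbers below are illustrative assumptions, not the aircraft model.

```python
import numpy as np
from scipy.optimize import least_squares

dt, n = 0.02, 300
t = np.arange(n) * dt
u = np.sin(2.0 * np.pi * 0.5 * t)      # illustrative input signal

def simulate(theta):
    """Euler-integrate x' = a*x + b*u and return the output history."""
    a, b = theta
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x

rng = np.random.default_rng(3)
theta_true = np.array([-1.2, 0.7])
z = simulate(theta_true) + 0.002 * rng.standard_normal(n)   # measured output

# Output error: the residual is measured minus simulated output, so the
# optimizer matches the model's simulated response rather than equation errors.
res = least_squares(lambda th: simulate(th) - z, x0=[-0.5, 0.5])
print(res.x)
```

    The filter-error method extends this by replacing the open-loop simulation with a state estimator, so that process noise (turbulence) is accounted for alongside measurement noise.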