Science.gov

Sample records for recursive parameter estimation

  1. Chandrasekhar-type algorithms for fast recursive estimation in linear systems with constant parameters

    NASA Technical Reports Server (NTRS)

    Choudhury, A. K.; Djalali, M.

    1975-01-01

    In the recursive method proposed here, the gain matrix for the Kalman filter and the covariance of the state vector are computed not via the Riccati equation, but from certain other differential equations of Chandrasekhar type. The 'invariant imbedding' idea resulted in the reduction of the basic boundary value problem of transport theory to an equivalent initial value system, a significant computational advance. Initial numerical experience showed that the method offers some computational savings and is less vulnerable to loss of positive definiteness of the covariance matrix.
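
For contrast, the conventional Riccati-based gain computation that the Chandrasekhar-type equations replace can be sketched as follows (a generic illustration with made-up system matrices, not the authors' algorithm):

```python
import numpy as np

# Time-invariant system: x' = A x + w,  y = C x + v  (illustrative matrices)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)    # process noise covariance
R = np.array([[0.1]])   # measurement noise covariance

P = np.eye(2)           # state covariance
for _ in range(50):
    # Kalman gain and Riccati covariance update (predictor form)
    S = C @ P @ C.T + R
    K = A @ P @ C.T @ np.linalg.inv(S)
    P = A @ P @ A.T + Q - K @ S @ K.T
# K now approximates the steady-state predictor gain
```

The Chandrasekhar approach propagates low-rank increments of P instead of the full matrix, which is where the computational savings noted above come from.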

  2. Non-linear parameter estimation with Volterra series using the method of recursive iteration through harmonic probing

    NASA Astrophysics Data System (ADS)

    Chatterjee, Animesh; Vyas, Nalinaksh S.

    2003-12-01

    Volterra series provides a platform for non-linear response representation and definition of higher order frequency response functions (FRFs). It has been extensively used in non-parametric system identification through measurement of first and higher order FRFs. A parametric system identification approach has been adopted in the present study. The series response structure is explored for parameter estimation of polynomial form non-linearity. First and higher order frequency response functions are extracted from the measured response harmonic amplitudes through recursive iteration. Relationships between higher order FRFs and first order FRF are then employed to estimate the non-linear parameters. Excitation levels are selected for minimum series approximation error and the number of terms in the series is controlled according to convergence requirement. The problem of low signal strength of higher harmonics is investigated and a measurability criterion is proposed for selection of excitation level and range of excitation frequency. The procedure is illustrated through numerical simulation for a Duffing oscillator. Robustness of the estimation procedure in the presence of measurement noise is also investigated.

  3. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
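
The underlying recursive least-squares update, together with a look at residual coloring, can be sketched as follows (synthetic data; the paper's autocorrelation-based uncertainty correction itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))
theta_true = np.array([1.0, -2.0, 0.5])
# Colored (AR(1)) residuals, as routinely encountered in flight data
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(scale=0.1)
y = X @ theta_true + e

theta = np.zeros(p)
P = 1e3 * np.eye(p)                  # parameter covariance
for t in range(n):
    x = X[t]
    k = P @ x / (1.0 + x @ P @ x)    # RLS gain
    theta = theta + k * (y[t] - x @ theta)
    P = P - np.outer(k, x @ P)

r = y - X @ theta
rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]  # lag-1 residual autocorrelation
```

A clearly positive `rho1` signals colored residuals, the situation in which white-residual uncertainty formulas understate the true parameter scatter.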

  4. Online state of charge and model parameters estimation of the LiFePO4 battery in electric vehicles using multiple adaptive forgetting factors recursive least-squares

    NASA Astrophysics Data System (ADS)

    Duong, Van-Huan; Bastawrous, Hany Ayad; Lim, KaiChin; See, Khay Wai; Zhang, Peng; Dou, Shi Xue

    2015-11-01

    This paper deals with the trade-off between simplicity and accuracy of LiFePO4 battery state estimation in the electric vehicle (EV) battery management system (BMS). State of charge (SOC) and state of health (SOH) are normally obtained by estimating the open circuit voltage (OCV) and the internal resistance of the equivalent electrical circuit model of the battery, respectively. The difficulty of parameter estimation arises from the parameters' complicated variations and different dynamics, which require sophisticated algorithms to estimate multiple parameters simultaneously and therefore demand heavy computational resources. In this paper, we propose a novel technique which employs a simplified model and multiple adaptive forgetting factors recursive least-squares (MAFF-RLS) estimation to accurately capture the real-time variations and the different dynamics of the parameters while retaining computational simplicity. The validity of the proposed method is verified through two standard driving cycles, namely the Urban Dynamometer Driving Schedule and the New European Driving Cycle. The proposed method not only estimated the SOC with an absolute error of less than 2.8% but also characterized the battery model parameters accurately.
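
The forgetting-factor RLS core of such schemes can be sketched with a single fixed factor (the paper adapts multiple factors independently; `lam` and all data here are illustrative):

```python
import numpy as np

def rls_ff(X, y, lam=0.98):
    """RLS with a single forgetting factor lam.  (The paper adapts several
    factors, one per parameter group; this is a simplified sketch.)"""
    n, p = X.shape
    theta = np.zeros(p)
    P = 1e3 * np.eye(p)
    for t in range(n):
        x = X[t]
        k = P @ x / (lam + x @ P @ x)
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
    return theta

# Track a slowly drifting parameter, as battery parameters drift with SOC
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
drift = np.linspace(0.0, 1.0, n)            # slope drifts from 0 to 1
y = 2.0 + drift * X[:, 1] + rng.normal(scale=0.05, size=n)
theta = rls_ff(X, y)                        # ends near [2.0, 1.0]
```

Forgetting discounts old data exponentially, so the estimate follows the drifting slope instead of averaging over its whole history.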

  5. Parameter estimation of a three-axis spacecraft simulator using recursive least-squares approach with tracking differentiator and Extended Kalman Filter

    NASA Astrophysics Data System (ADS)

    Xu, Zheyao; Qi, Naiming; Chen, Yukun

    2015-12-01

    Spacecraft simulators are widely used to study the dynamics, guidance, navigation, and control of a spacecraft on the ground. A spacecraft simulator can have three rotational degrees of freedom by using a spherical air-bearing to simulate a frictionless and micro-gravity space environment. The moment of inertia and center of mass are essential for control system design of ground-based three-axis spacecraft simulators. Unfortunately, they cannot be known precisely. This paper presents two approaches to estimate inertia parameters: a recursive least-squares (RLS) approach with tracking differentiator (TD), and an Extended Kalman Filter (EKF) method. The tracking differentiator filters the noise coupled with the measured signals and generates derivatives of the measured signals. Combining two TD filters in series yields the angular accelerations required by RLS (TD-TD-RLS). Another method, which does not need to estimate the angular accelerations, uses the integrated form of the dynamics equation. An extended TD (ETD) filter, which can also generate the integral of a function of the signals, is presented for RLS (denoted ETD-RLS). States and inertia parameters are estimated simultaneously using the EKF. The observability is analyzed. All proposed methods are illustrated by simulations and experiments.
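
The tracking-differentiator idea can be sketched with a simple linear second-order TD (the paper's TD may be a nonlinear variant; this linear form, with an illustrative gain `r`, only conveys the structure):

```python
import numpy as np

def tracking_differentiator(u, dt, r=50.0):
    """Linear second-order tracking differentiator (a sketch of the idea):
    v1 tracks the input signal, v2 tracks its derivative."""
    v1, v2 = u[0], 0.0
    out = []
    for uk in u:
        v1, v2 = (v1 + dt * v2,
                  v2 + dt * (-r * r * (v1 - uk) - 2.0 * r * v2))
        out.append((v1, v2))
    return np.array(out)

# Noisy attitude-like signal; recover its rate without direct differencing
t = np.arange(0.0, 5.0, 0.001)
sig = np.sin(2 * t) + 0.01 * np.random.default_rng(7).normal(size=t.size)
est = tracking_differentiator(sig, dt=0.001)
# est[:, 1] approximates d/dt sin(2t) = 2 cos(2t) after a short transient
```

Cascading two such filters, as in TD-TD-RLS, would differentiate twice to obtain angular accelerations.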

  6. Recursive least square vehicle mass estimation based on acceleration partition

    NASA Astrophysics Data System (ADS)

    Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong

    2014-05-01

    Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal, and it removes the road grade from the longitudinal dynamics of the vehicle. Then, two different recursive least square method (RLSM) schemes are proposed to estimate the driving resistance and the mass independently, based on the acceleration partition under different conditions. A 6-DOF dynamic model of a four in-wheel-motor vehicle is built to assist in the design of the algorithm and in the setting of the parameters. The acceleration limits are determined to not only reduce the estimation error but also ensure enough data for the resistance estimation and mass estimation in some critical situations. A modification of the algorithm is also discussed to improve the result of the mass estimation. Experimental data on asphalt road, plastic runway, gravel road, and sloping roads are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the error of the estimation process to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of road surface and gradient changes and that it may be valuable in engineering applications.

  7. Comparison of recursive estimation techniques for position tracking of radioactive sources

    SciTech Connect

    Muske, K.; Howse, J.

    2000-09-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.

  8. Recursive bias estimation for high dimensional smoothers

    SciTech Connect

    Hengartner, Nicolas W; Matzner-lober, Eric; Cornillon, Pierre - Andre

    2008-01-01

    In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of this bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L{sub 2}-Boosting. We apply our method to simulated and real data and show that it compares favorably with existing procedures.
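
The iterative bias correction can be sketched with a Nadaraya-Watson smoother standing in for a generic linear smoother (bandwidth, data, and iteration count are illustrative):

```python
import numpy as np

def kernel_smoother(x, h):
    """Nadaraya-Watson smoothing matrix S, so that y_hat = S @ y."""
    d = (x[:, None] - x[None, :]) / h
    W = np.exp(-0.5 * d**2)
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(-2, 2, n))
y = np.sin(2 * x) + rng.normal(scale=0.2, size=n)

S = kernel_smoother(x, h=0.6)   # deliberately over-smoothed: low variance, big bias
fit = S @ y
for _ in range(10):             # iteratively smooth the residuals and
    fit = fit + S @ (y - fit)   # add the bias estimate back
```

Each pass adds a smoothed version of the current residuals, which is exactly the L2-Boosting connection mentioned above: the k-th iterate equals (I - (I - S)^(k+1)) y.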

  9. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historical reasons for applying these techniques to digital images are related to the images' scanned nature: the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even today, when large computer memories make the full scanned image available to the processor at any time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both, and will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties, such as average signal intensity, signal-to-noise ratio, and autocorrelation function, to drive the adaptive processor. Software was developed under IRAF and as such will be made available to interested users.

  10. Vision-based recursive estimation of rotorcraft obstacle locations

    NASA Technical Reports Server (NTRS)

    Leblanc, D. J.; Mcclamroch, N. H.

    1992-01-01

    The authors address vision-based passive ranging during nap-of-the-earth (NOE) rotorcraft flight. They consider the problem of estimating the relative location of identifiable features on nearby obstacles, assuming a sequence of noisy camera images and imperfect measurements of the camera's translation and rotation. An iterated extended Kalman filter is used to provide recursive range estimation. The correspondence problem is simplified by predicting and tracking each feature's image within the Kalman filter framework. Simulation results are presented which show convergent estimates and generally successful feature point tracking. Estimation performance degrades for features near the optical axis and for accelerating motions. Image tracking is also sensitive to angular rate.

  11. A Precision Recursive Estimate for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B.

    1980-01-01

    A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite to satellite tracking types as well as satellite altimetry. It was tested on simulated data which contained significant modeling errors and the results clearly demonstrate the superiority of the program compared to batch estimation.

  12. Recursive Estimation for the Tracking of Radioactive Sources

    SciTech Connect

    Howse, J.W.; Muske, K.R.; Ticknor, L.O.

    1999-02-01

    This paper describes a recursive estimation algorithm used for tracking the physical location of radioactive sources in real-time as they are moved around in a facility. The algorithm is a nonlinear least squares estimation that minimizes the change in the source location and the deviation between measurements and model predictions simultaneously. The measurements used to estimate position consist of four count rates reported by four different gamma ray detectors. There is an uncertainty in the source location due to the variance of the detected count rate. This work represents part of a suite of tools which will partially automate security and safety assessments, allow some assessments to be done remotely, and provide additional sensor modalities with which to make assessments.

  13. Recursive estimation for the tracking of radioactive sources

    SciTech Connect

    Howse, J.W.; Ticknor, L.O.; Muske, K.R.

    1998-12-31

    This paper describes a recursive estimation algorithm used for tracking the physical location of radioactive sources in real-time as they are moved around in a facility. The algorithm is related to a nonlinear least squares estimation that minimizes the change in the source location and the deviation between measurements and model predictions simultaneously. The measurements used to estimate position consist of four count rates reported by four different gamma ray detectors. There is an uncertainty in the source location due to the large variance of the detected count rate. This work represents part of a suite of tools which will partially automate security and safety assessments, allow some assessments to be done remotely, and provide additional sensor modalities with which to make assessments.

  14. Grid Based Nonlinear Filtering Revisited: Recursive Estimation & Asymptotic Optimality

    NASA Astrophysics Data System (ADS)

    Kalogerias, Dionysios S.; Petropulu, Athina P.

    2016-08-01

    We revisit the development of grid based recursive approximate filtering of general Markov processes in discrete time, partially observed in conditionally Gaussian noise. The grid based filters considered rely on two types of state quantization: the Markovian type and the marginal type. We propose a set of novel, relaxed sufficient conditions ensuring strong and fully characterized pathwise convergence of these filters to the respective MMSE state estimator. In particular, for marginal state quantizations, we introduce the notion of conditional regularity of stochastic kernels, which, to the best of our knowledge, constitutes the most relaxed condition proposed under which asymptotic optimality of the respective grid based filters is guaranteed. Further, we extend our convergence results to include filtering of bounded and continuous functionals of the state, as well as recursive approximate state prediction. For both Markovian and marginal quantizations, the development of the respective grid based filters relies more on linear-algebraic techniques and less on measure theoretic arguments, making the presentation considerably shorter and technically simpler.

  15. Round-off error propagation in four generally applicable, recursive, least-squares-estimation schemes

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    The numerical robustness of four generally applicable, recursive, least-squares-estimation schemes is analyzed by means of a theoretical round-off propagation study. This study highlights a number of practically interesting insights into widely used recursive least-squares schemes. These insights have been confirmed in an experimental study as well.

  16. Time-varying modal parameters identification of a spacecraft with rotating flexible appendage by recursive algorithm

    NASA Astrophysics Data System (ADS)

    Ni, Zhiyu; Mu, Ruinan; Xun, Guangbin; Wu, Zhigang

    2016-01-01

    The rotation of a spacecraft's flexible appendage may cause changes in modal parameters. For this time-varying system, the computational cost of the frequently used singular value decomposition (SVD) identification method is high. Some control problems, such as self-adaptive control, need the latest modal parameters to update the controller parameters in time. In this paper, the projection approximation subspace tracking (PAST) recursive algorithm is applied as an alternative method to identify the time-varying modal parameters. This method avoids the SVD by signal subspace projection and improves computational efficiency. To verify the ability of this recursive algorithm in spacecraft modal parameter identification, a spacecraft model with a rapidly rotating appendage, the Soil Moisture Active/Passive (SMAP) satellite, is established, and the time-varying modal parameters of the satellite are identified recursively by designing the input and output signals. The results illustrate that this recursive algorithm can obtain the modal parameters at high signal-to-noise ratio (SNR) and that it has better computational efficiency than the SVD method. Moreover, to improve the identification precision of this recursive algorithm at low SNR, wavelet de-noising is used to decrease the effect of noise.
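
The PAST recursion itself (Yang's classical form, which may differ in detail from the variant used in the paper) can be sketched as:

```python
import numpy as np

def past_update(W, P, x, beta=0.98):
    """One step of Projection Approximation Subspace Tracking (PAST).
    W (n x r) tracks a basis of the signal subspace; beta is a
    forgetting factor for time-varying subspaces."""
    y = W.T @ x
    h = P @ y
    g = h / (beta + y @ h)
    P = (P - np.outer(g, h)) / beta
    e = x - W @ y                 # projection residual
    W = W + np.outer(e, g)        # rank-one subspace update, no SVD needed
    return W, P

rng = np.random.default_rng(3)
n, r = 8, 2
# Signals confined to a fixed 2-D subspace plus small noise
basis, _ = np.linalg.qr(rng.normal(size=(n, r)))
W = rng.normal(size=(n, r))
P = np.eye(r)
for _ in range(3000):
    x = basis @ rng.normal(size=r) + 0.01 * rng.normal(size=n)
    W, P = past_update(W, P, x)
# W now approximately spans the true signal subspace
```

Because every step is a rank-one update, the per-sample cost is O(nr), versus the much costlier SVD the abstract mentions.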

  17. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
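
The reduction can be sketched directly: a Kalman filter with identity state transition and zero process noise applied to y_t = h_t^T x + v_t reproduces the RLS recursion (illustrative data):

```python
import numpy as np

def kalman_constant_state(H, y, r=1.0, p0=1e6):
    """Kalman filter for a constant state x (F = I, Q = 0) observed as
    y_t = h_t^T x + v_t; the update reduces exactly to RLS."""
    n, p = H.shape
    x = np.zeros(p)
    P = p0 * np.eye(p)               # diffuse prior on the constant state
    for t in range(n):
        h = H[t]
        k = P @ h / (h @ P @ h + r)  # Kalman gain = RLS gain
        x = x + k * (y[t] - h @ x)   # innovation update
        P = P - np.outer(k, h @ P)   # covariance update
    return x

rng = np.random.default_rng(4)
H = rng.normal(size=(300, 2))
beta = np.array([3.0, -1.5])
y = H @ beta + rng.normal(scale=0.1, size=300)
est = kalman_constant_state(H, y, r=0.01)   # recovers beta
```

With F = I and Q = 0 the prediction step is trivial, so only the measurement update survives, and it is term-for-term the RLS update.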

  18. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2010-01-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. Torque control parameters (KP: proportional gain, KD: derivative gain) and pole placements of the postural control system are estimated over time from inclination angle variation using the fixed trace method, a recursive least square method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times over 10 s stationary intervals with their neck, hip, and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing postures: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.

  19. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    NASA Astrophysics Data System (ADS)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2009-12-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. Torque control parameters (KP: proportional gain, KD: derivative gain) and pole placements of the postural control system are estimated over time from inclination angle variation using the fixed trace method, a recursive least square method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times over 10 s stationary intervals with their neck, hip, and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing postures: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.

  20. Recursive bias estimation for high dimensional regression smoothers

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

    In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of this bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L{sub 2}-Boosting. For the multivariate thin plate spline smoother, we prove that our procedure adapts to the correct and unknown order of smoothness for estimating an unknown function m belonging to the Sobolev space H({nu}), where {nu} > d/2. We apply our method to simulated and real data and show that it compares favorably with existing procedures.

  1. Recursive estimation techniques for detection of small objects in infrared image data

    NASA Astrophysics Data System (ADS)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
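
The combination of a weighted autocorrelation update and a matched filter for colored noise can be sketched as follows (the background noise model and target signature are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
p = 16                              # window length
s = np.zeros(p); s[p // 2] = 1.0    # assumed point-target signature
lam = 0.95                          # weighting factor -> finite memory

R = np.eye(p)                       # running noise autocorrelation estimate
for _ in range(500):                # background-only training windows
    x = np.convolve(rng.normal(size=p + 4), [1.0, 0.7, 0.3], "valid")[:p]
    R = lam * R + (1 - lam) * np.outer(x, x)   # weighted update

w = np.linalg.solve(R, s)           # matched filter for colored noise
x_bg = np.convolve(rng.normal(size=p + 4), [1.0, 0.7, 0.3], "valid")[:p]
stat_bg = w @ x_bg                  # detection statistic, background only
stat_tg = w @ (x_bg + 3 * s)        # background plus injected target
```

The weighting factor `lam` is what lets the statistic track nonstationary backgrounds: old windows decay geometrically in the estimate of R.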

  2. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.

  3. Parameter adaptive estimation of random processes

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Vanlandingham, H. F.

    1975-01-01

    This paper is concerned with the parameter adaptive least squares estimation of random processes. The main result is a general representation theorem for the conditional expectation of a random variable on a product probability space. Using this theorem along with the general likelihood ratio expression, the least squares estimate of the process is found in terms of the parameter conditioned estimates. The stochastic differential for the a posteriori probability and the stochastic differential equation for the a posteriori density are found by using simple stochastic calculus on the representations obtained. The results are specialized to the case when the parameter has a discrete distribution. The results can be used to construct an implementable recursive estimator for certain types of nonlinear filtering problems. This is illustrated by some simple examples.

  4. Parameter Estimation with Ignorance

    NASA Astrophysics Data System (ADS)

    Du, H.; Smith, L. A.

    2012-04-01

    Parameter estimation in nonlinear models is a common task, and one for which there is no general solution at present. In the case of linear models, the distribution of forecast errors provides a reliable guide to parameter estimation, but in nonlinear models the facts that (1) predictability may vary with location in state space, and that (2) the distribution of forecast errors is expected not to be Normal, suggest that parameter estimates based on least squares methods may be systematically biased. Parameter estimation for nonlinear systems based on variations in the accuracy of probability forecasts is considered. Empirical results for several chaotic systems (the Logistic Map, the Henon Map and the 12-D Lorenz96 flow) are presented at various noise levels and sampling rates. Selecting parameter values by minimizing Ignorance, a proper local skill score for continuous probability forecasts, as a function of the parameter values is easier to implement in practice than alternative nonlinear methods based on the geometry of attractors, the ability of the model to shadow the observations, or model synchronization. As expected, it is more effective when the forecast error distributions are non-Gaussian. The goal of parameter estimation is not defined uniquely when the model class is imperfect. In short, the desired parameter values can be expected to be a function of the application for which they are determined. Parameter estimation in this imperfect model scenario is also discussed. Initial experiments suggest that our approach is also useful for identifying the "best" parameters in an imperfect model, as long as the notion of "best" is well defined. The information deficit, defined as the difference between the Empirical Ignorance and the Implied Ignorance, can be used to identify remaining forecast system inadequacy, in both perfect and imperfect model scenarios.

  5. 2-D impulse noise suppression by recursive gaussian maximum likelihood estimation.

    PubMed

    Chen, Yang; Yang, Jian; Shu, Huazhong; Shi, Luyao; Wu, Jiasong; Luo, Limin; Coatrieux, Jean-Louis; Toumoulin, Christine

    2014-01-01

    An effective approach termed Recursive Gaussian Maximum Likelihood Estimation (RGMLE) is developed in this paper to suppress 2-D impulse noise. Two algorithms, RGMLE-C and RGMLE-CS, are derived using spatially-adaptive variances, which are estimated based on certainty and on joint certainty and similarity information, respectively. To ensure reliable implementation of the RGMLE-C and RGMLE-CS algorithms, a novel recursion stopping strategy is proposed that evaluates the estimation error of uncorrupted pixels. Numerical experiments at different noise densities show that the two proposed algorithms yield significantly better results than some typical median-type filters. Efficient implementation is also realized via GPU (Graphics Processing Unit)-based parallelization techniques.

  6. Recursive Estimation of the Stein Center of SPD Matrices & its Applications*

    PubMed Central

    Salehian, Hesamoddin; Cheng, Guang; Ho, Jeffrey

    2014-01-01

    Symmetric positive-definite (SPD) matrices are ubiquitous in Computer Vision, Machine Learning and Medical Image Analysis. Finding the center/average of a population of such matrices is a common theme in many algorithms such as clustering, segmentation, principal geodesic analysis, etc. The center of a population of such matrices can be defined using a variety of distance/divergence measures as the minimizer of the sum of squared distances/divergences from the unknown center to the members of the population. It is well known that the computation of the Karcher mean for the space of SPD matrices, which is a negatively-curved Riemannian manifold, is computationally expensive. Recently, the LogDet divergence-based center was shown to be a computationally attractive alternative. However, the LogDet-based mean of more than two matrices cannot be computed in closed form, which makes it computationally less attractive for large populations. In this paper we present a novel recursive estimator for the center based on the Stein distance, which is the square root of the LogDet divergence, that is significantly faster than the batch mode computation of this center. The key theoretical contribution is a closed-form solution for the weighted Stein center of two SPD matrices, which is used in the recursive computation of the Stein center for a population of SPD matrices. Additionally, we show experimental evidence of the convergence of our recursive Stein center estimator to the batch mode Stein center. We present applications of our recursive estimator to K-means clustering and image indexing, showing significant time gains over corresponding algorithms that use batch mode computations. For the latter application, we develop novel hashing functions using the Stein distance and apply them to publicly available data sets, and experimental results have shown favorable comparisons to other competing methods. PMID:25350135

  7. Evaluation of the recursive model approach for estimating particulate matter infiltration efficiencies using continuous light scattering data.

    PubMed

    Allen, Ryan; Wallace, Lance; Larson, Timothy; Sheppard, Lianne; Liu, Lee-Jane Sally

    2007-08-01

    Quantifying particulate matter (PM) infiltration efficiencies (F(inf)) in individual homes is an important part of PM exposure assessment because individuals spend the majority of time indoors. While F(inf) of fine PM has most commonly been estimated using tracer species such as sulfur, here we evaluate an alternative that does not require particle collection, weighing and compositional analysis, and can be applied in situations with indoor sources of sulfur, such as environmental tobacco smoke, gas pilot lights, and humidifier use. This alternative method involves applying a recursive mass balance model (recursive model, RM) to continuous indoor and outdoor concentration measurements (e.g., light scattering data from nephelometers). We show that the RM can reliably estimate F(inf), a crucial parameter for determining exposure to particles of outdoor origin. The RM F(inf) estimates showed good agreement with the conventional filter-based sulfur tracer approach. Our simulation results suggest that the RM F(inf) estimates are minimally impacted by measurement error. In addition, the average light scattering response per unit mass concentration was greater indoors than outdoors; after correcting for differences in light scattering response the median deviation from sulfur F(inf) was reduced from 15 to 11%. Thus, we have verified the RM applied to light scattering data. We show that the RM method is unable to provide satisfactory estimates of the individual components of F(inf) (penetration efficiency, air exchange rate, and deposition rate). However, this approach may allow F(inf) to be estimated in more residences, including those with indoor sources of sulfur. We show that individual homes vary in their infiltration efficiencies, thereby contributing to exposure misclassification in epidemiological studies that assign exposures using ambient monitoring data. 
This variation across homes indicates the need for home-specific estimation methods, such as the RM or sulfur
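
    As a sketch of the recursive-model idea, a discretized single-compartment mass balance can be fit by least squares to recover F(inf). All rate constants below are assumed values and the fitted form is a simplification of the RM used in the paper:

```python
import numpy as np

# Assumed discretized indoor/outdoor mass balance:
#   C_in[t] = alpha * C_in[t-1] + beta * C_out[t],  with F_inf = beta / (1 - alpha)
rng = np.random.default_rng(0)
P, a, k, dt = 0.9, 0.6, 0.3, 1.0 / 60.0   # penetration (-), air exchange (1/h), deposition (1/h), step (h)
alpha_true = 1.0 - (a + k) * dt
beta_true = P * a * dt

# Simulate outdoor and indoor concentration time series
c_out = 20 + 5 * np.sin(np.linspace(0, 8 * np.pi, 5000)) + rng.normal(0, 1, 5000)
c_in = np.zeros_like(c_out)
for t in range(1, len(c_out)):
    c_in[t] = alpha_true * c_in[t - 1] + beta_true * c_out[t]

# Fit the recursive model to the "measured" series by least squares
X = np.column_stack([c_in[:-1], c_out[1:]])
alpha_hat, beta_hat = np.linalg.lstsq(X, c_in[1:], rcond=None)[0]
f_inf = beta_hat / (1.0 - alpha_hat)      # infiltration efficiency estimate
```

    With these assumed rates the true F(inf) is P·a/(a+k) = 0.6, and the regression recovers it without ever identifying the individual components, consistent with the paper's finding that F(inf) is estimable even when its components are not.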

  8. Recursive Focal Plane Wavefront and Bias Estimation for the Direct Imaging of Exoplanets

    NASA Astrophysics Data System (ADS)

    Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Groff, Tyler Dean

    2016-01-01

    To image the reflected light from exoplanets and disks, an instrument must suppress diffracted starlight by about nine orders of magnitude. A coronagraph alters the stellar PSF to create regions of high contrast, but it is extremely sensitive to wavefront aberrations. Deformable mirrors (DMs) are necessary to mitigate these quasi-static aberrations and recover high-contrast. To avoid non-common path aberrations, the science camera must be used as the primary wavefront sensor. Focal plane wavefront correction is an iterative process, and obtaining sufficient signal in the dark holes requires long exposure times. The fastest coronagraphic wavefront correction techniques require estimates of the stellar electric field. The main challenge of coronagraphy is thus to perform complex wavefront estimation quickly and efficiently using intensity images from the camera. The most widely applicable and tested technique is DM Diversity, in which a DM modulates the focal plane intensity and several images are used to reconstruct the stellar electric field in a batch process. At the High Contrast Imaging Lab (HCIL) at Princeton, we have developed an iterative extended Kalman filter (IEKF) to improve upon this technique. The IEKF enables recursive starlight estimation and can utilize fewer images per iteration, thereby speeding up wavefront correction. This IEKF formulation also estimates the bias in the images recursively. Since exoplanets and disks are embedded in the incoherent bias signal, the IEKF enables detection of science targets during wavefront correction. Here we present simulated and experimental results from Princeton's HCIL demonstrating the effectiveness of the IEKF for recursive electric field estimation and exoplanet detection.
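
    The pairwise-probe idea behind such focal-plane estimation can be sketched with a recursive least-squares (Kalman-style) update on a single pixel. Differencing images taken with probes +p and -p cancels the incoherent bias and leaves a measurement linear in the field; the field value, probe strengths, and noise level below are all assumed, and the paper's IEKF additionally estimates the bias itself:

```python
import numpy as np

rng = np.random.default_rng(2)
E_true = np.array([0.5, 0.3])   # real/imag parts of the stellar field at one pixel (assumed)
x_hat = np.zeros(2)             # recursive estimate of the field
P = 10.0 * np.eye(2)            # estimate covariance
R = 1e-4                        # probe-difference measurement noise variance

for _ in range(50):
    p = rng.uniform(-1.0, 1.0, 2)                      # DM probe field (real/imag)
    # y(+p) - y(-p) = 4 p . E : the incoherent bias cancels in the difference
    y = 4.0 * p @ E_true + rng.normal(0.0, np.sqrt(R))
    H = 4.0 * p
    S = H @ P @ H + R
    K = P @ H / S
    x_hat = x_hat + K * (y - H @ x_hat)                # recursive update, one image pair at a time
    P = P - np.outer(K, H @ P)
```

    Because the estimate and covariance carry over between iterations, fewer new images are needed per correction step than in a batch reconstruction, which is the speed-up the abstract describes.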

  9. Phenological Parameters Estimation Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE

  10. Recursive state estimation for discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Ding, Derui; Shen, Yuxuan; Song, Yan; Wang, Yongxiong

    2016-07-01

    This paper is concerned with the state estimation problem for a class of discrete time-varying stochastic nonlinear systems with randomly occurring deception attacks. The stochastic nonlinearity, described by statistical means and covering several classes of well-studied nonlinearities as special cases, is taken into consideration. The randomly occurring deception attacks are modelled by a set of random variables obeying Bernoulli distributions with given probabilities. The purpose of the addressed state estimation problem is to design an estimator that minimizes an upper bound on the estimation error covariance at each sampling instant. Such an upper bound is minimized by properly designing the estimator gain. The proposed estimation scheme, in the form of two Riccati-like difference equations, is recursive. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed scheme.
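
    For intuition, a scalar caricature of such a Riccati-like recursion can be iterated to a fixed point, with the Bernoulli attack probability inflating the effective measurement noise. This simplification is ours, not the paper's exact pair of difference equations:

```python
def riccati_like_step(P, a, q, r, p_attack, s_attack):
    """One scalar step of a Riccati-like covariance-bound recursion in which a
    Bernoulli deception attack (probability p_attack, variance s_attack)
    inflates the effective measurement noise. Illustrative simplification."""
    r_eff = r + p_attack * s_attack
    # gain chosen to minimize the bound gives P+ = a^2 P r_eff / (P + r_eff) + q
    return a * a * P * r_eff / (P + r_eff) + q

P = 1.0
for _ in range(500):
    P = riccati_like_step(P, a=0.95, q=0.1, r=0.2, p_attack=0.1, s_attack=1.0)
```

    The recursion converges to a steady covariance bound, and a nonzero attack probability yields a strictly larger bound than the attack-free case, matching the qualitative message of the paper.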

  11. A recursive regularization algorithm for estimating the particle size distribution from multiangle dynamic light scattering measurements

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yang, Kecheng; Li, Wei; Wang, Wanyan; Guo, Wenping; Xia, Min

    2016-07-01

    Conventional regularization methods have been widely used for estimating the particle size distribution (PSD) in single-angle dynamic light scattering, but they cannot be used directly in multiangle dynamic light scattering (MDLS) measurements for lack of accurate angular weighting coefficients, which greatly affect the PSD determination; moreover, none of these regularization methods performs well for both unimodal and multimodal distributions. In this paper, we propose a recursive regularization method, the Recursion Nonnegative Tikhonov-Phillips-Twomey (RNNT-PT) algorithm, for estimating the weighting coefficients and PSD from MDLS data. This is a self-adaptive algorithm that distinguishes the characteristics of PSDs and chooses the optimal inversion method from the Nonnegative Tikhonov (NNT) and Nonnegative Phillips-Twomey (NNPT) regularization algorithms efficiently and automatically. In simulations, the proposed algorithm estimated the PSDs more accurately than the classical regularization methods, performed stably against random noise, and was adaptable to both unimodal and multimodal distributions. Furthermore, we found that a six-angle analysis in the 30-130° range is an optimal angle set for both unimodal and multimodal PSDs.
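
    A minimal sketch of the NNT building block, posed as an augmented nonnegative least-squares problem; the matrix, data, and regularization weight below are illustrative, not an MDLS scattering kernel:

```python
import numpy as np
from scipy.optimize import nnls

def nonneg_tikhonov(A, b, lam):
    """Nonnegative Tikhonov: argmin ||Ax - b||^2 + lam ||x||^2 s.t. x >= 0,
    solved as one augmented NNLS problem."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x

# Illustrative well-conditioned smoothing kernel and a unimodal "PSD"
n = 20
A = np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)
x_true = np.exp(-0.5 * ((np.arange(n) - 10) / 2.5) ** 2)
b = A @ x_true
x_hat = nonneg_tikhonov(A, b, lam=1e-8)
```

    A recursive scheme in the spirit of the paper would alternate such inversions with updates of the angular weighting coefficients; only the single inversion step is shown here.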

  12. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights

    PubMed Central

    Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal

    2015-01-01

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult to predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
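
    The basic scale-up arithmetic can be illustrated on synthetic data; the population size, network sizes, and the two estimator variants below are assumed for illustration and do not reproduce the paper's weighting or trimming steps:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                               # total population size (assumed known)
true_hidden = 2_000                       # hidden-population size to recover
d = rng.integers(100, 1000, size=500)     # respondents' personal network sizes
y = rng.binomial(d, true_hidden / N)      # hidden-population alters each respondent reports

est_ratio_of_sums = N * y.sum() / d.sum() # classical NSUM estimator
est_mean_of_ratios = N * np.mean(y / d)   # per-respondent-ratio alternative
```

    Both point estimates recover the hidden-population size on clean synthetic data; the paper's contribution concerns how they behave once sampling weights and poorly performing scaling variables enter, which this sketch deliberately omits.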

  13. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed- ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
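
    The train-then-emulate pattern can be sketched with a toy one-parameter function; the Chebyshev fit below merely stands in for the thesis's machine-learning regressor, and the "expensive" function is hypothetical:

```python
import numpy as np

def expensive_model(theta):
    """Stand-in for a costly computation (e.g., a Boltzmann-code output)."""
    return np.sin(3 * theta) * np.exp(-0.5 * theta ** 2)

# Parallelizable, one-off training step: evaluate the model on a parameter grid
train_theta = np.linspace(-2, 2, 200)
train_vals = expensive_model(train_theta)

# Cheap emulator learned from the training set
emulator = np.polynomial.Chebyshev.fit(train_theta, train_vals, deg=25)
```

    Once trained, evaluating the emulator costs a polynomial evaluation per call, so it can sit inside a Markov-chain parameter-estimation loop where the full computation would be prohibitive.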

  14. Bibliography for aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.; Maine, Richard E.

    1986-01-01

    An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.

  15. Recursive Bayesian filtering framework for lithium-ion cell state estimation

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Gambhire, Priya; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-02-01

    A robust battery management system is critical for safe and reliable electric vehicle operation. One of the most important functions of the battery management system is to accurately estimate the battery state using minimal on-board instrumentation. This paper presents a recursive Bayesian filtering framework for on-board battery state estimation by assimilating measurables such as cell voltage, current and temperature with physics-based reduced order model (ROM) predictions. The paper proposes an improved particle filtering algorithm for implementation of the framework, and compares its performance against the unscented Kalman filter. Functionality of the proposed framework is demonstrated for commercial NCA/C cell state estimation at different operating conditions, including constant current discharge at room and low temperatures, hybrid power pulse characterization (HPPC) and urban driving schedule (UDDS) protocols. In addition to accurate voltage prediction, the electrochemical nature of the ROM enables physical insights into the cell behavior. Advantages of using electrode concentrations over conventional Coulomb counting for accessible capacity estimation are discussed. In addition to the mean state estimation, the framework also provides estimation of the associated confidence bounds, which are used to establish the predictive capability of the proposed framework.
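
    A bootstrap particle filter on a toy Coulomb-counting model illustrates the recursive Bayesian framework; the linear OCV curve, noise levels, and cell capacity are assumed here, and the paper's ROM-based filter is far richer:

```python
import numpy as np

rng = np.random.default_rng(3)
n_p, n_steps, dt, q_cap = 2000, 200, 1.0, 3600.0   # particles, steps, step (s), capacity (C)
i_load, v_noise = 1.0, 0.005                       # 1 A discharge, voltage noise (V)
ocv = lambda soc: 3.0 + 1.2 * soc                  # toy linear OCV curve (assumption)

soc_true = 0.9
particles = rng.uniform(0.2, 1.0, n_p)             # deliberately vague prior on SOC
for _ in range(n_steps):
    soc_true -= i_load * dt / q_cap
    v_meas = ocv(soc_true) + rng.normal(0, v_noise)
    # propagate each particle through the Coulomb-counting state equation
    particles -= i_load * dt / q_cap + rng.normal(0, 1e-3, n_p)
    # weight by the voltage likelihood, then resample
    w = np.exp(-0.5 * ((v_meas - ocv(particles)) / v_noise) ** 2)
    particles = particles[rng.choice(n_p, n_p, p=w / w.sum())]
soc_est = particles.mean()
```

    The particle spread at each step plays the role of the confidence bounds the abstract mentions: the posterior standard deviation is available for free alongside the mean estimate.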

  16. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
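
    The recursive Fourier transform at the heart of the method updates each tracked frequency with one complex multiply-add per sample, and agrees with the batch transform evaluated at the same frequencies:

```python
import numpy as np

def recursive_dft(x, freqs, dt):
    """Running Fourier transform for real-time equation-error analysis:
    X_k(w) = X_{k-1}(w) + x_k * exp(-j w t_k) * dt, one update per sample."""
    X = np.zeros(len(freqs), dtype=complex)
    for k, xk in enumerate(x):
        X += xk * np.exp(-1j * freqs * k * dt) * dt
    return X

dt = 0.01
t = np.arange(0, 2, dt)
x = np.sin(2 * np.pi * 3 * t)                      # toy "measured" signal at 3 Hz
freqs = 2 * np.pi * np.array([1.0, 3.0, 5.0])      # tracked analysis frequencies
X_rec = recursive_dft(x, freqs, dt)
# batch evaluation at the same frequencies, for comparison
X_batch = np.array([(x * np.exp(-1j * w * t)).sum() * dt for w in freqs])
```

    Because only the tracked frequencies are updated, the per-sample cost is tiny, which is why the method's computational requirements are low enough for onboard use.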

  17. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1999-01-01

    A method for real-time estimation of parameters in a linear dynamic state space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight for indirect adaptive or reconfigurable control. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle (HARV) were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than 1 cycle of the dominant dynamic mode natural frequencies, using control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements, and could be implemented aboard an aircraft in real time.

  18. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
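
    The differencing idea can be sketched on a toy parametric optimum; the objective below is hypothetical, and the RQP machinery is replaced by a generic scalar minimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def x_star(p):
    """Optimum of a hypothetical parametric problem f(x; p) = (x - p)^2 + x^2."""
    return minimize_scalar(lambda x: (x - p) ** 2 + x ** 2).x

# central-difference estimate of the parameter sensitivity dx*/dp
p0, h = 1.0, 1e-3
sens = (x_star(p0 + h) - x_star(p0 - h)) / (2 * h)
# analytically x*(p) = p / 2, so dx*/dp = 0.5
```

    In the paper's setting the re-solves at p0 ± h are warm-started from the RQP iterates, which is where the savings in function evaluations come from.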

  19. On the structural limitations of recursive digital filters for base flow estimation

    NASA Astrophysics Data System (ADS)

    Su, Chun-Hsu; Costelloe, Justin F.; Peterson, Tim J.; Western, Andrew W.

    2016-06-01

    Recursive digital filters (RDFs) are widely used for estimating base flow from streamflow hydrographs, and various forms of RDFs have been developed based on different physical models. Numerical experiments have been used to objectively evaluate their performance, but they have not been sufficiently comprehensive to assess a wide range of RDFs. This paper extends these studies to understand the limitations of a generalized RDF method as a pathway for future field calibration. Two formalisms are presented to generalize most existing RDFs, allowing systematic tuning of their complexity. The RDFs with variable complexity are evaluated collectively in a synthetic setting, using modeled daily base flow produced by Li et al. (2014) from a range of synthetic catchments simulated with HydroGeoSphere. Our evaluation reveals that there are optimal RDF complexities in reproducing base flow simulations but shows that there is an inherent physical inconsistency within the RDF construction. Even under the idealized setting where true base flow data are available to calibrate the RDFs, there is persistent disagreement between true and estimated base flow over catchments with small base flow components, low saturated hydraulic conductivity of the soil and larger surface runoff. The simplest explanation is that low base flow "signal" in the streamflow data is hard to distinguish, although more complex RDFs can improve upon the simpler Eckhardt filter at these catchments.
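
    As a concrete member of the RDF family, the two-parameter Eckhardt filter can be run on a synthetic hydrograph; the filter parameters below are conventional illustrative choices, not calibrated values:

```python
import numpy as np

def eckhardt_filter(q, alpha=0.98, bfi_max=0.8):
    """Eckhardt two-parameter recursive digital filter, one common member of
    the RDF family discussed above (parameter values are illustrative)."""
    b = np.empty_like(q)
    b[0] = bfi_max * q[0]
    for k in range(1, len(q)):
        b[k] = ((1 - bfi_max) * alpha * b[k - 1]
                + (1 - alpha) * bfi_max * q[k]) / (1 - alpha * bfi_max)
        b[k] = min(b[k], q[k])   # base flow cannot exceed streamflow
    return b

# Synthetic hydrograph: seasonal base flow plus three storm events
t = np.arange(365, dtype=float)
q = 5.0 + 2.0 * np.sin(2 * np.pi * t / 365)
for peak in (60, 180, 300):
    q[peak:peak + 30] += 40.0 * np.exp(-np.arange(30) / 5.0)
b = eckhardt_filter(q)
```

    The filter damps the storm peaks while tracking the slow component, but as the paper stresses, nothing in this recursion ties alpha or bfi_max to catchment physics, which is the structural limitation under study.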

  20. Parameter estimation in food science.

    PubMed

    Dolan, Kirk D; Mishra, Dharmendra K

    2013-01-01

    Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem, computing y(t) given known parameters, has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem, estimation of parameters given measured y(t), is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
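
    Scaled sensitivity coefficients are straightforward to compute by central differences; the log-linear inactivation model and parameter values below are a standard textbook example, not drawn from the article:

```python
import numpy as np

def log_survivors(t, logN0, D):
    """Log-linear microbial inactivation: log10 N(t) = log10 N0 - t/D."""
    return logN0 - t / D

def scaled_sensitivity(f, t, params, name, h=1e-6):
    """Scaled sensitivity X_p = p * df/dp by central differences. Large,
    linearly independent X_p curves indicate identifiable parameters."""
    p = params[name]
    hi = dict(params, **{name: p * (1 + h)})
    lo = dict(params, **{name: p * (1 - h)})
    return p * (f(t, **hi) - f(t, **lo)) / (2 * p * h)

t = np.linspace(0.0, 10.0, 50)
params = {"logN0": 6.0, "D": 2.5}
X_N0 = scaled_sensitivity(log_survivors, t, params, "logN0")  # = logN0, constant in t
X_D = scaled_sensitivity(log_survivors, t, params, "D")       # = t/D, grows with t
```

    Because X_N0 is flat while X_D grows with time, the two parameters are separately identifiable from a sufficiently long experiment, which is the kind of design insight the article advocates extracting before fitting.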

  1. User's Guide for the Precision Recursive Estimator for Ephemeris Refinement (PREFER)

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.

    1982-01-01

    PREFER is a recursive orbit determination program which is used to refine the ephemerides produced by a batch least squares program (e.g., GTDS). It is intended to be used primarily with GTDS and, thus, is compatible with some of the GTDS input/output files.

  2. Data requirements for using combined conductivity mass balance and recursive digital filter method to estimate groundwater recharge in a small watershed, New Brunswick, Canada

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Xing, Zisheng; Danielescu, Serban; Li, Sheng; Jiang, Yefang; Meng, Fan-Rui

    2014-04-01

    Estimation of baseflow and groundwater recharge rates is important for hydrological analysis and modelling. A new approach that combines a recursive digital filter (RDF) model with the conductivity mass balance (CMB) method is considered reliable for baseflow separation because the combined method takes advantage of the reduced data requirement of the RDF method and the reliability of the CMB method. However, it is not clear what the minimum data requirements are for producing acceptable estimates of the RDF model parameters. In this study, a 19-year record of stream discharge and water conductivity collected from the Black Brook Watershed (BBW), NB, Canada was used to test the combined baseflow separation method and assess the variability of the model parameters over seasons. The data requirements and potential bias in the estimated baseflow index (BFI) were evaluated using conductivity data for different seasons and/or resampled data segments at various sampling durations. Results indicated that data collected during the ground-frozen season are more suitable for estimating baseflow conductivity (Cbf) and data from the snow-melt period are more suitable for estimating runoff conductivity (Cro). Relative errors of baseflow estimation were inversely proportional to the number of conductivity records. A minimum of six months of discharge and conductivity data is required to obtain reliable parameters for the current method with acceptable errors. We further found that the average annual recharge rate for the BBW was 322 mm over the past twenty years.
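
    The CMB side of the combined method reduces to two-component mixing between base-flow and runoff end members; the flows, conductivities, and end-member values below are illustrative:

```python
import numpy as np

def cmb_baseflow(q, c, c_bf, c_ro):
    """Two-component conductivity mass balance: the base-flow fraction of
    streamflow follows from mixing the base-flow and runoff end members."""
    frac = np.clip((c - c_ro) / (c_bf - c_ro), 0.0, 1.0)
    return frac * q

q = np.array([2.0, 8.0, 20.0, 6.0, 3.0])          # streamflow (m^3/s)
c = np.array([300.0, 180.0, 90.0, 210.0, 280.0])  # stream conductivity (uS/cm)
# End-member conductivities (illustrative; the paper estimates Cbf from the
# ground-frozen season and Cro from the snow-melt period)
b = cmb_baseflow(q, c, c_bf=320.0, c_ro=60.0)
```

    In the combined approach, base flow computed this way on the days with conductivity data is then used to calibrate the RDF parameters, which afterwards need only discharge data.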

  3. Parameter Estimation Using VLA Data

    NASA Astrophysics Data System (ADS)

    Venter, Willem C.

    The main objective of this dissertation is to extract parameters from multiple wavelength images, on a pixel-to-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe planetary nebula NGC 7027 at three different wavelengths, 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time when the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are for example either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. 
This technique consists of two parts: in the first part the original observed data are smoothed using a running window and in the second part a similar smoothing of the estimated parameters

  4. A landscape-based cluster analysis using recursive search instead of a threshold parameter.

    PubMed

    Gladwin, Thomas E; Vink, Matthijs; Mars, Roger B

    2016-01-01

    Cluster-based analysis methods in neuroimaging provide control of whole-brain false positive rates without the need to conservatively correct for the number of voxels and the associated false negative results. The current method defines clusters based purely on shapes in the landscape of activation, instead of requiring the choice of a statistical threshold that may strongly affect results. Statistical significance is determined using permutation testing, combining both size and height of activation. A method is proposed for dealing with relatively small local peaks. Simulations confirm the method controls the false positive rate and correctly identifies regions of activation. The method is also illustrated using real data.
    • A landscape-based method to define clusters in neuroimaging data avoids the need to pre-specify a threshold to define clusters.
    • The implementation of the method works as expected, based on simulated and real data.
    • The recursive method used for defining clusters, the method used for combining clusters, and the definition of the "value" of a cluster may be of interest for future variations. PMID:27489780
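
    The steepest-ascent idea can be sketched in one dimension: each position recursively climbs to the peak of its hill, so clusters emerge from the landscape with no threshold parameter. This is a toy version, not the paper's exact cluster or combination rules:

```python
import numpy as np

def climb(stat, i, memo):
    """Recursively follow the steepest ascent from position i to its local peak."""
    if i in memo:
        return memo[i]
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(stat)]
    best = max(nbrs, key=lambda j: stat[j])
    peak = i if stat[best] <= stat[i] else climb(stat, best, memo)
    memo[i] = peak
    return peak

def landscape_clusters(stat):
    """Assign every position to the peak of its hill; cluster labels are
    peak indices, with no cluster-forming threshold required."""
    memo = {}
    return np.array([climb(stat, i, memo) for i in range(len(stat))])

stat = np.array([0.1, 0.5, 2.0, 0.7, 0.2, 0.4, 3.0, 1.0, 0.3])
labels = landscape_clusters(stat)   # two hills: peaks at indices 2 and 6
```

    Significance of each such cluster would then be assessed by permutation testing on a statistic combining its size and height, as the abstract describes.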

  6. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    Large Power transformers, an aging and vulnerable part of our energy infrastructure, are at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead time of 12 months. Transient overvoltages can cause great damage and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be one of the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Thus, transformer modeling is not a mature field and newer improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where available information is incomplete. The transformer nameplate data is required and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, lambda-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss. 
Steady-state excitation, and de-energization and re-energization transients

  7. An Empirical Comparison between Two Recursive Filters for Attitude and Rate Estimation of Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    2006-01-01

    The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.

  8. Spatial join optimization among WFSs based on recursive partitioning and filtering rate estimation

    NASA Astrophysics Data System (ADS)

    Lan, Guiwen; Wu, Congcong; Shi, Guangyi; Chen, Qi; Yang, Zhao

    2015-12-01

    Spatial join among Web Feature Services (WFS) is time-consuming because most non-candidate spatial objects may be encoded in GML and transferred to the client side. In this paper, an optimization strategy is proposed to enhance the performance of these joins by filtering out as many non-candidate spatial objects as possible. Through recursive partitioning, the data skew of sub-areas is exploited to reduce data transmission using a spatial semi-join. Moreover, the filtering rate is used to determine whether a spatial semi-join for a sub-area is profitable and to choose a suitable execution plan for it. The experimental results show that the proposed strategy is feasible under most circumstances.

  9. Expectation-Maximization Algorithm Based System Identification of Multiscale Stochastic Models for Scale Recursive Estimation of Precipitation: Application to Model Validation and Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Gupta, R.; Venugopal, V.; Foufoula-Georgiou, E.

    2003-12-01

    Owing to the tremendous scale-dependent variability of precipitation and discrepancies in scale or resolution among different types/sources of observations, comparing or merging observations at different scales, or validating Quantitative Precipitation Forecasts (QPF) with observations, is not trivial. Traditional methods of QPF (e.g., point to area) have been found deficient, and to alleviate some of the concerns, a new methodology called scale-recursive estimation (SRE) was introduced recently. This method, which has its roots in Kalman filtering, can (i) handle disparate (in scale) measurement sources; (ii) account for observational uncertainty associated with each sensor; and (iii) incorporate a multiscale model (theoretical or empirical) which captures the observed scale-to-scale variability in precipitation. The result is an optimal (unbiased and minimum error variance) estimate at any desired scale along with its error statistics. Our preliminary studies have indicated that lognormal and bounded lognormal multiplicative cascades are the most successful candidates as state-propagation models for precipitation across a range of scales. However, the parameters of these models were found to be highly sensitive to the observed intermittency of precipitation fields. To address this problem, we have chosen to take a "system identification" approach instead of prescribing a priori the type of multiscale model. The first part of this work focuses on the use of Maximum Likelihood (ML) identification for estimating the parameters of a multiscale stochastic state space model directly from the given data. The Expectation-Maximization (EM) algorithm is used to iteratively solve for ML estimates. The "expectation" step makes use of a Kalman smoother to estimate the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. Using high resolution forecast precipitation fields from ARPS (Advanced Regional Prediction System), concurrent

  10. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm that calculates N on the fly; a similar technique can be used for the IKF as well.
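    The idea of spending N recursions on a single measurement can be sketched as follows. This is a minimal scalar illustration in which each of the N sub-updates is damped by inflating the measurement noise by N and the measurement model is re-linearized each time; it is not the exact RUF gain schedule from the paper, and the measurement model y = x² is a made-up example.

```python
import numpy as np

def recursive_update(x, P, y, h, H_jac, R, N=10):
    """One scalar measurement update split into N damped sub-updates,
    re-linearizing h() at each recursion. Inflating R by N damps each
    step so that N sub-updates roughly amount to one full update.
    (Illustrative sketch only, not the exact RUF gain schedule.)"""
    for _ in range(N):
        H = H_jac(x)
        S = H * P * H + N * R          # innovation variance, damped
        K = P * H / S                  # sub-update gain
        x = x + K * (y - h(x))         # state update with re-linearized h
        P = (1.0 - K * H) * P          # covariance update
    return x, P

# made-up nonlinear measurement: y = x^2 (true x = 2, noiseless y = 4)
h = lambda x: x * x
H_jac = lambda x: 2.0 * x
x_est, P_est = recursive_update(x=1.5, P=1.0, y=4.0, h=h, H_jac=H_jac,
                                R=0.01, N=20)
```

Because each sub-update re-linearizes around the latest estimate, the final state lands much closer to the nonlinear least-squares solution than a single EKF update from the same prior would.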

  11. Recursion Mathematics.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1989-01-01

    Discusses the use of the recursive method for generating permutations of n objects and for counting the ways of making c cents in change using pennies and nickels when order is important. Presents a LOGO program for the examples. (YP)
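    The change-counting example recurses naturally: when order matters, the number of ordered ways f(c) to make c cents from pennies and nickels satisfies f(c) = f(c-1) + f(c-5) with f(0) = 1. The cited program is in LOGO; the following is a Python sketch of the same recursion.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(cents):
    """Ordered ways to make `cents` using pennies (1) and nickels (5).
    Order matters: 1+5 and 5+1 count separately (compositions)."""
    if cents == 0:
        return 1          # one way to make nothing: use no coins
    total = 0
    for coin in (1, 5):
        if cents >= coin:
            total += ways(cents - coin)   # first coin chosen, recurse on rest
    return total

print(ways(6))   # 1+1+1+1+1+1, 1+5, 5+1  ->  3
```

The memoization (`lru_cache`) turns the exponential recursion into a linear-time computation, which matters once c grows.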

  12. Recursive estimation methods for tracking of localized perturbations in absorption using diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Hamdi, Amine; Miller, Eric L.; Boas, David; Franceschini, Maria A.; Kilmer, Misha E.

    2005-03-01

    Analysis of the quasi-sinusoidal temporal signals measured by a Diffuse Optical Tomography (DOT) instrument can be used to determine both quantitative and qualitative characteristics of functional brain activities arising from visual and auditory stimulation, motor activities, and the performance of cognitive tasks. Once the activated regions in the brain are resolved using DOT, the temporal resolution of this modality is such that one can track the spatial evolution (both the location and morphology) of these regions with time. In this paper, we explore a state-estimation approach using extended Kalman filters to track the dynamics of functionally activated brain regions. We develop a model to determine the size, shape, location and contrast of an area of activity as a function of time. Under the assumption that previously acquired MRI data have provided us with a segmentation of the brain, we restrict the location of the area of functional activity to the thin cortical sheet. To describe the geometry of the region, we employ a mathematical model in which the projection of the area of activity onto the plane of the sensors is assumed to be describable by a low-dimensional algebraic curve. In this study, we consider in detail the case where the perturbations in optical absorption parameters arising due to activation are confined to independent regions in the cortex layer. We estimate the geometric parameters (axis lengths, rotation angle, center positions) defining the best-fit ellipse for the activation area's projection onto the source-detector plane. At a single point in time, an adjoint field-based nonlinear inversion routine is used to extract the activated area's information. Examples of the utility of the method will be shown using synthetic data.

  13. On recursion

    PubMed Central

    Watumull, Jeffrey; Hauser, Marc D.; Roberts, Ian G.; Hornstein, Norbert

    2014-01-01

    It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language—the faculty of language in the narrow sense (FLN)—is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of such output, whether infinite or finite, embedded or unembedded—existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled—and potentially unbounded—expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive. PMID

  14. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  15. An efficient on-line thermal input estimation method using Kalman filter and recursive least square algorithm

    SciTech Connect

    Tuan, P.C.; Lee, S.C.; Hou, W.T.

    1997-07-01

    The efficient on-line estimation of thermal unknowns using the Kalman filter and recursive least squares with a forgetting-weighting algorithm is presented. The efficiency is dominated by the best choice of the forgetting factor under different scales of covariance of process and measurement noise. In this paper the root mean square error is mainly used as the performance index to discuss the role and effect of the forgetting factor. The performances of the proposed algorithm in the time domain and in the frequency domain of the estimation are also discussed. In summary, a rigorously efficient, robust forgetting-factor zone, which provides excellent tracking time-lag and noise-filtered estimation results, is introduced. This zone is applicable to any type of time-varying thermal unknown function in the Inverse Heat Conduction Problem (IHCP) and suitable for hardware-loop realization under global uncertainties. In addition, the thermal diffusion lag is also discussed and compensated in this paper. The superior results are verified through a one-dimensional IHCP simulation.
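    The recursive least squares recursion with a forgetting factor λ that the abstract builds on can be sketched as follows. This is the general-purpose form, not the paper's specific IHCP formulation, and the drifting-parameter data below are made up for illustration.

```python
import numpy as np

def rls_forgetting(phi_seq, y_seq, n_params, lam=0.95, delta=100.0):
    """Recursive least squares with forgetting factor `lam`:
    minimizes sum_k lam**(t-k) * (y_k - phi_k^T theta)**2.
    P is initialized to delta*I (large delta => weak prior)."""
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, dtype=float)
        K = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + K * (y - phi @ theta)   # estimate update
        P = (P - np.outer(K, phi @ P)) / lam    # covariance update
    return theta

# made-up example: recover a constant parameter vector from noisy data
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
phis = rng.normal(size=(500, 2))
ys = phis @ true_theta + 0.01 * rng.normal(size=500)
est = rls_forgetting(phis, ys, n_params=2, lam=0.98)
```

Smaller λ shortens the effective data window (roughly 1/(1-λ) samples), which is exactly the tracking-versus-noise trade-off the forgetting-factor "zone" in the abstract is about.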

  16. RAINFALL-LOSS PARAMETER ESTIMATION FOR ILLINOIS.

    USGS Publications Warehouse

    Weiss, Linda S.; Ishii, Audrey

    1986-01-01

    The U. S. Geological Survey is currently conducting an investigation to estimate values of parameters for two rainfall-loss computation methods used in a commonly used flood-hydrograph model. Estimates of six rainfall-loss parameters are required: four for the Exponential Loss-Rate method and two for the Initial and Uniform Loss-Rate method. Multiple regression analyses on calibrated data from 616 storms at 98 gaged basins are being used to develop parameter-estimating techniques for these six parameters at ungaged basins in Illinois. Parameter-estimating techniques are being verified using data from a total of 105 storms at 35 uncalibrated gaged basins.

  17. Method for estimating solubility parameter

    NASA Technical Reports Server (NTRS)

    Lawson, D. D.; Ingham, J. D.

    1973-01-01

    Semiempirical correlations have been developed between solubility parameters and refractive indices for series of model hydrocarbon compounds and organic polymers. Measurement of intermolecular forces is useful for assessment of material compatibility, glass-transition temperature, and transport properties.

  18. Valid lower bound for all estimators in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2016-09-01

    The widely used quantum Cramér–Rao bound (QCRB) sets a lower bound for the mean square error of unbiased estimators in quantum parameter estimation; however, in general the QCRB is tight only in the asymptotic limit. With a limited number of measurements, biased estimators can have far better performance, which the QCRB cannot characterize. Here we introduce a valid lower bound for all estimators, either biased or unbiased, which can serve as a standard of merit for all quantum parameter estimations.
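    For reference, the standard bound the abstract starts from can be written as follows, for an unbiased estimator from n independent repetitions, with F_Q the quantum Fisher information defined through the symmetric logarithmic derivative L_θ:

```latex
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\,F_Q(\theta)},
\qquad
F_Q(\theta) = \operatorname{Tr}\!\bigl(\rho_\theta L_\theta^{2}\bigr),
\qquad
\partial_\theta \rho_\theta = \tfrac{1}{2}\bigl(L_\theta \rho_\theta + \rho_\theta L_\theta\bigr).
```

For biased estimators the Cramér–Rao form acquires derivative-of-bias terms, which is why the plain QCRB cannot calibrate their performance and a bound valid for all estimators is needed.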

  19. Parameter estimation by genetic algorithms

    SciTech Connect

    Reese, G.M.

    1993-11-01

    Test/Analysis correlation, or structural identification, is a process of reconciling differences in the structural dynamic models constructed analytically (using the finite element (FE) method) and experimentally (from modal test). This is a methodology for assessing the reliability of the computational model, and is very important in building models of high integrity, which may be used as predictive tools in design. Both the analytic and experimental models evaluate the same quantities: the natural frequencies (eigenvalues, ω_i) and the mode shapes (eigenvectors, φ). In this paper, selected frequencies are reconciled in the two models by modifying physical parameters in the FE model. A variety of parameters may be modified, such as the stiffness of a joint member or the thickness of a plate. Engineering judgement is required to identify important frequencies, and to characterize the uncertainty of the model design parameters.
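    A genetic algorithm of the kind used for such parameter reconciliation can be sketched as below. This is a toy real-coded GA (tournament selection, arithmetic crossover, Gaussian mutation, elitism); the quadratic objective stands in for the frequency-mismatch cost and every constant is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def ga_minimize(f, lo, hi, pop=40, gens=100, mut=0.05, elite=2):
    """Toy real-coded genetic algorithm minimizing f over box [lo, hi]."""
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        X = X[fit.argsort()]                   # sort population, best first
        new = [X[i].copy() for i in range(elite)]   # elitism
        while len(new) < pop:
            # tournament selection: lower index = fitter individual
            i, j = rng.integers(0, pop, 2), rng.integers(0, pop, 2)
            p1, p2 = X[min(i)], X[min(j)]
            alpha = rng.random(dim)            # arithmetic crossover
            child = alpha * p1 + (1.0 - alpha) * p2
            child += mut * rng.normal(size=dim)    # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        X = np.array(new)
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

# illustrative stand-in for a frequency-mismatch cost over two FE parameters
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, best_f = ga_minimize(f, lo=np.array([-5.0, -5.0]),
                           hi=np.array([5.0, 5.0]))
```

In the structural identification setting, `f` would run the FE model with candidate parameters and return a weighted mismatch between computed and measured frequencies.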

  20. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. A library of FORTRAN subroutines was developed to facilitate analyses of a variety of estimation problems. An easy-to-use, multi-purpose set of algorithms that is reasonably efficient and uses a minimal amount of computer storage is presented. Subroutine inputs, outputs, usage and listings are given, along with examples of how these routines can be used. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  1. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  2. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles.

    PubMed

    Nam, Kanghyun

    2015-11-11

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data.

  3. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles

    PubMed Central

    Nam, Kanghyun

    2015-01-01

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246

  4. Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles.

    PubMed

    Nam, Kanghyun

    2015-01-01

    This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246

  5. Estimation of ground motion parameters

    USGS Publications Warehouse

    Boore, David M.; Joyner, W.B.; Oliver, A.A.; Page, R.A.

    1978-01-01

    Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. A subset of the data (from the San Fernando earthquake) is used to assess the effects of structural size and of geologic site conditions on peak motions recorded at the base of structures. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. The peak acceleration tends to be less and the peak velocity and displacement tend to be greater on the average at the base of large structures than at the base of small structures. In the distance range used in the regression analysis (15-100 km) the values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Some consideration is given to the prediction of ground motions at close distances where there are insufficient recorded data points. As might be expected from the lack of data, published relations for predicting peak horizontal acceleration give widely divergent estimates at close distances (three well known relations predict accelerations from 0.33 g to slightly over 1 g at a distance of 5 km from a magnitude 6.5 earthquake). After considering the physics of the faulting process, the few available data close to faults, and the modifying effects of surface topography, at the present time it would be difficult to accept estimates less than about 0.8 g, 110 cm/s, and 40 cm, respectively, for the mean values of peak acceleration, velocity, and displacement at rock sites

  6. Estimating random signal parameters from noisy images with nuisance parameters

    PubMed Central

    Whitaker, Meredith Kathryn; Clarkson, Eric; Barrett, Harrison H.

    2008-01-01

    In a pure estimation task, an object of interest is known to be present, and we wish to determine numerical values for parameters that describe the object. This paper compares the theoretical framework, implementation method, and performance of two estimation procedures. We examined the performance of these estimators for tasks such as estimating signal location, signal volume, signal amplitude, or any combination of these parameters. The signal is embedded in a random background to simulate the effect of nuisance parameters. First, we explore the classical Wiener estimator, which operates linearly on the data and minimizes the ensemble mean-squared error. The results of our performance tests indicate that the Wiener estimator can estimate amplitude and shape once a signal has been located, but is fundamentally unable to locate a signal regardless of the quality of the image. Given these new results on the fundamental limitations of Wiener estimation, we extend our methods to include more complex data processing. We introduce and evaluate a scanning-linear estimator that performs impressively for location estimation. The scanning action of the estimator refers to seeking a solution that maximizes a linear metric, thereby requiring a global-extremum search. The linear metric to be optimized can be derived as a special case of maximum a posteriori (MAP) estimation when the likelihood is Gaussian and a slowly varying covariance approximation is made. PMID:18545527

  7. ESTIM: A parameter estimation computer program: Final report

    SciTech Connect

    Hills, R.G.

    1987-08-01

    The computer code, ESTIM, enables subroutine versions of existing simulation codes to be used to estimate model parameters. Nonlinear least squares techniques are used to find the parameter values that result in a best fit between measurements made in the simulation domain and the simulation code's prediction of these measurements. ESTIM utilizes the non-linear least square code DQED (Hanson and Krogh (1982)) to handle the optimization aspects of the estimation problem. In addition to providing weighted least squares estimates, ESTIM provides a propagation of variance analysis. A subroutine version of COYOTE (Gartling (1982)) is provided. The use of ESTIM with COYOTE allows one to estimate the thermal property model parameters that result in the best agreement (in a least squares sense) between internal temperature measurements and COYOTE's predictions of these internal temperature measurements. We demonstrate the use of ESTIM through several example problems which utilize the subroutine version of COYOTE.

  8. Monte Carlo method for adaptively estimating the unknown parameters and the dynamic state of chaotic systems

    NASA Astrophysics Data System (ADS)

    Mariño, Inés P.; Míguez, Joaquín; Meucci, Riccardo

    2009-05-01

    We propose a Monte Carlo methodology for the joint estimation of unobserved dynamic variables and unknown static parameters in chaotic systems. The technique is sequential, i.e., it updates the variable and parameter estimates recursively as new observations become available, and, hence, suitable for online implementation. We demonstrate the validity of the method by way of two examples. In the first one, we tackle the estimation of all the dynamic variables and one unknown parameter of a five-dimensional nonlinear model using a time series of scalar observations experimentally collected from a chaotic CO2 laser. In the second example, we address the estimation of the two dynamic variables and the phase parameter of a numerical model commonly employed to represent the dynamics of optoelectronic feedback loops designed for chaotic communications over fiber-optic links.
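    The sequential joint state and parameter estimation the abstract describes can be sketched with a bootstrap particle filter on a toy chaotic system. The logistic map, the noise levels, and the parameter-jitter ("artificial dynamics") device below are all illustrative choices, not necessarily the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate a chaotic logistic map with unknown parameter a (ground truth)
a_true, T, sig_y = 3.9, 200, 0.02
x = 0.3
ys = []
for _ in range(T):
    x = a_true * x * (1.0 - x)
    ys.append(x + sig_y * rng.normal())    # noisy scalar observations

# bootstrap particle filter over the joint (state, parameter) space
M = 2000
xs = rng.uniform(0.1, 0.9, M)              # state particles
as_ = rng.uniform(3.5, 4.0, M)             # parameter particles
for y in ys:
    as_ = np.clip(as_ + 1e-3 * rng.normal(size=M), 3.5, 4.0)  # jitter
    xs = as_ * xs * (1.0 - xs)                     # propagate states
    w = np.exp(-0.5 * ((y - xs) / sig_y) ** 2)     # likelihood weights
    w = (w + 1e-300) / (w + 1e-300).sum()
    idx = rng.choice(M, size=M, p=w)               # multinomial resampling
    xs, as_ = xs[idx], as_[idx]

a_hat = as_.mean()   # recursive estimate of the unknown parameter
```

The estimates update recursively as each observation arrives, which is what makes this kind of filter suitable for the online implementation the abstract emphasizes.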

  9. Estimation of ground motion parameters

    USGS Publications Warehouse

    Boore, David M.; Oliver, Adolph A.; Page, Robert A.; Joyner, William B.

    1978-01-01

    Strong motion data from western North America for earthquakes of magnitude greater than 5 are examined to provide the basis for estimating peak acceleration, velocity, displacement, and duration as a function of distance for three magnitude classes. Data from the San Fernando earthquake are examined to assess the effects of associated structures and of geologic site conditions on peak recorded motions. Small but statistically significant differences are observed in peak values of horizontal acceleration, velocity, and displacement recorded on soil at the base of small structures compared with values recorded at the base of large structures. Values of peak horizontal acceleration recorded at soil sites in the San Fernando earthquake are not significantly different from the values recorded at rock sites, but values of peak horizontal velocity and displacement are significantly greater at soil sites than at rock sites. Three recently published relationships for predicting peak horizontal acceleration are compared and discussed. Considerations are reviewed relevant to ground motion predictions at close distances where there are insufficient recorded data points.

  10. Estimation for large non-centrality parameters

    NASA Astrophysics Data System (ADS)

    Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with almost scalar matrices, examining estimability after validation of these models. In validating these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large, we obtain good estimators for λ and α, and thus estimability holds. We are therefore interested in obtaining a lower bound for the non-centrality parameter. In this context we use pivot-inducing variables for statistical inference, see Ferreira et al. 2013, and asymptotic linearity, introduced by Mexia & Oliveira 2011, to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly significant.

  11. Parameter estimation for a dual-rate system with time delay.

    PubMed

    Chen, Lei; Han, Lili; Huang, Biao; Liu, Fei

    2014-09-01

    This paper investigates the parameter estimation problem of a dual-rate system with time delay. The slow-rate model of the dual-rate system with time delay is derived by using the discretization technique. The parameters and states of the system are simultaneously estimated. The states are estimated by using the Kalman filter, and the parameters are estimated based on the stochastic gradient algorithm or the recursive least squares algorithm. For state estimation of the dual-rate system with time delay, a state augmentation method is employed with a lower computational load than that of the conventional approach. Simulation examples and an experimental study are given to illustrate the proposed algorithm.
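    The stochastic gradient recursion mentioned for the parameter estimate can be sketched as follows. This is the generic update on a toy static regression with made-up data, not the paper's dual-rate, time-delay model, and the step size is an illustrative choice.

```python
import numpy as np

def sg_update(theta, phi, y, mu=0.05):
    """One stochastic-gradient parameter step:
    theta <- theta + mu * phi * (y - phi^T theta)."""
    return theta + mu * phi * (y - phi @ theta)

# made-up regression: recover theta_true from streaming noisy samples
rng = np.random.default_rng(3)
theta_true = np.array([0.8, -0.5])
theta_sg = np.zeros(2)
for _ in range(3000):
    phi = rng.normal(size=2)                     # regressor vector
    y = phi @ theta_true + 0.01 * rng.normal()   # noisy measurement
    theta_sg = sg_update(theta_sg, phi, y)
```

Compared with recursive least squares, the stochastic gradient update needs no covariance matrix, trading convergence speed for a much lower per-sample computational load, which is the trade-off the abstract alludes to.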

  12. Parameter estimation of qubit states with unknown phase parameter

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2015-02-01

    We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.

  13. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
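    A minimal PSO of the kind described (global-best topology) can be sketched as below. The shifted quadratic stands in for the likelihood surface, and the swarm constants are conventional illustrative values, not those used for the WMAP analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (global-best topology)."""
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                          # personal-best update
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# illustrative stand-in for a negative log-likelihood over two parameters
f = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.7) ** 2
best, best_f = pso(f, lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```

Because the swarm only needs function values, PSO copes with likelihood surfaces where gradients are unavailable or the surface is not smooth, the situation the abstract motivates.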

  14. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data. PMID:24363476

  15. Rules for selecting the parameters of Oustaloup recursive approximation for the simulation of linear feedback systems containing PI^λ D^μ controller

    NASA Astrophysics Data System (ADS)

    Merrikh-Bayat, Farshad

    2012-04-01

    Oustaloup recursive approximation (ORA) is widely used to find a rational integer-order approximation for fractional-order integrators and differentiators of the form s^v, v ∈ (-1, 1). In this method the lower bound, the upper bound and the order of approximation must be determined beforehand, which is currently done by trial and error and may be inefficient in some cases. The aim of this paper is to provide efficient rules for determining suitable values of these parameters when a fractional-order PID controller is used in a stable linear feedback system. Two numerical examples are also presented to confirm the effectiveness of the proposed formulas.
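For reference, the standard ORA zero/pole formulas (textbook form; the paper's rules for choosing the bounds and order themselves are not reproduced here) can be coded directly:

```python
import numpy as np

def oustaloup_zeros_poles(v, wb, wh, N):
    # textbook ORA frequencies for s^v on the band [wb, wh]
    k = np.arange(1, N + 1)
    wz = wb * (wh / wb) ** ((2 * k - 1 - v) / (2 * N))   # zeros
    wp = wb * (wh / wb) ** ((2 * k - 1 + v) / (2 * N))   # poles
    return wz, wp, wh ** v                               # gain = wh^v

def ora_response(v, wb, wh, N, w):
    wz, wp, gain = oustaloup_zeros_poles(v, wb, wh, N)
    s = 1j * w
    return gain * np.prod((s + wz) / (s + wp))

# approximate s^0.5 over [1e-2, 1e2] rad/s; check at the band centre
H = ora_response(0.5, 1e-2, 1e2, 5, 1.0)
exact = (1j * 1.0) ** 0.5          # magnitude 1, phase pi/4
```

At the geometric centre of the band the N = 5 approximation already matches the exact magnitude and phase of s^0.5 to well under a percent.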

  16. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique is applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
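A minimal 1-D version of the gradient-based idea behind the structure tensor, assuming a simple translating pattern (a toy stand-in for the vertically integrated dust fields):

```python
import numpy as np

# synthetic 1-D "image sequence": a pattern translating at a known speed
x = np.arange(200.0)
speed = 1.5                      # pixels per frame (ground truth)
frames = np.array([np.sin(0.1 * (x - speed * t)) for t in range(10)])

# gradient constraint I_t + v I_x = 0, solved in least squares; this is
# the simplest structure-tensor estimate, which the paper's framework
# generalizes to diffusion constants and decay rates
Ix = np.gradient(frames, axis=1)
It = np.gradient(frames, axis=0)
v_hat = -np.sum(It * Ix) / np.sum(Ix * Ix)
```

The same least-squares contraction of gradient products is what the full 2-D structure tensor performs per pixel neighbourhood.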

  17. Autonomous terrain parameter estimation for wheeled vehicles

    NASA Astrophysics Data System (ADS)

    Ray, Laura E.

    2008-04-01

    This paper reports a methodology for inferring terrain parameters from estimated terrain forces in order to allow wheeled autonomous vehicles to assess mobility in real time. Terrain force estimation can be used to infer the ability to accelerate, climb, or tow a load independent of the underlying terrain model. When a terrain model is available, physical soil properties and stress distribution parameters that relate to mobility are inferred from vehicle-terrain forces using multiple-model estimation. The approach uses Bayesian statistics to select the most likely terrain parameters from a set of hypotheses, given estimated terrain forces. The hypotheses are based on the extensive literature of soil properties for soils with cohesions from 1 to 70 kPa. Terrain parameter estimation is subject to mathematical uniqueness of the net forces resulting from vehicle-terrain interaction for a given set of terrain parameters; these uniqueness properties, which motivate the approach, are characterized in the paper. Terrain force and parameter estimation requires proprioceptive sensors - accelerometers, rate gyros, wheel speeds, motor currents, and ground speed. Simulation results demonstrate the efficacy of the method on three terrains - low cohesion sand, sandy loam, and high cohesion clay - with parameter convergence times as low as 0.02 s. The method exhibits an ability to interpolate between hypotheses when no single hypothesis adequately characterizes the terrain.
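The hypothesis-selection step can be sketched as a sequential Bayes update over a discrete set of terrain hypotheses. The force model and all numbers below are hypothetical stand-ins for the paper's vehicle-terrain mechanics:

```python
import numpy as np

# hypothetical terrain hypotheses: cohesion (kPa) -> predicted net force (N)
hypotheses = {"sand": 5.0, "loam": 25.0, "clay": 60.0}
predicted = {name: 100.0 + 4.0 * c for name, c in hypotheses.items()}

rng = np.random.default_rng(1)
true_force = predicted["loam"]
sigma = 10.0                         # std of the force-estimate noise

posterior = {name: 1.0 / 3.0 for name in hypotheses}   # uniform prior
for _ in range(50):                  # sequential Bayesian update
    z = true_force + rng.normal(0.0, sigma)
    like = {n: np.exp(-0.5 * ((z - predicted[n]) / sigma) ** 2)
            for n in hypotheses}
    tot = sum(posterior[n] * like[n] for n in hypotheses)
    posterior = {n: posterior[n] * like[n] / tot for n in hypotheses}

best = max(posterior, key=posterior.get)
```

With well-separated hypotheses the posterior concentrates within a few measurements, which is consistent with the fast convergence times the abstract reports.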

  18. MODFLOW-style parameters in underdetermined parameter estimation

    USGS Publications Warehouse

    D'Oria, M.; Fienen, M.N.

    2012-01-01

    In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW-2005 and MODFLOW-2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes. © 2011, National Ground Water Association.

  20. MODFLOW-Style parameters in underdetermined parameter estimation.

    PubMed

    D'Oria, Marco; Fienen, Michael N

    2012-01-01

    In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW-2005 and MODFLOW-2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes. PMID:21352210

  1. Reionization history and CMB parameter estimation

    SciTech Connect

    Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y. E-mail: gnedin@fnal.edu

    2013-05-01

    We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.

  2. GEODYN- ORBITAL AND GEODETIC PARAMETER ESTIMATION

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation program, GEODYN, possesses the capability to estimate that set of orbital elements, station positions, measurement biases, and a set of force model parameters such that the orbital tracking data from multiple arcs of multiple satellites best fits the entire set of estimation parameters. The estimation problem can be divided into two parts: the orbit prediction problem, and the parameter estimation problem. GEODYN solves these two problems by employing Cowell's method for integrating the orbit and a Bayesian least squares statistical estimation procedure for parameter estimation. GEODYN has found a wide range of applications including determination of definitive orbits, tracking instrumentation calibration, satellite operational predictions, and geodetic parameter estimation, such as the estimations for global networks of tracking stations. The orbit prediction problem may be briefly described as calculating for some later epoch the new conditions of state for the satellite, given a set of initial conditions of state for some epoch, and the disturbing forces affecting the motion of the satellite. The user is required to supply only the initial conditions of state and GEODYN will provide the forcing function and integrate the equations of motion of the satellite. Additionally, GEODYN performs time and coordinate transformations to ensure the continuity of operations. Cowell's method of numerical integration is used to solve the satellite equations of motion and the variational partials for force model parameters which are to be adjusted. This method uses predictor-corrector formulas for the equations of motion and corrector formulas only for the variational partials. The parameter estimation problem is divided into three separate parts: 1) instrument measurement modeling and partial derivative computation, 2) data error correction, and 3) statistical estimation of the parameters.
Since all of the measurements modeled by
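The orbit prediction step amounts to numerically integrating the equations of motion. Below is a sketch with a classical Runge-Kutta integrator standing in for GEODYN's predictor-corrector formulas (two-body gravity only, normalized units; all constants are illustrative):

```python
import numpy as np

def accel(r, mu=1.0):
    # point-mass gravity, the dominant force in direct (Cowell) integration
    return -mu * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, h):
    # classical 4th-order Runge-Kutta step for (r, v)
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * h * k1v, accel(r + 0.5 * h * k1r)
    k3r, k3v = v + 0.5 * h * k2v, accel(r + 0.5 * h * k2r)
    k4r, k4v = v + h * k3v, accel(r + h * k3r)
    r = r + h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
    v = v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r, v

# circular orbit of radius 1 (mu = 1): the period is 2*pi, so after one
# full period the state should return to its initial value
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h, steps = 2 * np.pi / 1000, 1000
for _ in range(steps):
    r, v = rk4_step(r, v, h)
```

GEODYN additionally integrates variational partials alongside the state, which is what feeds the least squares parameter adjustment.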

  3. Frequency tracking and parameter estimation for robust quantum state estimation

    SciTech Connect

    Ralph, Jason F.; Jacobs, Kurt; Hill, Charles D.

    2011-11-15

    In this paper we consider the problem of tracking the state of a quantum system via a continuous weak measurement. If the system Hamiltonian is known precisely, this merely requires integrating the appropriate stochastic master equation. However, even a small error in the assumed Hamiltonian can render this approach useless. The natural answer to this problem is to include the parameters of the Hamiltonian as part of the estimation problem, and the full Bayesian solution to this task provides a state estimate that is robust against uncertainties. However, this approach requires considerable computational overhead. Here we consider a single qubit in which the Hamiltonian contains a single unknown parameter. We show that classical frequency estimation techniques greatly reduce the computational overhead associated with Bayesian estimation and provide accurate estimates for the qubit frequency.
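A classical frequency estimate of the kind this abstract invokes can be as simple as a periodogram peak on a noisy sinusoid (all values below are illustrative, not the qubit setting):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000.0, 4096                 # sample rate (Hz) and record length
t = np.arange(n) / fs
f_true = 123.0
x = np.sin(2 * np.pi * f_true * t) + 0.5 * rng.normal(size=n)

# cheap classical estimator: peak of the periodogram; a stand-in for the
# full Bayesian tracking of the unknown Hamiltonian parameter
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, 1 / fs)
f_hat = freqs[int(np.argmax(spec))]
```

The estimate is accurate to roughly one FFT bin (fs/n), at a tiny fraction of the cost of integrating a stochastic master equation with the frequency as an extra state.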

  4. Estimation of saxophone reed parameters during playing.

    PubMed

    Muñoz Arancón, Alberto; Gazengel, Bruno; Dalmont, Jean-Pierre; Conan, Ewen

    2016-05-01

    An approach for the estimation of single reed parameters during playing, using an instrumented mouthpiece and an iterative method, is presented. Different physical models describing the reed tip movement are tested in the estimation method. The uncertainties of the sensors installed on the mouthpiece and the limits of the estimation method are studied. A tenor saxophone reed is mounted on this mouthpiece connected to a cylinder, played by a musician, and characterized at different dynamic levels. Results show that the method can be used to estimate the reed parameters with a small error for low and medium sound levels (piano and mezzoforte dynamic levels). The analysis reveals that the complexity of the physical model describing the reed behavior must increase with dynamic levels. For medium level dynamics, the most relevant physical model assumes that the reed is an oscillator with non-linear stiffness and damping, the effect of mass (inertia) being very small. PMID:27250168

  5. LISA Parameter Estimation using Numerical Merger Waveforms

    NASA Technical Reports Server (NTRS)

    Thorpe, J. I.; McWilliams, S.; Baker, J.

    2008-01-01

    Coalescing supermassive black holes are expected to provide the strongest sources of gravitational radiation detected by LISA. Recent advances in numerical relativity provide a detailed description of the waveforms of such signals. We present a preliminary study of LISA's sensitivity to waveform parameters using a hybrid numerical/analytic waveform describing the coalescence of two equal-mass, nonspinning black holes. The Synthetic LISA software package is used to simulate the instrument response, and the Fisher information matrix method is used to estimate errors in the waveform parameters. Initial results indicate that inclusion of the merger signal can significantly improve the precision of some parameter estimates. For example, the median parameter errors for an ensemble of systems with total redshifted mass of 10(exp 6) solar masses at a redshift of approximately 1 were found to decrease by a factor of slightly more than two when the merger was included.
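The Fisher-matrix error estimate can be sketched on a toy two-parameter signal model. The waveform below is an assumed stand-in, not the hybrid merger waveform:

```python
import numpy as np

def waveform(params, t):
    # toy signal model h(t; A, f); the real study uses far richer waveforms
    A, f = params
    return A * np.sin(2 * np.pi * f * t)

def fisher_matrix(params, t, sigma, eps=1e-6):
    # F_ij = sum_t (dh/dp_i)(dh/dp_j) / sigma^2, central finite differences
    grads = []
    for i in range(len(params)):
        hi = list(params); lo = list(params)
        hi[i] += eps; lo[i] -= eps
        grads.append((waveform(hi, t) - waveform(lo, t)) / (2 * eps))
    G = np.array(grads)
    return G @ G.T / sigma ** 2

t = np.linspace(0.0, 1.0, 1000)
F = fisher_matrix([1.0, 10.0], t, sigma=0.1)
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))   # Cramer-Rao-style errors
```

Adding the merger lengthens and strengthens the signal, which enlarges F and hence shrinks the inverse-Fisher error estimates, the effect the abstract quantifies.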

  6. Robot arm geometric link parameter estimation

    NASA Astrophysics Data System (ADS)

    Hayati, S. A.

    A general method for estimating serial link manipulator geometric parameter errors is proposed in this paper. The positioning accuracy of the end-effector may be increased significantly by updating the nominal link parameters in the control software to represent the physical system more accurately. The proposed method is applicable for serial link manipulators with any combination of revolute or prismatic joints, and is not limited to a specific measurement technique.

  7. Regularized estimation of Euler pole parameters

    NASA Astrophysics Data System (ADS)

    Aktuğ, Bahadir; Yildirim, Ömer

    2013-07-01

    Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
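The effect of regularization on nearly collinear parameters can be illustrated with plain ridge (Tikhonov) regression. The paper's estimator is tailored to an analytically derived Euler-vector covariance, which is not reproduced here; the design matrix below is a toy analogue:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
u = rng.normal(size=n)
# two nearly collinear regressors mimic the correlated Euler-vector
# components seen from a geographically small network
X = np.column_stack([u, u + 1e-3 * rng.normal(size=n), rng.normal(size=n)])
w_true = np.array([1.0, 2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]            # unstable here
lam = 0.1                                               # Tikhonov weight
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```

The penalty suppresses the poorly determined (collinear) direction while leaving the data fit essentially unchanged, which is the mean-squared-error trade the abstract reports.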

  8. A novel multistage estimation of signal parameters

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1990-01-01

    A multistage estimation scheme is presented for estimating the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc. Such a situation arises, for example, in the case of the Global Positioning System (GPS). In the proposed scheme, the first-stage estimator operates as a coarse estimator of the frequency and its derivatives, resulting in higher rms estimation errors but with a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency (an event termed cycle slip). The second stage of the estimator operates on the error signal available from the first stage, refining the overall estimates, and in the process also reduces the number of cycle slips. The first-stage algorithm is a modified least-squares algorithm operating on the differential signal model and referred to as differential least squares (DLS). The second-stage algorithm is an extended Kalman filter, which yields the estimate of the phase as well as refining the frequency estimate. A major advantage of the proposed scheme is a reduction in the threshold for the received carrier power-to-noise power spectral density ratio (CNR) as compared with the threshold achievable by either of the algorithms alone.

  9. ZASPE: Zonal Atmospheric Stellar Parameters Estimator

    NASA Astrophysics Data System (ADS)

    Brahm, Rafael; Jordan, Andres; Hartman, Joel; Bakos, Gaspar

    2016-07-01

    ZASPE (Zonal Atmospheric Stellar Parameters Estimator) computes the atmospheric stellar parameters (Teff, log(g), [Fe/H] and vsin(i)) from echelle spectra via least squares minimization with a pre-computed library of synthetic spectra. The minimization is performed only in the spectral zones most sensitive to changes in the atmospheric parameters. The uncertainties and covariances computed by ZASPE assume that the principal source of error is the systematic mismatch between the observed spectrum and the synthetic one that produces the best fit. ZASPE requires a grid of synthetic spectra and can use any pre-computed library with minor modifications.
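The grid-based least squares idea, restricted to sensitive spectral zones, can be sketched with a hypothetical two-parameter toy library (not ZASPE's real synthetic spectra; the "spectrum" model and grids are assumptions):

```python
import numpy as np

wave = np.linspace(0.0, 1.0, 300)

def synth(teff, logg):
    # hypothetical two-parameter "synthetic spectrum": an absorption
    # feature whose position tracks teff and a slope that tracks logg
    return 1.0 - 0.3 * np.exp(-((wave - teff) ** 2) / 0.01) - 0.2 * logg * wave

grid_teff = np.linspace(0.2, 0.8, 31)    # step 0.02
grid_logg = np.linspace(0.0, 1.0, 21)    # step 0.05

observed = synth(0.5, 0.4) + 0.005 * np.random.default_rng(3).normal(size=wave.size)

# restrict the fit to "sensitive zones": pixels varying most across the grid
library = np.array([synth(t, g) for t in grid_teff for g in grid_logg])
mask = library.std(axis=0) > np.median(library.std(axis=0))

chi2 = np.array([[np.sum((observed[mask] - synth(t, g)[mask]) ** 2)
                  for g in grid_logg] for t in grid_teff])
i, j = np.unravel_index(int(chi2.argmin()), chi2.shape)
teff_hat, logg_hat = grid_teff[i], grid_logg[j]
```

Masking out insensitive pixels discards wavelengths that contribute noise but little parameter information, the same rationale ZASPE uses for its zonal fit.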

  10. New approaches to estimation of magnetotelluric parameters

    SciTech Connect

    Egbert, G.D.

    1991-01-01

    Fully efficient robust data processing procedures were developed and tested for single station and remote reference magnetotelluric (MT) data. Substantial progress was made on development, testing and comparison of optimal procedures for single station data. A principal finding of this phase of the research was that the simplest robust procedures can be more heavily biased by noise in the (input) magnetic fields than standard least squares estimates. To deal with this difficulty we developed a robust processing scheme which combines the regression M-estimate with coherence presorting. This hybrid approach greatly improves impedance estimates, particularly in the low signal-to-noise conditions often encountered in the "dead band" (0.1--0.0 Hz). The methods, and the results of comparisons of various single station estimators, are described in detail. Progress was made on developing methods for estimating static distortion parameters, and for testing hypotheses about the underlying dimensionality of the geological section.
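The regression M-estimate at the core of the hybrid scheme can be sketched as iteratively reweighted least squares with Huber weights (constants and data below are illustrative; coherence presorting is omitted):

```python
import numpy as np

def robust_fit(x, y, c=1.345, iters=20):
    """Regression M-estimate via iteratively reweighted least squares.
    Rows are scaled by sqrt(Huber weight) so the weighted lstsq solves
    the M-estimation normal equations."""
    w = np.ones_like(y)
    for _ in range(iters):
        A = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
        r = y - A @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.sqrt(np.where(u <= c, 1.0, c / u))   # Huber weights
    return beta

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 + 3.0 * x + 0.05 * rng.normal(size=x.size)
y[::10] += 5.0                  # heavy-tailed "noise bursts"
beta = robust_fit(x, y)
```

The bursts would bias ordinary least squares noticeably; the Huber weights cap their influence, recovering the intercept and slope to within a few percent.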

  11. Estimating physiological skin parameters from hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Banerjee, Amit; Burlina, Philippe

    2013-05-01

    We describe an approach for estimating human skin parameters, such as melanosome concentration, collagen concentration, oxygen saturation, and blood volume, using hyperspectral radiometric measurements (signatures) obtained from in vivo skin. We use a computational model based on Kubelka-Munk theory and the Fresnel equations. This model forward maps the skin parameters to a corresponding multiband reflectance spectra. Machine-learning-based regression is used to generate the inverse map, and hence estimate skin parameters from hyperspectral signatures. We test our methods using synthetic and in vivo skin signatures obtained in the visible through the short wave infrared domains from 24 patients of both genders and Caucasian, Asian, and African American ethnicities. Performance validation shows promising results: good agreement with the ground truth and well-established physiological precepts. These methods have potential use in the characterization of skin abnormalities and in minimally-invasive prescreening of malignant skin cancers.
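The forward-model-plus-learned-inverse pattern can be sketched with a hypothetical one-parameter forward map and k-nearest-neighbour regression standing in for the paper's Kubelka-Munk model and trained regressor:

```python
import numpy as np

wl = np.linspace(0.0, 1.0, 40)

def forward(melanin):
    # hypothetical smooth forward map from one skin parameter to a
    # reflectance spectrum (a stand-in for Kubelka-Munk + Fresnel)
    return np.exp(-melanin * (1.5 - wl))

rng = np.random.default_rng(5)
train_p = rng.uniform(0.1, 1.0, 500)              # sample parameter space
train_X = np.array([forward(p) for p in train_p]) # forward-map each sample

def estimate(spectrum, k=5):
    # learned inverse map: k-nearest-neighbour regression in spectrum space
    d = np.sum((train_X - spectrum) ** 2, axis=1)
    return float(train_p[np.argsort(d)[:k]].mean())

p_hat = estimate(forward(0.6) + 0.001 * rng.normal(size=wl.size))
```

Any regressor can replace the kNN step; the key idea is training on forward-model samples so that no explicit analytic inverse is needed.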

  12. Target parameter and error estimation using magnetometry

    NASA Astrophysics Data System (ADS)

    Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.

    The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, the magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramer-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramer-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramer-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters. These parameters are the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.

  13. Cosmological parameter estimation: impact of CMB aberration

    SciTech Connect

    Catena, Riccardo; Notari, Alessio E-mail: notari@ffn.ub.es

    2013-04-01

    The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. the Doppler effect. Both effects distort the temperature multipoles a_lm via a mixing matrix at any l. The common lore when performing a CMB-based cosmological parameter estimation is to consider that Doppler affects only the l = 1 multipole, and to neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contaminations. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed the maps with and without aberration and Doppler effects with an MCMC in a Bayesian framework in order to assess the ability of reconstructing the parameters of the fiducial model. We find that, depending on the specific realization of the simulated data, the parameters can be biased up to one standard deviation for WMAP and almost two standard deviations for Planck. Therefore we conclude that in general it is not a solid assumption to neglect aberration in a CMB-based cosmological parameter estimation.

  14. Multiple emitter location and signal parameter estimation

    NASA Astrophysics Data System (ADS)

    Schmidt, R. O.

    1986-03-01

    Multiple signal classification (MUSIC) techniques for determining the parameters of multiple wavefronts arriving at an antenna array are discussed. A MUSIC algorithm is described which provides asymptotically unbiased estimates of (1) the number of signals, (2) directions of arrival (or emitter locations), (3) strengths and cross correlations among the incident waveforms, and (4) the strength of noise/interference. The use of the algorithm as a multiple-frequency estimator operating on time series is examined as an example. Comparisons of this method with methods based on maximum likelihood and maximum entropy, as well as with conventional beamforming, are presented.
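A compact MUSIC sketch for direction-of-arrival estimation on a uniform line array (array size, angles, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
m, snaps = 8, 200
angles_true = np.deg2rad([-20.0, 35.0])

def steering(theta):
    # uniform line array, half-wavelength element spacing
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

A = np.column_stack([steering(t) for t in angles_true])
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
noise = 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))
X = A @ S + noise

R = X @ X.conj().T / snaps                 # sample covariance
eigval, V = np.linalg.eigh(R)              # eigenvalues ascending
En = V[:, :m - 2]                          # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])
i1 = int(spec.argmax())
far = np.abs(np.arange(grid.size) - i1) > 50   # blank 5 deg around 1st peak
i2 = int(np.argmax(np.where(far, spec, 0.0)))
doa = np.sort(np.rad2deg(grid[[i1, i2]]))
```

Peaks of the pseudospectrum occur where the steering vector is orthogonal to the noise subspace, which is the subspace orthogonality at the heart of MUSIC.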

  15. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests
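The null-space Monte Carlo idea can be sketched with a toy linear model: an SVD splits parameter space into a solution space constrained by the data and a null space that is free to vary (the matrix sizes below are illustrative, not the Culebra model):

```python
import numpy as np

rng = np.random.default_rng(7)
J = rng.normal(size=(3, 5))        # toy sensitivity matrix: 3 obs, 5 params
p_cal = rng.normal(size=5)         # a single "calibrated" parameter set
d_obs = J @ p_cal                  # data that the calibration reproduces

# SVD separates directions the data constrain from those it cannot see
U, s, Vt = np.linalg.svd(J)
V_null = Vt[3:].T                  # 5 - 3 = 2 null-space directions

# NSMC-style ensemble: perturb only along null-space directions
ensemble = [p_cal + V_null @ rng.normal(size=2) for _ in range(100)]
misfits = [np.linalg.norm(J @ p - d_obs) for p in ensemble]
```

Every ensemble member fits the data as well as the single calibration, which is exactly why NSMC results remain anchored to that one calibrated field, the bias the abstract discusses.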

  16. Rapid Compact Binary Coalescence Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Pankow, Chris; Brady, Patrick; O'Shaughnessy, Richard; Ochsner, Evan; Qi, Hong

    2016-03-01

    The first observation run with second generation gravitational-wave observatories will conclude at the beginning of 2016. Given their unprecedented and growing sensitivity, the benefit of prompt and accurate estimation of the orientation and physical parameters of binary coalescences is obvious in its coupling to electromagnetic astrophysics and observations. Popular Bayesian schemes to measure properties of compact object binaries use Markovian sampling to compute the posterior. While very successful, in some cases, convergence is delayed until well after the electromagnetic fluence has subsided thus diminishing the potential science return. With this in mind, we have developed a scheme which is also Bayesian and simply parallelizable across all available computing resources, drastically decreasing convergence time to a few tens of minutes. In this talk, I will emphasize the complementary use of results from low latency gravitational-wave searches to improve computational efficiency and demonstrate the capabilities of our parameter estimation framework with a simulated set of binary compact object coalescences.

  17. CosmoSIS: Modular cosmological parameter estimation

    SciTech Connect

    Zuntz, J.; Paterno, M.; Jennings, E.; Rudd, D.; Manzotti, A.; Dodelson, S.; Bridle, S.; Sehrish, S.; Kowalkowski, J.

    2015-06-09

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis

  18. CosmoSIS: Modular cosmological parameter estimation

    DOE PAGES

    Zuntz, J.; Paterno, M.; Jennings, E.; Rudd, D.; Manzotti, A.; Dodelson, S.; Bridle, S.; Sehrish, S.; Kowalkowski, J.

    2015-06-09

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis

  19. Renal parameter estimates in unrestrained dogs

    NASA Technical Reports Server (NTRS)

    Rader, R. D.; Stevens, C. M.

    1974-01-01

    A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and by the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied to unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, to avoidance reactions, and to unrestrained caging.

  20. Generalized REGression Package for Nonlinear Parameter Estimation

    1995-05-15

    GREG computes modal (maximum-posterior-density) and interval estimates of the parameters in a user-provided Fortran subroutine MODEL, using a user-provided vector OBS of single-response observations or matrix OBS of multiresponse observations. GREG can also select the optimal next experiment from a menu of simulated candidates, so as to minimize the volume of the parametric inference region based on the resulting augmented data set.

  1. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  2. A parameter estimation algorithm for spatial sine testing - Theory and evaluation

    NASA Technical Reports Server (NTRS)

    Rost, R. W.; Deblauwe, F.

    1992-01-01

    This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
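
The sliding-window idea above — re-fitting model parameters over successive data windows so analysis can keep pace with acquisition — can be sketched in a few lines. This is an illustrative toy, not the paper's ARMA/recursive-QR algorithm: each window fits an assumed AR(2) model and is solved directly via 2x2 normal equations instead of a recursive QR update.

```python
# Toy sliding-window estimation of AR(2) coefficients. Illustrative only;
# the model order, window length, and direct normal-equation solve are
# assumptions, not the paper's ARMA/recursive-QR formulation.

def ar2_fit(window):
    """Least-squares fit of y[t] = a1*y[t-1] + a2*y[t-2] over one window."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(window)):
        r1, r2, y = window[t - 1], window[t - 2], window[t]
        s11 += r1 * r1; s12 += r1 * r2; s22 += r2 * r2
        b1 += r1 * y;  b2 += r2 * y
    det = s11 * s22 - s12 * s12            # 2x2 normal-equation solve
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

# Simulate a noise-free AR(2) signal and recover its coefficients.
a1_true, a2_true = 1.5, -0.7
y = [1.0, 1.0]
for _ in range(40):
    y.append(a1_true * y[-1] + a2_true * y[-2])

# Slide a 12-sample window along the record, as one would during acquisition.
estimates = [ar2_fit(y[k:k + 12]) for k in range(0, len(y) - 12, 6)]
```

Each window returns the same pole-defining coefficients here because the data are noise-free; with measurement noise, the window-to-window scatter of the estimates is what the sliding analysis monitors.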

  3. Parameter estimation, model reduction and quantum filtering

    NASA Astrophysics Data System (ADS)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving…

  4. Parameter estimation in tree graph metabolic networks

    PubMed Central

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D.; Groenenboom, Marian; Molenaar, Jaap J.

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes catalyze the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim to reduce this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  5. Parameter estimation in tree graph metabolic networks.

    PubMed

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes catalyze the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim to reduce this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  8. Estimating Infiltration Parameters from Basic Soil Properties

    NASA Astrophysics Data System (ADS)

    van de Genachte, G.; Mallants, D.; Ramos, J.; Deckers, J. A.; Feyen, J.

    1996-05-01

    Infiltration data were collected on two rectangular grids with 25 sampling points each. Both experimental grids were located in tropical rain forest (Guyana), the first in an Arenosol area and the second in a Ferralsol field. Four different infiltration models were evaluated based on their performance in describing the infiltration data. The model parameters were estimated using non-linear optimization techniques. The infiltration behaviour in the Ferralsol was equally well described by the equations of Philip, Green-Ampt, Kostiakov and Horton. For the Arenosol, the equations of Philip, Green-Ampt and Horton were significantly better than the Kostiakov model. Basic soil properties such as textural composition (percentage sand, silt and clay), organic carbon content, dry bulk density, porosity, initial soil water content and root content were also determined for each sampling point of the two grids. The fitted infiltration parameters were then estimated from the other soil properties using multiple regression. Prior to the regression analysis, all predictor variables were transformed to normality. The regression analysis was performed using two information levels. The first information level contained only three texture fractions for the Ferralsol (sand, silt and clay) and four fractions for the Arenosol (coarse, medium and fine sand, and silt and clay). At the first information level the regression models explained up to 60% of the variability of some of the infiltration parameters for the Ferralsol field plot. At the second information level the complete textural analysis was used (nine fractions for the Ferralsol and six for the Arenosol). At the second information level a principal components analysis (PCA) was performed prior to the regression analysis to overcome the problem of multicollinearity among the predictor variables. Regression analysis was then carried out using the orthogonally transformed soil properties as the independent variables. Results for

  9. Recursive least-squares learning algorithms for neural networks

    SciTech Connect

    Lewis, P.S.; Hwang, Jenq-Neng (Dept. of Electrical Engineering)

    1990-01-01

    This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is of order O(N^2), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6331). 14 refs., 3 figs.
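
The conventional RLS recursion referred to above is easy to state for a model that is linear in its parameters. The sketch below is a minimal illustration of that recursion only — the toy model y = theta1*x + theta2, the initial covariance, and the data are assumptions, not the paper's linearized multilayer-perceptron formulation.

```python
# Minimal recursive least-squares (RLS) recursion for a two-parameter
# linear-in-parameters model. Illustrative sketch only.

def rls_step(theta, P, phi, y, lam=1.0):
    """One RLS update. phi is the regressor; P is the inverse correlation matrix."""
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]            # gain vector
    err = y - (theta[0] * phi[0] + theta[1] * phi[1])  # prediction error
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

# Noise-free data from y = 2x + 1; theta converges to [2, 1].
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for x in range(20):
    theta, P = rls_step(theta, P, [float(x), 1.0], 2.0 * x + 1.0)
```

The large initial P encodes an uninformative prior on the parameters; the paper's block-diagonal approximation amounts to maintaining several small matrices like P instead of one full N × N matrix.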

  10. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

    This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and the ensuing software manipulations are analyzed, and related numerical results are also presented.

  11. System and method for motor parameter estimation

    SciTech Connect

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  12. Parameter estimation with Sandage-Loeb test

    SciTech Connect

    Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin

    2014-12-01

    The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of the Lyman-α forest of distant quasars. We discuss the impact of future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w_0w_aCDM models. To avoid potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data points. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ω_m and H_0 in all three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of the SL test could improve the constraints on Ω_m and H_0 by more than 60% for all three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of the SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w_0 and w_a by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of the SL test would help improve the measurement precision of Ω_m, H_0, and w_a by more than 70%, 20%, and 60%, respectively, for the w_0w_aCDM model.

  13. Recursion, Language, and Starlings

    ERIC Educational Resources Information Center

    Corballis, Michael C.

    2007-01-01

    It has been claimed that recursion is one of the properties that distinguishes human language from any other form of animal communication. Contrary to this claim, a recent study purports to demonstrate center-embedded recursion in starlings. I show that the performance of the birds in this study can be explained by a counting strategy, without any…

  14. Estimation of high altitude Martian dust parameters

    NASA Astrophysics Data System (ADS)

    Pabari, Jayesh; Bhalodi, Pinali

    2016-07-01

    Dust devils are known to occur near the Martian surface, mostly during the middle of the southern-hemisphere summer, and they play a vital role in determining the background dust opacity in the atmosphere. A second source of high-altitude Martian dust could be the secondary ejecta caused by impacts on the Martian moons, Phobos and Deimos. Also, the surfaces of the moons are charged positively by ultraviolet rays from the Sun and negatively by space plasma currents. Such surface charging may cause fine grains to be levitated, and these can easily escape the moons. The escaping dust is expected to form dust rings within the orbits of the moons, and therefore also around Mars. One more possible source of high-altitude Martian dust is interplanetary in nature. Due to the continuous supply of dust from various sources, and also due to a kind of feedback mechanism between the rings or tori and the sources, the dust rings or tori can persist over time. Recently, very high-altitude dust at about 1000 km has been found by the MAVEN mission, and it is expected that the dust may be concentrated at about 150 to 500 km. However, it is a mystery how dust reaches such high altitudes. Estimation of dust parameters beforehand is necessary to design an instrument for the detection of high-altitude Martian dust from a future orbiter. In this work, we have studied the dust supply rate responsible primarily for the formation of a dust ring or torus, the lifetime of dust particles around Mars, the dust number density, as well as the effect of solar radiation pressure and Martian oblateness on dust dynamics. The results presented in this paper may be useful to space scientists for understanding the scenario and designing an orbiter-based instrument to measure the dust surrounding Mars. Further work is underway.

  15. The estimation of the constituent densities of the upper atmosphere by means of a recursive filtering algorithm.

    NASA Technical Reports Server (NTRS)

    Mcgarty, T. P.

    1971-01-01

    The structure of the upper atmosphere can be indirectly probed by light in order to determine the global density structure of ozone, aerosols, and neutral atmosphere. Scattered and directly transmitted light is measured by a satellite and is shown to be a nonlinear function of the state which is defined to be a point-wise decomposition of the density profiles. Dynamics are imposed on the state vector and a structured estimation problem is developed. The estimation of these densities is then performed using a linearized Kalman-Bucy filter and a linearized Kushner-Stratonovich filter.
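
The core difficulty described above — the measurements are a nonlinear function of the state, so the filter must be linearized about the current estimate — can be illustrated with a one-dimensional sketch. Everything here (the measurement function h, the noise levels, the "true" state) is an illustrative assumption, not the paper's atmospheric model or its Kalman-Bucy/Kushner-Stratonovich formulation.

```python
import math

# Scalar linearized (extended) Kalman filter: estimate a constant state x
# from measurements z = h(x), with h nonlinear. Illustrative sketch only.

def h(x):
    return math.exp(-x)              # toy nonlinear measurement function

def ekf_step(x_est, p, z, q, r):
    p = p + q                        # predict: random-walk process noise
    H = -math.exp(-x_est)            # dh/dx, linearized at the current estimate
    s = H * p * H + r                # innovation variance
    k = p * H / s                    # Kalman gain
    x_est = x_est + k * (z - h(x_est))
    p = (1.0 - k * H) * p            # covariance update
    return x_est, p

x_true, x_est, p = 0.8, 0.0, 1.0
q, r = 1e-3, 1e-4
for _ in range(50):                  # noise-free measurements, for reproducibility
    x_est, p = ekf_step(x_est, p, h(x_true), q, r)
```

Because the Jacobian H is re-evaluated at each new estimate, the filter behaves like a damped Newton iteration on the measurement equation; the same relinearization step is what distinguishes the linearized filters in the abstract from a plain Kalman filter.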

  16. Parameter and state estimation for articulated heavy vehicles

    NASA Astrophysics Data System (ADS)

    Cheng, Caizhen; Cebon, David

    2011-02-01

    This article discusses algorithms to estimate the parameters and states of articulated heavy vehicles. First, 3- and 5-degree-of-freedom linear vehicle models of a tractor semitrailer are presented. Vehicle parameter estimation methods based on the dual extended Kalman filter and state estimation based on the Kalman filter are then presented. A program of experimental tests on an instrumented heavy goods vehicle is described. Simulation and experimental results showed that the algorithms generate accurate estimates of vehicle parameters and states under most circumstances.

  17. Bayesian parameter estimation by continuous homodyne detection

    NASA Astrophysics Data System (ADS)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2016-09-01

    We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal contribute to the ultimate sensitivity limit of homodyne detection.
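
The Bayesian analysis described above — updating a posterior over an unknown parameter as a continuous measurement record accumulates — has a simple classical analogue. The sketch below is a toy grid-based Bayesian update; the Gaussian likelihood, the grid, and the deterministic "record" are illustrative assumptions, whereas the paper applies Bayes' rule to simulated homodyne photocurrent records of a quantum system.

```python
# Toy grid-based Bayesian parameter estimation. Illustrative sketch only.

theta_grid = [i * 0.01 for i in range(201)]      # candidate parameter values
log_post = [0.0] * len(theta_grid)               # flat prior, in log scale

theta_true, sigma = 1.23, 0.5
record = [theta_true] * 100                      # deterministic record, for reproducibility

for y in record:                                 # sequential Bayesian update
    for i, th in enumerate(theta_grid):
        log_post[i] += -0.5 * ((y - th) / sigma) ** 2  # Gaussian log-likelihood

best = theta_grid[max(range(len(theta_grid)), key=lambda i: log_post[i])]
```

Working in log scale keeps the sequential update numerically stable; with noisy records, the width of the posterior over the grid plays the role that the Fisher information plays in the sensitivity analysis above.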

  18. Recursive Deadbeat Controller Design

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh Q.

    1997-01-01

    This paper presents a recursive algorithm for deadbeat predictive controller design. The method combines the concepts of system identification and deadbeat controller design. It starts with the multi-step output prediction equation and derives the control force in terms of past input and output time histories. The formulation thus derived simultaneously satisfies the system identification and deadbeat controller design requirements. As soon as the coefficient matrices satisfying the output prediction equation are identified, no further work is required to compute the deadbeat control gain matrices. The method can be implemented recursively, just as any typical recursive system identification technique.
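
The deadbeat idea itself — drive the output to zero in a finite number of steps using gains computed directly from identified coefficients — reduces to one line for a first-order plant. This scalar sketch is an assumption-laden illustration of that idea only, not the paper's multi-step predictive formulation.

```python
# Scalar deadbeat control sketch: for the plant y[k+1] = a*y[k] + b*u[k],
# the control u[k] = -(a/b)*y[k] zeros the output in a single step.
# The plant parameters here stand in for identified coefficients.

a, b = 0.9, 0.5          # assumed (identified) plant parameters
y = 3.0                  # nonzero initial output
history = [y]
for _ in range(3):
    u = -(a / b) * y     # deadbeat control law
    y = a * y + b * u    # plant update: a*y - b*(a/b)*y = 0
    history.append(y)
```

For an n-state plant the same cancellation takes n steps and the gain becomes a matrix; the paper's point is that this gain matrix falls out of the identified prediction-equation coefficients with no extra computation.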

  19. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  20. Noniterative estimation of a nonlinear parameter

    NASA Technical Reports Server (NTRS)

    Bergstroem, A.

    1973-01-01

    An algorithm is described which solves for the parameters X = (x1, x2, ..., xm) and p in an approximation problem AX ≈ y(p), where the parameter p occurs nonlinearly in y. Instead of linearization methods, which require an approximate value of p to be supplied as a priori information and which may lead to the finding of local minima, the proposed algorithm finds the global minimum by permitting the use of series expansions of arbitrary order, exploiting the a priori knowledge that the addition of a particular function, corresponding to a new column in A, will not improve the goodness of the approximation.
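
The separable structure behind the abstract — the problem is linear in X but nonlinear in p — is what makes a noniterative, global approach possible: for each candidate p the linear part has a closed-form solution, so one can scan p globally instead of linearizing around a starting guess. The sketch below illustrates that structure with a hypothetical one-column model y(t) = x1·exp(-p·t); the model, grid, and numbers are assumptions, not the paper's series-expansion algorithm.

```python
import math

# Separable least squares by global scan: solve the linear parameter in
# closed form for each candidate p, keep the p with the smallest residual.

t = [0.1 * k for k in range(20)]
p_true, x1_true = 1.7, 2.5
y = [x1_true * math.exp(-p_true * tk) for tk in t]   # noise-free data

def residual(p):
    a = [math.exp(-p * tk) for tk in t]              # model column for this p
    x1 = sum(ai * yi for ai, yi in zip(a, y)) / sum(ai * ai for ai in a)
    return sum((yi - x1 * ai) ** 2 for ai, yi in zip(a, y)), x1

# Coarse global scan over p -- no a priori starting value needed.
best_p, (best_res, best_x1) = min(
    ((p, residual(p)) for p in [0.01 * i for i in range(1, 400)]),
    key=lambda pair: pair[1][0])
```

Because the scan covers the whole admissible range of p, it cannot be trapped in a local minimum the way a linearization started from a poor initial p can.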

  1. Attitude determination and parameter estimation using vector observations - Theory

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.

  2. Estimating Geophysical Parameters From Gravity Data

    NASA Technical Reports Server (NTRS)

    Sjogren, William L.; Wimberly, Ravenel N.

    1988-01-01

    ORBSIM program developed for accurate extraction of parameters of geophysical models from Doppler-radio-tracking data acquired from orbiting planetary spacecraft. Model of proposed planetary structure used in numerical integration along simulated trajectories of spacecraft around primary body. Written in FORTRAN 77.

  3. Fast estimation of space-robots inertia parameters: A modular mathematical formulation

    NASA Astrophysics Data System (ADS)

    Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2016-10-01

    This work proposes a new technique that considerably improves the time and precision needed to identify the "Inertia Parameters (IPs)" of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal", or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process can play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations and associated IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited to "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center-of-mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.

  4. Applications of parameter estimation in the study of spinning airplanes

    NASA Technical Reports Server (NTRS)

Taylor, L. W., Jr.

    1982-01-01

    Spinning airplanes offer challenges to estimating dynamic parameters because of the nonlinear nature of the dynamics. In this paper, parameter estimation techniques are applied to spin flight test data for estimating the error in measuring post-stall angles of attack, deriving Euler angles from angular velocity data, and estimating nonlinear aerodynamic characteristics. The value of the scale factor for post-stall angles of attack agrees closely with that obtained from special wind-tunnel tests. The independently derived Euler angles are seen to be valid in spite of steep pitch angles. Estimates of flight derived nonlinear aerodynamic parameters are evaluated in terms of the expected fit error.

  5. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimations schemes will be contrasted using the NASA Mini-Mast as the focus structure.

  6. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    NASA Astrophysics Data System (ADS)

    Phanomchoeng, Gridsada

    presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.

  7. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.

  8. FUZZY SUPERNOVA TEMPLATES. II. PARAMETER ESTIMATION

    SciTech Connect

    Rodney, Steven A.; Tonry, John L. E-mail: jt@ifa.hawaii.ed

    2010-05-20

Wide-field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper we introduced the Supernova Ontology with Fuzzy Templates (SOFT) method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the Sloan Digital Sky Survey (SDSS) and Supernova Legacy Survey (SNLS) as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing an rms scatter in the residuals of rms_z = 0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored ΛCDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the rms scatter in Hubble diagram residuals is 0.18 mag for the SDSS data and 0.28 mag for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mag for the combined SDSS and SNLS data set. Using Monte Carlo simulations, we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.

  9. Estimating parameters for generalized mass action models with connectivity information

    PubMed Central

    Ko, Chih-Lung; Voit, Eberhard O; Wang, Feng-Sheng

    2009-01-01

    Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out on the constrained
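The constrained-estimation idea above, fitting parameters to data while enforcing steady-state relationships, can be sketched with the simplest case: a linear least-squares objective under a linear equality constraint, solved exactly via the KKT system. The matrices and the "flux sum" constraint here are hypothetical stand-ins, not the paper's pathway models:

```python
import numpy as np

# Fit theta to data (A @ theta ~ b) subject to C @ theta = d, where the
# constraint stands in for a steady-state flux connectivity relation.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 3))                       # hypothetical design matrix
b = A @ np.array([1.0, 2.0, 3.0]) + 0.05 * rng.normal(size=30)
C = np.array([[1.0, 1.0, 1.0]])                    # e.g. fluxes must sum to 6
d = np.array([6.0])

# KKT system for equality-constrained least squares:
# [[A^T A, C^T], [C, 0]] [theta; mu] = [A^T b; d]
n, m = A.shape[1], C.shape[0]
KKT = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
rhs = np.concatenate([A.T @ b, d])
theta = np.linalg.solve(KKT, rhs)[:n]              # discard multiplier mu
```

The constrained solution satisfies the connectivity relation to machine precision, which is the point of the paper's approach: the unconstrained fit may match the data yet violate such relations.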

  10. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
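For the order n = 1 case mentioned above, the parameter estimation reduces to a one-line fit: the solution of dx/dt = -kx is x(t) = x0 exp(-kt), so taking logs linearizes the model and ordinary least squares recovers k from the slope. A minimal sketch with illustrative values:

```python
import numpy as np

# Simulated concentration data for a first-order reaction dx/dt = -k x,
# whose solution is x(t) = x0 * exp(-k t); k and x0 are illustrative.
k_true, x0 = 0.8, 5.0
t = np.linspace(0.0, 4.0, 20)
rng = np.random.default_rng(2)
x = x0 * np.exp(-k_true * t) * np.exp(0.01 * rng.normal(size=t.size))

# ln x = ln x0 - k t, so a degree-1 polynomial fit gives k as minus the slope.
slope, intercept = np.polyfit(t, np.log(x), 1)
k_est = -slope
```

Higher orders (n = 0, 2) linearize similarly after transforming the analytic solution, or can be fit directly by nonlinear least squares on the integrated model.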

  11. Estimation of raindrop size distribution parameters from a dual-parameter spaceborne radar measurement

    NASA Technical Reports Server (NTRS)

    Kozu, Toshiaki; Nakamura, Kenji; Meneghini, Robert

    1991-01-01

    A method to estimate raindrop size distribution (DSD) parameters from a combined Zm profile and path-integrated attenuation is shown, and a test result of the method using the data from an aircraft experiment is presented. The 'semi' dual-parameter (SDP) measurement is employed to estimate DSD parameters using the data obtained from an aircraft experiment conducted by Communications Research Laboratory, Tokyo, in conjunction with NASA. The validity of estimated DSD parameters is examined using measured Ka-band radar reflectivities. The estimated path-averaged N(0) is consistent with the Ka/X Ze ratio, and the use of estimated DSD shows excellent agreement between the rain rates estimated from the X-band and K-band Zes. The feasibility of estimating DSD parameters from space is confirmed.

  12. Adjoint method for estimating Jiles-Atherton hysteresis model parameters

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Hansen, Paul C.; Neustock, Lars T.; Padhy, Punnag; Hesselink, Lambertus

    2016-09-01

A computationally efficient method for identifying the parameters of the Jiles-Atherton hysteresis model is presented. Adjoint analysis is used in conjunction with an accelerated gradient descent optimization algorithm. The proposed method is used to estimate the Jiles-Atherton model parameters of two different materials. The obtained results are found to be in good agreement with the reported values. By comparing with existing methods of model parameter estimation, the proposed method is found to be computationally efficient and fast converging.
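The gradient-descent half of the recipe can be sketched on a toy saturation model rather than the Jiles-Atherton equations: an analytic gradient stands in for the adjoint-computed one (the point of the adjoint method being that it delivers this gradient cheaply for complex models). Model, values, and learning rate are all illustrative:

```python
import numpy as np

# Fit y = a * (1 - exp(-b * t)) to noisy data by gradient descent on the
# sum-of-squares error; the hand-derived gradient plays the role that an
# adjoint computation would play for a model like Jiles-Atherton.
rng = np.random.default_rng(3)
t = np.linspace(0.1, 5.0, 50)
y = 2.0 * (1.0 - np.exp(-0.7 * t)) + 0.01 * rng.normal(size=t.size)

a, b, lr = 1.0, 1.0, 0.1                     # initial guess, step size
for _ in range(3000):
    e = a * (1.0 - np.exp(-b * t)) - y       # residual
    grad_a = 2.0 * np.sum(e * (1.0 - np.exp(-b * t)))
    grad_b = 2.0 * np.sum(e * a * t * np.exp(-b * t))
    a -= lr * grad_a / t.size                # plain (unaccelerated) descent
    b -= lr * grad_b / t.size
```

An accelerated variant (momentum, Nesterov) changes only the update lines; the adjoint machinery changes only how `grad_a` and `grad_b` are obtained.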

  13. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  14. Multidimensional Item Response Theory Parameter Estimation with Nonsimple Structure Items

    ERIC Educational Resources Information Center

    Finch, Holmes

    2011-01-01

    Estimation of multidimensional item response theory (MIRT) model parameters can be carried out using the normal ogive with unweighted least squares estimation with the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach does yield accurate and efficient estimates of item…

  15. Complexity Analysis and Parameter Estimation of Dynamic Metabolic Systems

    PubMed Central

    Tian, Li-Ping; Shi, Zhong-Ke; Wu, Fang-Xiang

    2013-01-01

A metabolic system consists of a number of reactions transforming molecules of one kind into another to provide the energy that living cells need. Based on biochemical reaction principles, dynamic metabolic systems can be modeled by a group of coupled differential equations consisting of parameters, states (concentrations of the molecules involved), and reaction rates. Reaction rates are typically either polynomials or rational functions in the states and constant parameters. As a result, dynamic metabolic systems are described by a group of differential equations that are nonlinear and coupled in both parameters and states. It is therefore challenging to estimate parameters in complex dynamic metabolic systems. In this paper, we propose a method to analyze the complexity of dynamic metabolic systems for parameter estimation. As a result, the estimation of parameters in dynamic metabolic systems is reduced to the estimation of parameters in a group of decoupled rational functions plus polynomials (which we call improper rational functions) or in polynomials. Furthermore, by exploiting the special structure of improper rational functions, we develop an efficient algorithm to estimate their parameters. The proposed method is applied to the estimation of parameters in a dynamic metabolic system. The simulation results show the superior performance of the proposed method. PMID:24233242

  16. Quantitative genetic models for describing simultaneous and recursive relationships between phenotypes.

    PubMed Central

    Gianola, Daniel; Sorensen, Daniel

    2004-01-01

    Multivariate models are of great importance in theoretical and applied quantitative genetics. We extend quantitative genetic theory to accommodate situations in which there is linear feedback or recursiveness between the phenotypes involved in a multivariate system, assuming an infinitesimal, additive, model of inheritance. It is shown that structural parameters defining a simultaneous or recursive system have a bearing on the interpretation of quantitative genetic parameter estimates (e.g., heritability, offspring-parent regression, genetic correlation) when such features are ignored. Matrix representations are given for treating a plethora of feedback-recursive situations. The likelihood function is derived, assuming multivariate normality, and results from econometric theory for parameter identification are adapted to a quantitative genetic setting. A Bayesian treatment with a Markov chain Monte Carlo implementation is suggested for inference and developed. When the system is fully recursive, all conditional posterior distributions are in closed form, so Gibbs sampling is straightforward. If there is feedback, a Metropolis step may be embedded for sampling the structural parameters, since their conditional distributions are unknown. Extensions of the model to discrete random variables and to nonlinear relationships between phenotypes are discussed. PMID:15280252
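The Gibbs-sampling step the abstract describes, drawing each unknown from its closed-form full conditional in turn, can be sketched on the simplest non-trivial target: a bivariate normal, where each conditional is a univariate normal. The correlation value is illustrative and the target is generic, not the genetic model above:

```python
import numpy as np

# Gibbs sampling from a standard bivariate normal with correlation rho:
# x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2), so the sampler
# alternates two closed-form draws, as in a fully recursive system where
# every conditional posterior is available in closed form.
rho = 0.8
rng = np.random.default_rng(4)
x, y = 0.0, 0.0
samples = []
for i in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    if i >= 1000:                 # discard burn-in
        samples.append((x, y))
samples = np.array(samples)
emp_rho = np.corrcoef(samples.T)[0, 1]
```

When a conditional is not available in closed form, as with the feedback parameters above, that single draw is replaced by a Metropolis step and the rest of the sweep is unchanged.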

  17. Language and Recursion

    NASA Astrophysics Data System (ADS)

    Lowenthal, Francis

    2010-11-01

This paper examines whether the recursive structure embedded in some exercises used in the Non Verbal Communication Device (NVCD) approach is actually the factor that enables this approach to favor language acquisition and reacquisition in the case of children with cerebral lesions. To that end, a definition of the principle of recursion as used by logicians is presented. The two opposing approaches to the problem of language development are explained. For many authors such as Chomsky [1] the faculty of language is innate; this is known as the Standard Theory. Other researchers in this field, e.g. Bates and Elman [2], claim that language is entirely constructed by the young child: they thus speak of Language Acquisition. It is also shown that in both cases, a version of the principle of recursion is relevant for human language. The NVCD approach is defined and the results obtained in the domain of language while using this approach are presented: young subjects using this approach acquire a richer language structure or re-acquire such a structure in the case of cerebral lesions. Finally, it is shown that exercises used in this framework imply the manipulation of recursive structures leading to regular grammars. It is thus hypothesized that language development could be favored by using recursive structures with the young child. It could also be the case that the NVCD-like exercises used with children lead to the elaboration of a regular language, as defined by Chomsky [3], which could be sufficient for language development but would not require full recursion. This double claim could reconcile Chomsky's approach with psychological observations made by adherents of the Language Acquisition approach, if confirmed by research combining the use of NVCDs, psychometric methods and the use of Neural Networks. This paper thus suggests that a research group oriented towards this problem should be organized.

  18. Estimating parameter of influenza transmission using regularized least square

    NASA Astrophysics Data System (ADS)

    Nuraini, N.; Syukriah, Y.; Indratno, S. W.

    2014-02-01

The transmission of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is governed by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least squares method, where the Finite Element Method and the Euler Method are used to approximate the solution of the SIR differential equations. Newly reported influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of infected people is measured by the coefficient of correlation. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
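The regularized least squares step at the heart of the method above has a closed form in the linear(ized) case; a minimal Tikhonov-regularization sketch with a generic design matrix (not the SIR/FEM pipeline of the abstract):

```python
import numpy as np

# Regularized (Tikhonov/ridge) least squares: minimize
#   ||A x - b||^2 + lam * ||x||^2,
# solved in closed form as x = (A^T A + lam I)^{-1} A^T b. Here A stands in
# for the discretized model operator; lam damps noise amplification.
rng = np.random.default_rng(5)
A = rng.normal(size=(40, 4))                 # hypothetical design matrix
x_true = np.array([0.5, -1.0, 2.0, 0.0])     # illustrative parameters
b = A @ x_true + 0.05 * rng.normal(size=40)  # noisy observations

lam = 0.1                                    # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ b)
```

With lam = 0 this reduces to ordinary least squares; increasing lam trades a small bias for stability when A is ill-conditioned, which is the usual situation when the operator comes from discretizing a differential equation.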

  19. Parameter estimation in complex flows with chemical reactions

    NASA Astrophysics Data System (ADS)

    Robinson, Daniel J.

    The estimation of unknown parameters in engineering and scientific models continues to be of great importance in order to validate them to available experimental data. These parameters of concern cannot be known beforehand, but must be measured experimentally, variables such as chemical species concentrations, pressures, or temperatures as examples. Particularly, in chemically reacting flows, the estimation of kinetic rate parameters from experimentally determined values is in great demand and not well understood. New parameter optimization algorithms have been developed from a Gauss-Newton formulation for the estimation of reaction rate parameters in several different complex flow applications. A zero-dimensional parameter estimation methodology was used in conjunction with a parameter sensitivity study and then applied to three-dimensional flow models. This new parameter estimation technique was applied to three-dimensional models for chemical vapor deposition of silicon carbide and gallium arsenide semiconductor materials. The parameter estimation for silicon carbide for several different operating points was in close agreement to experiment. The parameter estimation for gallium arsenide proved to be very accurate, being within four percent of the experimental data. New parameter estimation algorithms were likewise created for a three-dimensional multiphase model for methanol spray combustion. The kinetic rate parameters delivered results in close agreement to experiment for profiles of combustion species products. In addition, a new parameter estimation method for the determination of spray droplet sizes and velocities is presented. The results for methanol combustion chemical species profiles are in good agreement to experiment for several different droplet sizes. Lastly, the parameter estimation method was extended to a bio-kinetic application, namely mitochondrial cells, that are cardiac or respiratory cells found in animals and humans. 
The results for the

  20. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
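The simulation-then-batch-estimation loop described above, generate data from the model, add Gaussian sensor noise, estimate parameters and their accuracy in one batch solve, can be sketched with a generic linear measurement model (the matrix and noise level are illustrative, not the shear-diffusion model):

```python
import numpy as np

# Batch least-squares estimation from simulated noisy measurements:
# y = H @ theta + noise, where H stands in for the sensitivity of the
# simulated concentration readings to the transport-model parameters.
rng = np.random.default_rng(6)
H = rng.normal(size=(100, 2))               # hypothetical sensitivity matrix
theta_true = np.array([3.0, 0.5])           # illustrative parameters
sigma = 0.1                                 # Gaussian sensor-noise level
y = H @ theta_true + sigma * rng.normal(size=100)

theta_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
# Accuracy of the batch estimate: covariance is sigma^2 * (H^T H)^{-1},
# which is how sensor-array trade-offs (resolution, number of readings)
# translate into parameter-estimate accuracies.
cov = sigma**2 * np.linalg.inv(H.T @ H)
std_err = np.sqrt(np.diag(cov))
```

Changing the number of rows of `H` (more sensor readings) shrinks `std_err` roughly as one over the square root of the sample count, which is the mechanism behind the trade study in the abstract.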

  1. Dynamic noise, chaos and parameter estimation in population biology.

    PubMed

    Stollenwerk, N; Aguiar, M; Ballesteros, S; Boto, J; Kooi, B; Mateus, L

    2012-04-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models such as multi-strain dynamics to describe the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  2. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  3. Recursion in Aphasia

    ERIC Educational Resources Information Center

    Banreti, Zoltan

    2010-01-01

    This study investigates how aphasic impairment impinges on syntactic and/or semantic recursivity of human language. A series of tests has been conducted with the participation of five Hungarian speaking aphasic subjects and 10 control subjects. Photographs representing simple situations were presented to subjects and questions were asked about…

  4. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  5. Recursion, Computers and Art

    ERIC Educational Resources Information Center

    Kemp, Andy

    2007-01-01

    "Geomlab" is a functional programming language used to describe pictures that are made up of tiles. The beauty of "Geomlab" is that it introduces students to recursion, a very powerful mathematical concept, through a very simple and enticing graphical environment. Alongside the software is a series of eight worksheets which lead into producing…

  6. Recursively minimally-deformed oscillators

    NASA Astrophysics Data System (ADS)

    Katriel, J.; Quesne, C.

    1996-04-01

A recursive deformation of the boson commutation relation is introduced. Each step consists of a minimal deformation of a commutator [a, a†] = f_k(...; n̂) into [a, a†]_{q_{k+1}} = f_k(...; n̂), where ... stands for the set of deformation parameters that f_k depends on, followed by a transformation into the commutator [a, a†] = f_{k+1}(..., q_{k+1}; n̂) to which the deformed commutator is equivalent within the Fock space. Starting from the harmonic oscillator commutation relation [a, a†] = 1 we obtain the Arik-Coon and Macfarlane-Biedenharn oscillators at the first and second steps, respectively, followed by a sequence of multiparameter generalizations. Several other types of deformed commutation relations related to the treatment of integrable models and to parastatistics are also obtained. The "generic" form consists of a linear combination of exponentials of the number operator, and the various recursive families can be classified according to the number of free linear parameters involved, which depends on the form of the initial commutator.
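The first step of the recursion can be written out explicitly; the following is a standard account of the Arik-Coon q-oscillator used as an illustration, not taken verbatim from the paper:

```latex
% Step 0: the undeformed boson relation
[a, a^{\dagger}] = 1 .
% Step 1: minimal deformation with a single parameter q_1
% (the Arik-Coon oscillator):
[a, a^{\dagger}]_{q_1} \equiv a\,a^{\dagger} - q_1\, a^{\dagger} a = 1 .
% Within the Fock space, where a^{\dagger}a acts as the q-number
% [\hat{n}]_{q_1}, this is equivalent to the undeformed commutator
[a, a^{\dagger}] = f_1(q_1; \hat{n}) = q_1^{\hat{n}} ,
% an exponential of the number operator, consistent with the "generic"
% form quoted above; deforming this commutator again with a parameter
% q_2 is what yields the Macfarlane-Biedenharn oscillator at step two.
```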

  7. Recursive Implementations of the Consider Filter

    NASA Technical Reports Server (NTRS)

Zanetti, Renato; D'Souza, Chris

    2012-01-01

    One method to account for parameters errors in the Kalman filter is to consider their effect in the so-called Schmidt-Kalman filter. This work addresses issues that arise when implementing a consider Kalman filter as a real-time, recursive algorithm. A favorite implementation of the Kalman filter as an onboard navigation subsystem is the UDU formulation. A new way to implement a UDU consider filter is proposed. The non-optimality of the recursive consider filter is also analyzed, and a modified algorithm is proposed to overcome this limitation.

  8. Kalman filter data assimilation: Targeting observations and parameter estimation

    SciTech Connect

Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
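Joint state and parameter estimation with a Kalman filter is often done by state augmentation: append the unknown parameter to the state vector and filter both together. A minimal linear sketch (a scalar state drifting at an unknown constant rate; toy values, and not the LETKF/hybrid ensemble scheme of the abstract):

```python
import numpy as np

# Augmented state z = [x, b]: x drifts by the unknown rate b each step,
# b is modeled as constant, and only x is observed (noisily).
rng = np.random.default_rng(7)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # x += b; b unchanged
H = np.array([[1.0, 0.0]])               # observe x only
Q = np.diag([1e-4, 0.0])                 # small process noise on x
R = np.array([[0.1]])                    # measurement noise variance

b_true = 0.3
xs = np.cumsum(np.full(100, b_true))     # true trajectory
ys = xs + np.sqrt(R[0, 0]) * rng.normal(size=100)

z = np.zeros(2)                          # initial [x, b] guess
P = np.eye(2) * 10.0                     # vague initial covariance
for y in ys:
    z = F @ z                            # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    z = z + K @ (np.array([y]) - H @ z)  # update
    P = (np.eye(2) - K @ H) @ P
b_est = z[1]
```

The filter pulls the parameter estimate `b_est` toward the true drift rate even though `b` is never observed directly, because the model couples it to the observed state.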

  9. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  10. Parameter estimation in deformable models using Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Haynor, David R.; Sampson, Paul D.; Kim, Yongmin

    1997-04-01

    Deformable models have gained much popularity recently for many applications in medical imaging, such as image segmentation, image reconstruction, and image registration. Such models are very powerful because various kinds of information can be integrated together in an elegant statistical framework. Each such piece of information is typically associated with a user-defined parameter. The values of these parameters can have a significant effect on the results generated using these models. Despite the popularity of deformable models for various applications, not much attention has been paid to the estimation of these parameters. In this paper we describe systematic methods for the automatic estimation of these deformable model parameters. These methods are derived by posing the deformable models as a Bayesian inference problem. Our parameter estimation methods use Markov chain Monte Carlo methods for generating samples from highly complex probability distributions.
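The sampling engine behind such Bayesian parameter estimation can be sketched with the simplest MCMC algorithm, random-walk Metropolis, on a toy one-parameter posterior; a deformable-model energy would replace `log_post`, and all numbers here are illustrative:

```python
import numpy as np

# Random-walk Metropolis sampling of a parameter posterior: Gaussian
# likelihood around observed data, flat prior.
rng = np.random.default_rng(8)
data = rng.normal(1.5, 1.0, size=50)          # observations, true mean 1.5

def log_post(theta):
    return -0.5 * np.sum((data - theta) ** 2)  # log-likelihood, flat prior

theta, chain = 0.0, []
for i in range(20000):
    prop = theta + 0.3 * rng.normal()          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                           # accept; else keep theta
    if i >= 2000:                              # discard burn-in
        chain.append(theta)
post_mean = np.mean(chain)
```

For a flat prior the posterior mean coincides with the sample mean of the data, which gives a quick correctness check on the chain; in the deformable-model setting the same loop yields samples over model parameters whose posterior is far too complex for closed-form treatment.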

  11. Kalman filter application for distributed parameter estimation in reactor systems

    SciTech Connect

    Martin, R.P.; Edwards, R.M.

    1996-07-01

    An application of the Kalman filter has been developed for the real-time identification of a distributed parameter in a nuclear power plant. This technique can be used to improve numerical method-based best-estimate simulation of complex systems such as nuclear power plants. The application to a reactor system involves a unique modal model that approximates physical components, such as the reactor, as a coupled oscillator, i.e., a modal model with coupled modes. In this model both states and parameters are described by an orthogonal expansion. The Kalman filter with the sequential least-squares parameter estimation algorithm was used to estimate the modal coefficients of all states and one parameter. Results show that this state feedback algorithm is an effective way to parametrically identify a distributed parameter system in the presence of uncertainties.

  12. Estimating convective boundary layer parameters for diffusion applications. Final report

    SciTech Connect

    Weil, J.

    1983-04-01

    Simple methods are presented for estimating those boundary layer parameters most important in controlling turbulence and diffusion within the convective boundary layer (CBL). These parameters include: surface heat flux, friction velocity, mean wind speed, and boundary layer height. Emphasis is on estimation methods requiring only routinely available data such as may exist at local airports. We focus on the CBL because the main diffusion application of interest is tall stacks, which generally produce their highest ground-level concentrations during convective conditions.

  13. Identification of Neurofuzzy models using GTLS parameter estimation.

    PubMed

    Jakubek, Stefan; Hametner, Christoph

    2009-10-01

    In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application. PMID:19336320

  14. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    PubMed

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

We performed simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088

  15. Generalized Limits for Single-Parameter Quantum Estimation

    SciTech Connect

    Boixo, Sergio; Flammia, Steven T.; Caves, Carlton M.; Geremia, JM

    2007-03-02

We develop generalized bounds for quantum single-parameter estimation problems for which the coupling to the parameter is described by intrinsic multisystem interactions. For a Hamiltonian with k-system parameter-sensitive terms, the quantum limit scales as 1/N^k, where N is the number of systems. These quantum limits remain valid when the Hamiltonian is augmented by any parameter-independent interaction among the systems and when adaptive measurements via parameter-independent coupling to ancillas are allowed.

  16. Simultaneous estimation of parameters in the bivariate Emax model.

    PubMed

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.

  17. A comparison of approximate interval estimators for the Bernoulli parameter

    NASA Technical Reports Server (NTRS)

    Leemis, Lawrence; Trivedi, Kishor S.

    1993-01-01

    The goal of this paper is to compare the accuracy of two approximate confidence interval estimators for the Bernoulli parameter p. The approximate confidence intervals are based on the normal and Poisson approximations to the binomial distribution. Charts are given to indicate which approximation is appropriate for certain sample sizes and point estimators.
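The two interval estimators compared in this abstract can be sketched as follows; this is illustrative only, and the z-based Poisson interval below is one common variant of the Poisson approximation:

```python
import math

def bernoulli_ci_normal(k, n, z=1.96):
    """Normal approximation to the binomial:
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def bernoulli_ci_poisson(k, n, z=1.96):
    """Poisson approximation (suitable for small p): the success count is
    treated as Poisson(n*p), giving the interval (k +/- z*sqrt(k)) / n."""
    half = z * math.sqrt(k)
    return max(0.0, (k - half) / n), min(1.0, (k + half) / n)

# 8 successes in 1000 trials: a rare-event case where both apply.
lo_n, hi_n = bernoulli_ci_normal(8, 1000)
lo_p, hi_p = bernoulli_ci_poisson(8, 1000)
```

For small p and large n the two intervals nearly coincide, which is the regime the paper's charts are designed to map out.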

  18. On the Nature of SEM Estimates of ARMA Parameters.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  19. A Simple Technique for Estimating Latent Trait Mental Test Parameters

    ERIC Educational Resources Information Center

    Jensema, Carl

    1976-01-01

    A simple and economical method for estimating initial parameter values for the normal ogive or logistic latent trait mental test model is outlined. The accuracy of the method in comparison with maximum likelihood estimation is investigated through the use of Monte-Carlo data. (Author)

  20. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…

  1. Synchronization-based parameter estimation from time series

    NASA Astrophysics Data System (ADS)

    Parlitz, U.; Junge, L.; Kocarev, L.

    1996-12-01

    The parameters of a given (chaotic) dynamical model are estimated from scalar time series by adapting a computer model until it synchronizes with the given data. This parameter identification method is applied to numerically generated and experimental data from Chua's circuit.
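A discrete-time caricature of the idea — adapting a model parameter so a driven copy tracks the measured series, here for a logistic map rather than Chua's circuit, with a gradient step on the one-step synchronization error:

```python
import numpy as np

# "Measured" scalar time series from a logistic map with unknown parameter r.
r_true = 3.7
x = np.empty(2000)
x[0] = 0.3
for n in range(1999):
    x[n + 1] = r_true * x[n] * (1 - x[n])

# The model is driven by the measured x[n] (complete-replacement coupling);
# r is adapted by gradient descent on the squared synchronization error.
r, eta = 3.0, 0.5
for n in range(1999):
    phi = x[n] * (1 - x[n])          # regressor for the parameter
    err = x[n + 1] - r * phi         # one-step synchronization error
    r += eta * err * phi             # gradient step on err**2 / 2
```

With noise-free data this converges to the true parameter; the paper's continuous-time scheme uses the same principle with a synchronizing computer model of the circuit.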

  2. Role of model selection criteria in geostatistical inverse estimation of statistical data- and model-parameters

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2011-07-01

We analyze theoretically the ability of model quality (sometimes termed information or discrimination) criteria such as the negative log likelihood NLL, Bayesian criteria BIC and KIC and information theoretic criteria AIC, AICc, and HIC to estimate (1) the parameter vector of the variogram of hydraulic log conductivity (Y = ln K), and (2) statistical parameters proportional to head and log conductivity measurement error variances, respectively, in the context of geostatistical groundwater flow inversion. Our analysis extends the work of Hernandez et al. (2003, 2006) and Riva et al. (2009), who developed nonlinear stochastic inverse algorithms that allow conditioning estimates of steady state and transient hydraulic heads, fluxes and their associated uncertainty on information about conductivity and head data collected in a randomly heterogeneous confined aquifer. Their algorithms are based on recursive numerical approximations of exact nonlocal conditional equations describing the mean and (co)variance of groundwater flow. Log conductivity is parameterized geostatistically based on measured values at discrete locations and unknown values at discrete "pilot points." Optionally, the maximum likelihood function on which the inverse estimation of Y at pilot points is based may include a regularization term reflecting prior information about Y. The relative weight assigned to this term, together with the two error-variance parameters, is evaluated separately from other model parameters to avoid bias and instability. This evaluation is done on the basis of criteria such as NLL, KIC, BIC, HIC, AIC, and AICc. We demonstrate theoretically that, whereas all six criteria make it possible to estimate the variogram parameters, KIC alone allows one to validly estimate the error-variance parameters (and thus the regularization weight). We illustrate this discriminatory power of KIC numerically by using a differential evolution genetic search algorithm to minimize it in the context of a two-dimensional steady state groundwater flow problem.

  3. Global parameter estimation methods for stochastic biochemical systems

    PubMed Central

    2010-01-01

Background The importance of stochasticity in cellular processes having low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single-molecule levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies

  4. Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine

    2002-01-01

The design of a linear parameter varying (LPV) controller for an aircraft at actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.

  5. Parameter estimation on gravitational waves from multiple coalescing binaries

    SciTech Connect

    Mandel, Ilya

    2010-04-15

    Future ground-based and space-borne interferometric gravitational-wave detectors may capture between tens and thousands of binary coalescence events per year. There is a significant and growing body of work on the estimation of astrophysically relevant parameters, such as masses and spins, from the gravitational-wave signature of a single event. This paper introduces a robust Bayesian framework for combining the parameter estimates for multiple events into a parameter distribution of the underlying event population. The framework can be readily deployed as a rapid post-processing tool.

  6. Estimation of Defect's Geometric Parameters with a Thermal Method

    NASA Astrophysics Data System (ADS)

    Protasov, A.; Sineglazov, V.

    2003-03-01

The problem of estimating flaw parameters has been solved in two stages. At the first stage, the relationship between the temperature difference on a heated sample's surface and the geometrical parameters of the flaw was estimated. For this purpose we solved a direct heat conduction problem for various combinations of the geometrical sizes of the flaw. At the second stage, we solved an inverse heat conduction problem using the H-infinity method of identification. The results have shown good convergence to the real parameters.

  7. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.

  8. Iterative methods for distributed parameter estimation in parabolic PDE

    SciTech Connect

    Vogel, C.R.; Wade, J.G.

    1994-12-31

The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  9. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  10. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte Carlo methods.

  11. NEFDS contamination model parameter estimation of powder contaminated surfaces

    NASA Astrophysics Data System (ADS)

    Gibbs, Timothy J.; Messinger, David W.

    2016-05-01

Hyperspectral signatures of powder-contaminated surfaces are challenging to characterize due to intimate mixing between materials. Most radiometric models have difficulty recreating these signatures due to non-linear interactions between particles with different physical properties. The Nonconventional Exploitation Factors Data System (NEFDS) Contamination Model is capable of recreating longwave hyperspectral signatures at any contamination mixture amount, but only for a limited selection of materials currently in the database. A method has been developed to invert the NEFDS model and perform parameter estimation on emissivity measurements from a variety of powdered materials on substrates. This model was chosen for its potential to accurately determine contamination coverage density as a parameter in the inverted model. Emissivity data were measured using a Designs and Prototypes Fourier transform infrared spectrometer (model 102) for different levels of contamination. Temperature-emissivity separation was performed to convert the data from measured radiance to estimated surface emissivity. Emissivity curves were then input into the inverted model and parameters were estimated for each spectral curve. A comparison of measured data with extrapolated model emissivity curves using the estimated parameter values assessed the performance of the inverted NEFDS contamination model. This paper presents the initial results of the experimental campaign and the estimated surface coverage parameters.

  12. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
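The flavor of moment-based estimation can be shown on the simplest related case — an i.i.d. log-normal sample, where the first two raw moments identify the parameters in closed form (a toy analogue, far simpler than the paper's GMM procedure for cascades):

```python
import numpy as np

# For X log-normal with log-mean mu and log-s.d. sigma:
#   E[X]   = exp(mu + sigma^2 / 2)
#   E[X^2] = exp(2*mu + 2*sigma^2)
# so  sigma^2 = ln(m2 / m1^2)  and  mu = ln(m1) - sigma^2 / 2,
# with m1, m2 the sample raw moments.
rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.5, sigma=0.3, size=200000)

m1, m2 = x.mean(), (x ** 2).mean()
sigma2_hat = np.log(m2 / m1 ** 2)
mu_hat = np.log(m1) - sigma2_hat / 2.0
```

A full GMM estimator would stack more moment conditions than parameters and weight them by an estimate of their covariance; the closed-form case above is the just-identified special case.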

  13. Evaluating parasite densities and estimation of parameters in transmission systems.

    PubMed

    Heinzmann, D; Torgerson, P R

    2008-09-01

Mathematical modelling of parasite transmission systems can provide useful information about host-parasite interactions and biology and about parasite population dynamics. In addition, good predictive models may assist in designing control programmes to reduce the burden of human and animal disease. Model building is only the first part of the process. These models then need to be confronted with data to obtain parameter estimates, and the accuracy of these estimates has to be evaluated. Estimation of parasite densities is central to this. Parasite density estimates can include the proportion of hosts infected with parasites (prevalence) or estimates of the parasite biomass within the host population (abundance or intensity estimates). Parasite density estimation is often complicated by highly aggregated distributions of parasites within the hosts, which creates additional challenges when calculating transmission parameters. Using Echinococcus spp. as a model organism, this manuscript gives a brief overview of the types of descriptors of parasite densities, how to estimate them, and the use of these estimates in a transmission model.
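A standard moment-based sketch of the density descriptors named in this abstract — assuming a negative binomial model for the aggregated counts, which is a common but not universal choice:

```python
import numpy as np

def nb_moment_estimates(counts):
    """Moment estimates for aggregated parasite counts under a negative
    binomial model: mean abundance m, aggregation parameter
    k = m^2 / (s^2 - m) (small k => strong aggregation), and prevalence."""
    counts = np.asarray(counts, dtype=float)
    m = counts.mean()
    s2 = counts.var(ddof=1)
    k = m ** 2 / (s2 - m) if s2 > m else np.inf   # inf => no overdispersion
    prevalence = (counts > 0).mean()
    return m, k, prevalence

# Synthetic burden data: mean 5 parasites per host, k = 0.5 (highly aggregated).
rng = np.random.default_rng(4)
k_true, m_true = 0.5, 5.0
p = k_true / (k_true + m_true)                    # numpy's NB parameterization
counts = rng.negative_binomial(k_true, p, size=100000)
m_hat, k_hat, prev = nb_moment_estimates(counts)
```

Prevalence, abundance (m) and aggregation (k) are exactly the kinds of descriptors that feed into transmission parameter estimation for Echinococcus-type systems.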

  14. Model-Based MR Parameter Mapping with Sparsity Constraints: Parameter Estimation and Performance Bounds

    PubMed Central

    Zhao, Bo; Lam, Fan; Liang, Zhi-Pei

    2014-01-01

    MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520

  15. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

In this study a new technique is used to derive dynamical parameters out of atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multi-trace-gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.

  16. Parameter estimation of general regression neural network using Bayesian approach

    NASA Astrophysics Data System (ADS)

    Choir, Achmad Syahrul; Prasetyo, Rindang Bangun; Ulama, Brodjol Sutijo Suprih; Iriawan, Nur; Fitriasari, Kartika; Dokhi, Mohammad

    2016-02-01

General Regression Neural Networks (GRNN) have been applied to a large number of forecasting/prediction problems. Generally, there are two types of GRNN: GRNN based on kernel density estimation, and Mixture Based GRNN (MBGRNN) based on an adaptive mixture model. The main problem in GRNN modeling lies in how its parameters are estimated. In this paper, we propose a Bayesian approach, computed with Markov Chain Monte Carlo (MCMC) algorithms, for estimating the MBGRNN parameters. The method is applied in a simulation study, where its performance is measured using MAPE, MAE and RMSE. The application of the Bayesian method to estimate MBGRNN parameters using MCMC is straightforward, but it needs many iterations to achieve convergence.
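The kernel-density variant of GRNN amounts to Nadaraya-Watson regression; a minimal sketch, with an assumed Gaussian kernel and a hand-picked bandwidth, not the paper's MBGRNN or its Bayesian estimation:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    """Kernel-density GRNN (Nadaraya-Watson): the prediction at each query
    point is a Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Noisy sine regression as a demo.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(200)
Xq = np.array([[0.25], [0.75]])
pred = grnn_predict(X, y, Xq, sigma=0.05)   # near +1 and -1 respectively
```

The single smoothing parameter `sigma` is exactly the kind of quantity whose estimation (here hand-picked) the paper addresses with a Bayesian/MCMC treatment.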

  17. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models.

    PubMed

    Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173

  18. Accurate Parameter Estimation for Unbalanced Three-Phase System

    PubMed Central

    Chen, Yuan

    2014-01-01

Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056

  19. Accurate parameter estimation for unbalanced three-phase system.

    PubMed

    Chen, Yuan; So, Hing Cheung

    2014-01-01

Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
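The first step described in these two records — converting the three-phase waveforms into a pair of orthogonal signals — is the amplitude-invariant Clarke (αβ) transformation; a minimal sketch of that step alone, without the NLS/Newton-Raphson estimator itself:

```python
import numpy as np

def alpha_beta_transform(va, vb, vc):
    """Amplitude-invariant Clarke transformation: maps three-phase samples
    to a pair of orthogonal signals (v_alpha, v_beta)."""
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / np.sqrt(3.0)
    return v_alpha, v_beta

# Balanced three-phase test signal at 50 Hz, sampled at 5 kHz.
f, fs, A = 50.0, 5000.0, 1.0
t = np.arange(0, 0.04, 1.0 / fs)
va = A * np.cos(2 * np.pi * f * t)
vb = A * np.cos(2 * np.pi * f * t - 2 * np.pi / 3)
vc = A * np.cos(2 * np.pi * f * t + 2 * np.pi / 3)
v_alpha, v_beta = alpha_beta_transform(va, vb, vc)

# For a balanced system, v_alpha + j*v_beta traces a circle of radius A,
# so the instantaneous amplitude is constant and the phase is arctan2-readable.
amp = np.sqrt(v_alpha ** 2 + v_beta ** 2)
```

Frequency, phase and voltage estimation then operate on this orthogonal pair; unbalance distorts the circle into an ellipse, which is what the NLS estimator must handle.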

  20. AMT-200S Motor Glider Parameter and Performance Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2011-01-01

    Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.

  1. Developing an Interpretation of Item Parameters for Personality Items: Content Correlates of Parameter Estimates.

    ERIC Educational Resources Information Center

    Zickar, Michael J.; Ury, Karen L.

    2002-01-01

    Attempted to relate content features of personality items to item parameter estimates from the partial credit model of E. Muraki (1990) by administering the Adjective Checklist (L. Goldberg, 1992) to 329 undergraduates. As predicted, the discrimination parameter was related to the item subtlety ratings of personality items but the level of word…

  2. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

One of the keys to successful remote sensing of vegetation is the ability to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle- or satellite-borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.

  3. Estimation of octanol/water partition coefficients using LSER parameters

    USGS Publications Warehouse

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

The logarithms of octanol/water partition coefficients, logKow, were regressed against the linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for logKow was 0.49. The regression equation was then used to estimate logKow for a test set of 146 chemicals which included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, but only moderate accuracy should be expected.
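The regression step described above can be sketched in a few lines with ordinary least squares; the descriptors, coefficients, and sample sizes below are synthetic stand-ins for the actual LSER variables and published fit, used only to show the mechanics.

```python
import numpy as np

# Fit log Kow against hypothetical LSER-style descriptors by least squares,
# then check how well the regression recovers the generating coefficients.
rng = np.random.default_rng(0)

n = 200
X = rng.normal(size=(n, 4))             # 4 hypothetical solvation descriptors
X = np.hstack([np.ones((n, 1)), X])     # intercept column
true_coef = np.array([0.2, 2.6, -0.8, -3.4, -0.2])
log_kow = X @ true_coef + rng.normal(scale=0.05, size=n)  # noisy "data"

coef, *_ = np.linalg.lstsq(X, log_kow, rcond=None)
residual_sd = np.std(log_kow - X @ coef)  # analogue of the reported 0.49
```

A held-out test set, as in the abstract, would simply reuse `coef` on new descriptor rows.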

  4. Estimation of the elastic Earth parameters from the SLR technique

    NASA Astrophysics Data System (ADS)

    Rutkowska, Milena

The global elastic Earth parameters (Love and Shida numbers) associated with tide variations for the satellite and stations are estimated from Satellite Laser Ranging (SLR) data. The study is based on satellite observations taken by the global network of ground stations during the period from January 1, 2005 until January 1, 2007 for monthly orbital arcs of the Lageos 1 satellite. The observation equations contain unknowns for the orbital arcs, some constants, and the elastic Earth parameters which describe tide variations. The adjusted values are discussed and compared with geophysical estimates of the Love numbers. All computations were performed employing the NASA software GEODYN II (Eddy et al. 1990).

  5. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
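As a minimal illustration of the single diode model, two of its parameters can be recovered in closed form from just the short-circuit and open-circuit points, under the simplifying assumptions Rs = 0 and Rsh → ∞; the abstract's method fits all parameters to full I-V curves, and the module values below (8 A, 37 V, 60 cells) are illustrative, not from the paper.

```python
import math

def fit_ideal_diode(i_sc, v_oc, n=1.2, v_th=0.02585, cells=60):
    """Return (IL, I0) such that I(0) = i_sc and I(v_oc) = 0 for an
    ideal single diode model (Rs = 0, Rsh -> infinity)."""
    nvt = n * v_th * cells                 # module-level thermal voltage
    i_l = i_sc                             # photocurrent equals Isc when Rs = 0
    i_0 = i_sc / math.expm1(v_oc / nvt)    # forces I(v_oc) = 0
    return i_l, i_0

def current(v, i_l, i_0, n=1.2, v_th=0.02585, cells=60):
    """Ideal single diode I-V relation: I = IL - I0*(exp(V/nVt) - 1)."""
    return i_l - i_0 * math.expm1(v / (n * v_th * cells))

i_l, i_0 = fit_ideal_diode(i_sc=8.0, v_oc=37.0)
```

By construction the fitted curve reproduces the two anchor points; fitting Rs, Rsh, and n as well requires the whole-curve optimization the abstract describes.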

  6. Estimation of regional pulmonary perfusion parameters from microfocal angiograms

    NASA Astrophysics Data System (ADS)

    Clough, Anne V.; Al-Tinawi, Amir; Linehan, John H.; Dawson, Christopher A.

    1995-05-01

An important application of functional imaging is the estimation of regional blood flow and volume using residue detection of vascular indicators. An indicator-dilution model applicable to tissue regions distal from the inlet site was developed. Theoretical methods for determining regional blood flow, volume, and mean transit time parameters from time-absorbance curves arise from this model. The robustness of the parameter estimation methods was evaluated using a computer-simulated vessel network model. Flow through arterioles, networks of capillaries, and venules was simulated. Parameter identification and practical implementation issues were addressed. The shape of the inlet concentration curve and moderate amounts of random noise did not affect the ability of the method to recover accurate parameter estimates. The parameter estimates degraded in the presence of significant dispersion of the measured inlet concentration curve as it traveled through arteries upstream from the microvascular region. The methods were applied to image data obtained using microfocal x-ray angiography to study the pulmonary microcirculation. Time-absorbance curves were acquired from a small feeding artery, the surrounding microvasculature and a draining vein of an isolated dog lung as contrast material passed through the field-of-view. Changes in regional microvascular volume were determined from these curves.

  7. Estimation of atmospheric turbulence parameters with wave front sensor data

    NASA Astrophysics Data System (ADS)

    Iroshnikov, N. G.; Koryabin, A. V.; Larichev, A. V.; Shmalhausen, V. I.; Andreeva, M. S.

    2012-11-01

Estimates of atmospheric turbulence parameters can be calculated on the basis of data obtained with a wave front sensor. The method described is based on decomposition of the phase fluctuations into a Zernike series and analysis of the statistics of the decomposition coefficients. Estimates of the turbulence outer scale L0 and the refractive index structure constant Cn2 obtained in experiments with turbulence in a water cell showed good agreement with previous results.

  8. Parameter estimation for the Euler-Bernoulli-beam

    NASA Technical Reports Server (NTRS)

    Graif, E.; Kunisch, K.

    1984-01-01

    An approximation involving cubic spline functions for parameter estimation problems in the Euler-Bernoulli-beam equation (phrased as an optimization problem with respect to the parameters) is described and convergence is proved. The resulting algorithm was implemented and several of the test examples are documented. It is observed that the use of penalty terms in the cost functional can improve the rate of convergence.

  9. Human ECG signal parameters estimation during controlled physical activity

    NASA Astrophysics Data System (ADS)

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
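A rough sketch of a Pan-Tompkins-style processing chain (derivative, squaring, moving-window integration, thresholding) applied to a synthetic spike train standing in for an ECG; a real implementation adds bandpass filtering and adaptive thresholds, and the window and refractory lengths here are assumed values, not those of the paper.

```python
import numpy as np

def detect_beats(sig, fs, window_s=0.15, refractory_s=0.3):
    """Return sample indices of detected QRS-like events."""
    deriv = np.diff(sig, prepend=sig[0])           # slope emphasises QRS
    energy = deriv ** 2                            # squaring: all-positive
    w = max(1, int(window_s * fs))
    mwi = np.convolve(energy, np.ones(w) / w, mode="same")  # moving window
    thresh = 0.5 * mwi.max()                       # crude fixed threshold
    peaks, last = [], -10 ** 9
    for i in range(1, len(mwi) - 1):
        if mwi[i] >= thresh and mwi[i] >= mwi[i - 1] and mwi[i] > mwi[i + 1]:
            if i - last > refractory_s * fs:       # one detection per beat
                peaks.append(i)
                last = i
    return peaks

fs = 250                                           # Hz, assumed sampling rate
sig = np.zeros(10 * fs)
sig[fs::fs] = 1.0                                  # one "QRS" spike per second
beats = detect_beats(sig, fs)
```

On this clean synthetic signal the chain recovers one detection per spike; the muscle-artifact noise discussed in the abstract is exactly what the squaring and integration stages are meant to suppress.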

  10. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, where the spatially varying absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described with a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power law factor and exponent), and the Gruneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with this approach.

  11. Parameter estimation for support vector anomaly detection in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Meth, Reuven; Ahn, James; Banerjee, Amit; Juang, Radford; Burlina, Philippe

    2012-06-01

    Hyperspectral Image (HSI) anomaly detectors typically employ local background modeling techniques to facilitate target detection from surrounding clutter. Global background modeling has been challenging due to the multi-modal content that must be automatically modeled to enable target/background separation. We have previously developed a support vector based anomaly detector that does not impose an a priori parametric model on the data and enables multi-modal modeling of large background regions with inhomogeneous content. Effective application of this support vector approach requires the setting of a kernel parameter that controls the tightness of the model fit to the background data. Estimation of the kernel parameter has typically considered Type I / false-positive error optimization due to the availability of background samples, but this approach has not proven effective for general application since these methods only control the false alarm level, without any optimization for maximizing detection. Parameter optimization with respect to Type II / false-negative error has remained elusive due to the lack of sufficient target training exemplars. We present an approach that optimizes parameter selection based on both Type I and Type II error criteria by introducing outliers based on existing hypercube content to guide parameter estimation. The approach has been applied to hyperspectral imagery and has demonstrated automatic estimation of parameters consistent with those that were found to be optimal, thereby providing an automated method for general anomaly detection applications.

  12. SCoPE: an efficient method of Cosmological Parameter Estimation

    SciTech Connect

    Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in

    2014-07-01

The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out some cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative commonly used parameterisations of dark energy models. We also assess how the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis, on the one hand, help us to understand the workability of SCoPE better; on the other hand, they provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
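Stripped of SCoPE's delayed rejection and pre-fetching, the core of any MCMC parameter estimator is the Metropolis accept/reject step. A minimal sketch on a toy problem (sampling the mean of a Gaussian likelihood with a flat prior), not the cosmological likelihood of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=500)   # synthetic "observations"

def log_post(mu):
    # log posterior up to a constant: flat prior, unit-variance Gaussian
    return -0.5 * np.sum((data - mu) ** 2)

chain, mu, lp = [], 0.0, log_post(0.0)
for _ in range(5000):
    prop = mu + rng.normal(scale=0.1)             # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis acceptance rule
        mu, lp = prop, lp_prop
    chain.append(mu)
post_mean = np.mean(chain[1000:])                 # discard burn-in
```

Delayed rejection modifies only the rejection branch (a second, usually smaller, proposal is tried before giving up), and pre-fetching speculatively evaluates both branches of future steps in parallel.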

  13. Estimating soil hydraulic parameters from transient flow experiments in a centrifuge using parameter optimization technique

    USGS Publications Warehouse

    Simunek, J.; Nimmo, J.R.

    2005-01-01

A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver for parameter estimation of the soil hydraulic parameters is then applied to multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings by accelerating time but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field. Copyright 2005 by the American Geophysical Union.

  14. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
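The conventional SA accept/reject loop that RBSA builds on can be sketched as follows; the objective function, cooling schedule, and proposal scale are illustrative choices for a one-dimensional toy problem, not the algorithm or application from the abstract:

```python
import math
import random

def anneal(f, lo, hi, steps=20000, t0=1.0, seed=42):
    """Minimize f on [lo, hi] with a conventional simulated-annealing loop."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)                      # random starting configuration
    fx = f(x)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        # proposal neighborhood shrinks as the temperature falls
        cand = min(hi, max(lo, x + rng.gauss(0, 0.5 * t + 0.01)))
        fc = f(cand)
        if fc < fbest:                           # track best configuration seen
            best, fbest = cand, fc
        # accept improvements always; worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
    return best, fbest

# multimodal test objective whose global minimum sits near x = 2.155
best, fbest = anneal(lambda x: (x - 2) ** 2 + math.sin(8 * x), -5, 5)
```

RBSA replaces the single chain above with recursively spawned branches that search subregions of the parameter space in parallel.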

  15. Cubic spline approximation techniques for parameter estimation in distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.; Kunisch, K.

    1983-01-01

    Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.

  16. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  17. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…

  18. Estimation of coefficients and boundary parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Murphy, K. A.

    1984-01-01

    Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.

  19. Online vegetation parameter estimation using passive microwave remote sensing observations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In adaptive system identification the Kalman filter can be used to identify the coefficient of the observation operator of a linear system. Here the ensemble Kalman filter is tested for adaptive online estimation of the vegetation opacity parameter of a radiative transfer model. A state augmentatio...
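The idea sketched in the abstract, recursively identifying a coefficient of the observation operator, reduces in the scalar case to a small Kalman filter on the augmented parameter. The coefficient below is a hypothetical stand-in for vegetation opacity, and an ensemble version would replace this analytic covariance update with ensemble statistics:

```python
import numpy as np

rng = np.random.default_rng(7)
h_true = 0.65                    # "true" observation coefficient (assumed value)
r = 0.02 ** 2                    # observation-noise variance

h_est, p = 0.0, 1.0              # initial guess for h and its variance
for _ in range(300):
    x = rng.uniform(0.5, 1.5)    # known model state acting as the predictor
    y = h_true * x + rng.normal(scale=0.02)   # synthetic observation y = h*x + v
    p += 1e-6                    # small random-walk process noise keeps filter adaptive
    k = p * x / (x * p * x + r)  # Kalman gain for the scalar observation h*x
    h_est += k * (y - h_est * x) # innovation update of the parameter estimate
    p = (1 - k * x) * p          # posterior variance of the parameter
```

State augmentation in the full problem simply stacks such parameters alongside the physical states so the same update handles both.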

  20. A parameter estimation framework for patient-specific hemodynamic computations

    NASA Astrophysics Data System (ADS)

    Itu, Lucian; Sharma, Puneet; Passerini, Tiziano; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2015-01-01

We propose a fully automated parameter estimation framework for performing patient-specific hemodynamic computations in arterial models. To determine the personalized values of the windkessel models, which are used as part of the geometrical multiscale circulation model, a parameter estimation problem is formulated. Clinical measurements of pressure and/or flow-rate are imposed as constraints to formulate a nonlinear system of equations, whose fixed point solution is sought. A key feature of the proposed method is a warm start to the optimization procedure, with a better initial solution for the nonlinear system of equations, to reduce the number of iterations needed for the calibration of the geometrical multiscale models. To achieve these goals, the initial solution, computed with a lumped parameter model, is adapted before solving the parameter estimation problem for the geometrical multiscale circulation model: the resistance and the compliance of the circulation model are estimated and compensated. The proposed framework is evaluated on a patient-specific aortic model, a full body arterial model, and multiple idealized anatomical models representing different arterial segments. For each case it leads to the best performance in terms of the number of iterations required for the computational model to be in close agreement with the clinical measurements.
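The fixed-point calibration idea can be illustrated with a deliberately trivial "model" in which mean pressure is linear in total resistance; the flow, pressure, and resistance values are hypothetical, and the real framework iterates a full multiscale simulation in place of the one-line model here:

```python
def model_mean_pressure(r_total, q_mean=83.3, p_venous=5.0):
    """Toy circulation model: mean pressure (mmHg) for total resistance
    r_total (mmHg*s/ml) at mean flow q_mean (ml/s)."""
    return q_mean * r_total + p_venous

def calibrate(p_target, r0=0.5, iters=50, tol=1e-8):
    """Tune total resistance until the model matches the measured pressure."""
    r = r0
    for i in range(iters):
        p = model_mean_pressure(r)
        if abs(p - p_target) < tol:
            return r, i
        # fixed-point update: scale resistance by the pressure mismatch
        # (venous pressure subtracted so the linear model converges in one step;
        # a nonlinear multiscale model would need several iterations)
        r *= (p_target - 5.0) / (p - 5.0)
    return r, iters

r_cal, n_iter = calibrate(p_target=93.0)   # target mean arterial pressure
```

The "warm start" of the paper corresponds to choosing `r0` from a cheap lumped-parameter estimate rather than a blind guess.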

  1. Matched filtering and parameter estimation of ringdown waveforms

    SciTech Connect

    Berti, Emanuele; Cardoso, Jaime; Cardoso, Vitor; Cavaglia, Marco

    2007-11-15

Using recent results from numerical relativity simulations of nonspinning binary black hole mergers, we revisit the problem of detecting ringdown waveforms and of estimating the source parameters, considering both LISA and Earth-based interferometers. We find that Advanced LIGO and EGO could detect intermediate-mass black holes of mass up to ~10³ M⊙ out to a luminosity distance of a few Gpc. For typical multipolar energy distributions, we show that the single-mode ringdown templates presently used for ringdown searches in the LIGO data stream can produce a significant event loss (>10% for all detectors in a large interval of black hole masses) and very large parameter estimation errors on the black hole's mass and spin. We estimate that more than ~10⁶ templates would be needed for a single-stage multimode search. Therefore, we recommend a "two-stage" search to save on computational costs: single-mode templates can be used for detection, but multimode templates or Prony methods should be used to estimate parameters once a detection has been made. We update estimates of the critical signal-to-noise ratio required to test the hypothesis that two or more modes are present in the signal and to resolve their frequencies, showing that second-generation Earth-based detectors and LISA have the potential to perform no-hair tests.

  2. Hybrid fault diagnosis of nonlinear systems using neural parameter estimators.

    PubMed

    Sobhani-Tehrani, E; Talebi, H A; Khorasani, K

    2014-02-01

This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems taking advantage of both the system's mathematical model and the adaptive nonlinear approximation capability of computational intelligence techniques. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPEs) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FPs) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed with their exclusive set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. In contrast, the series-parallel scheme displays short FD delays and is robust to closed-loop system transients due to changes in control commands. Finally, a fault tolerant observer (FTO) is designed to extend the capability of the two NPEs, which originally assume full state measurements, to systems that have only partial state measurements. The proposed FTO is a neural state estimator that can estimate unmeasured states even in the presence of faults. The estimated and the measured states then comprise the inputs to the two proposed FDII schemes. Simulation results for FDII of reaction wheels of a three-axis stabilized satellite in the presence of disturbances and noise demonstrate the effectiveness of the proposed FDII solutions under partial state measurements.

  3. PhyloPars: estimation of missing parameter values using phylogeny.

    PubMed

    Bruggeman, Jorn; Heringa, Jaap; Brandt, Bernd W

    2009-07-01

    A wealth of information on metabolic parameters of a species can be inferred from observations on species that are phylogenetically related. Phylogeny-based information can complement direct empirical evidence, and is particularly valuable if experiments on the species of interest are not feasible. The PhyloPars web server provides a statistically consistent method that combines an incomplete set of empirical observations with the species phylogeny to produce a complete set of parameter estimates for all species. It builds upon a state-of-the-art evolutionary model, extended with the ability to handle missing data. The resulting approach makes optimal use of all available information to produce estimates that can be an order of magnitude more accurate than ad-hoc alternatives. Uploading a phylogeny and incomplete feature matrix suffices to obtain estimates of all missing values, along with a measure of certainty. Real-time cross-validation provides further insight in the accuracy and bias expected for estimated values. The server allows for easy, efficient estimation of metabolic parameters, which can benefit a wide range of fields including systems biology and ecology. PhyloPars is available at: http://www.ibi.vu.nl/programs/phylopars/.

  4. Estimation of rice biophysical parameters using multitemporal RADARSAT-2 images

    NASA Astrophysics Data System (ADS)

    Li, S.; Ni, P.; Cui, G.; He, P.; Liu, H.; Li, L.; Liang, Z.

    2016-04-01

Compared with optical sensors, synthetic aperture radar (SAR) has the capability of acquiring images in all-weather conditions. Thus, SAR images are suitable for use in rice growth regions that are characterized by frequent cloud cover and rain. The objective of this paper was to evaluate the feasibility of rice biophysical parameter estimation using multitemporal RADARSAT-2 images, and to develop the estimation models. Three RADARSAT-2 images were acquired during the rice critical growth stages in 2014 near Meishan, Sichuan province, Southwest China. Leaf area index (LAI), the fraction of photosynthetically active radiation (FPAR), height, biomass and canopy water content (WC) were observed at 30 experimental plots over 5 periods. The relationships between RADARSAT-2 backscattering coefficients (σ0) or their ratios and the rice biophysical parameters were analysed. These biophysical parameters were significantly and consistently correlated with the VV and VH σ0 ratio (σ0VV/σ0VH) throughout all growth stages. Regression models were developed between the biophysical parameters and σ0VV/σ0VH. The results suggest that RADARSAT-2 data have great potential for rice biophysical parameter estimation and timely rice growth monitoring.

  5. Parameter estimation of an air-bearing suspended test table

    NASA Astrophysics Data System (ADS)

    Fu, Zhenxian; Lin, Yurong; Liu, Yang; Chen, Xinglin; Chen, Fang

    2015-02-01

A parameter estimation approach is proposed for parameter determination of a 3-axis air-bearing suspended test table. The table is to provide a balanced and frictionless environment for spacecraft ground tests. To balance the suspension, the mechanical parameters of the table, including its angular inertias and the deviation of its centroid from its rotating center, have to be determined first. Then sliding masses on the table can be adjusted by stepper motors to relocate the centroid of the table to its rotating center. Using the angular momentum theorem and the Coriolis theorem, dynamic equations are derived describing the rotation of the table under the influence of gravity imbalance torque and actuating torques. To generate the actuating torques, the use of momentum wheels is proposed, whose virtue is that no active control of the momentum wheels is required; they merely have to spin at constant rates, thus avoiding the singularity problem and the difficulty of precisely adjusting the output torques, issues associated with control moment gyros. The gyroscopic torques generated by the momentum wheels, as they are forced by the table to precess, are sufficient to activate the table for parameter estimation. Least-squares estimation is then employed to calculate the desired parameters. The effectiveness of the method is validated by simulation.

  6. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing some important questions, such as the uniqueness of the parameter estimation and the data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  7. Estimation of uncertain material parameters using modal test data

    SciTech Connect

    Veers, P.S.; Laird, D.L.; Carne, T.G.; Sagartz, M.J.

    1997-11-01

Analytical models of wind turbine blades have many uncertainties, particularly with composite construction, where material properties and cross-sectional dimensions may not be known or precisely controllable. In this paper the authors demonstrate how modal testing can be used to estimate important material parameters and to update and improve a finite-element (FE) model of a prototype wind turbine blade. A prototype blade is used here as an example of how model parameters can be identified. The starting point is an FE model of the blade, using best estimates for the material constants. Frequencies of the lowest fourteen modes are used as the basis for comparisons between model predictions and test data. Natural frequencies and mode shapes calculated with the FE model are used in an optimal test design code to select instrumentation (accelerometer) and excitation locations that capture all the desired mode shapes. The FE model is also used to calculate sensitivities of the modal frequencies to each of the uncertain material parameters. These parameters are estimated, or updated, using a weighted least-squares technique to minimize the difference between test frequencies and predicted results. Updated material properties are determined for axial, transverse, and shear moduli in two separate regions of the blade cross section: in the central box, and in the leading and trailing panels. Static FE analyses are then conducted with the updated material parameters to determine changes in effective beam stiffness and buckling loads.
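The weighted least-squares update step can be sketched directly from its normal equations: given a sensitivity matrix S of modal frequencies with respect to material parameters and a weighted frequency residual, solve for the parameter change. The sensitivities, weights, and parameter changes below are invented illustrative numbers, not values from the blade study.

```python
import numpy as np

# Sensitivities d(freq_i)/d(param_j) for 4 modes and 2 material parameters
# (e.g. an axial and a shear modulus); purely illustrative values.
S = np.array([[1.2, 0.3],
              [0.4, 1.5],
              [0.9, 0.8],
              [0.2, 1.1]])
w = np.diag([1.0, 1.0, 0.5, 0.5])       # weight matrix: trust lower modes more

delta_true = np.array([0.05, -0.03])    # hypothetical "true" parameter change
delta_f = S @ delta_true                # test-minus-model frequency residual

# weighted least-squares solution of S @ delta_p ~= delta_f
delta_p = np.linalg.solve(S.T @ w @ S, S.T @ w @ delta_f)
```

In practice the update is applied iteratively, recomputing S from the FE model at each step, because the frequency-parameter relation is only locally linear.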

  8. [Atmospheric parameter estimation for LAMOST/GUOSHOUJING spectra].

    PubMed

    Lu, Yu; Li, Xiang-Ru; Yang, Tan

    2014-11-01

    Estimating atmospheric parameters from observed stellar spectra is a key task in exploring the nature of stars and the universe. With the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST), which began its formal sky survey in September 2012, we are obtaining a mass of stellar spectra at an unprecedented speed. This has brought both a new opportunity and a challenge for galactic research. Due to the complexity of the observing system, the noise in the spectra is relatively large. At the same time, the preprocessing procedures, such as wavelength calibration and flux calibration, are also not ideal, so the spectra are slightly distorted. These effects make it difficult to estimate the atmospheric parameters for the measured stellar spectra, which is one of the important issues for the massive stellar spectra of LAMOST. The key of this study is how to suppress noise and improve the accuracy and robustness of atmospheric parameter estimation for the measured stellar spectra. We propose a regression model, SVM(lasso), for estimating the atmospheric parameters of LAMOST stellar spectra. The basic idea of this model is: first, we use the Haar wavelet to filter the spectrum, suppressing the adverse effects of spectral noise while retaining the most discriminative information; second, we use the lasso algorithm to select the features most strongly correlated with the atmospheric parameters; finally, the selected features are input to a support vector regression model to estimate the parameters. Because the model tolerates slight distortion and noise in the spectrum, measurement accuracy is improved. To evaluate the feasibility of this scheme, we conduct extensive experiments on 33,963 pilot-survey spectra from LAMOST. The accuracy of the estimated atmospheric parameters is 0.0068 dex for log Teff and 0.1551 dex for log g
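
    The first step of the pipeline, Haar-wavelet filtering, can be sketched in a few lines: decompose the spectrum, soft-threshold the detail coefficients, and reconstruct. The 8-sample "spectrum" and threshold below are hypothetical toy values, not LAMOST data:

```python
import math

def haar_forward(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    out, n = list(x), len(x)
    while n > 1:
        half = n // 2
        avgs = [(out[2*i] + out[2*i+1]) / math.sqrt(2) for i in range(half)]
        difs = [(out[2*i] - out[2*i+1]) / math.sqrt(2) for i in range(half)]
        out[:n] = avgs + difs
        n = half
    return out

def haar_inverse(c):
    """Inverse of haar_forward."""
    out, n = list(c), 1
    while n < len(c):
        avgs, difs = out[:n], out[n:2*n]
        merged = []
        for a, d in zip(avgs, difs):
            merged += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
        out[:2*n] = merged
        n *= 2
    return out

def denoise(x, thresh):
    """Soft-threshold the detail coefficients, keep the coarse average."""
    c = haar_forward(x)
    kept = [c[0]] + [math.copysign(max(abs(v) - thresh, 0.0), v) for v in c[1:]]
    return haar_inverse(kept)

# Hypothetical 8-sample 'spectrum': a smooth ramp plus one noise spike
spectrum = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.5]
clean = denoise(spectrum, 0.3)
```

    Feature selection (lasso) and support vector regression would then operate on the filtered coefficients; a production pipeline would use a wavelet library rather than this hand-rolled transform.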

  9. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
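
    The error decomposition ORAN computes can be illustrated for a single adjusted parameter: the variance of the least-squares estimate splits into a measurement-noise component and a "consider" component driven by uncertainty in the unadjusted parameters. The partials and sigmas below are hypothetical, not ORAN inputs:

```python
def consider_analysis(a, b, sigma_noise, sigma_unadj):
    """Variance of a least-squares estimate of one adjusted parameter
    when one unadjusted ('consider') parameter is held at an assumed value.
    Measurements: z_i = a_i * x + b_i * y + noise; x estimated, y assumed known."""
    ata = sum(ai * ai for ai in a)
    atb = sum(ai * bi for ai, bi in zip(a, b))
    var_noise = sigma_noise ** 2 / ata                  # measurement-noise part
    var_consider = (atb / ata) ** 2 * sigma_unadj ** 2  # unadjusted-parameter part
    return var_noise, var_consider

# Hypothetical partials for 4 range measurements
a = [1.0, 0.8, 0.6, 0.4]     # partials w.r.t. the adjusted parameter
b = [0.1, 0.3, 0.5, 0.7]     # partials w.r.t. the unadjusted parameter
vn, vc = consider_analysis(a, b, sigma_noise=0.02, sigma_unadj=0.1)
total_var = vn + vc
```

    ORAN generalizes this to full covariance matrices over many satellites, measurement types, and unadjusted parameter sets.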

  10. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model of turbulence.

  11. Modal parameters estimation using ant colony optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Sitarz, Piotr; Powałka, Bartosz

    2016-08-01

    The paper puts forward a new estimation method of modal parameters for dynamical systems. The problem of parameter estimation has been simplified to optimisation which is carried out using the ant colony system algorithm. The proposed method significantly constrains the solution space, determined on the basis of frequency plots of the receptance FRFs (frequency response functions) for objects presented in the frequency domain. The constantly growing computing power of readily accessible PCs makes this novel approach a viable solution. The combination of deterministic constraints of the solution space with modified ant colony system algorithms produced excellent results for systems in which mode shapes are defined by distinctly different natural frequencies and for those in which natural frequencies are similar. The proposed method is fully autonomous and the user does not need to select a model order. The last section of the paper gives estimation results for two sample frequency plots, conducted with the proposed method and the PolyMAX algorithm.

  12. Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Lui, Kenneth W. K.; So, H. C.

    2009-12-01

    We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.

  13. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    NASA Astrophysics Data System (ADS)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
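
    The core of the method is simple to state: for exponentially distributed waiting times the maximum-likelihood rate is the number of observed transitions divided by the total waiting time, and Arrhenius parameters follow from a linear fit of ln k against 1/(kB T). A sketch with synthetic data (the prefactor and barrier below are made up for the check, not taken from the paper):

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def mle_rate(waiting_times):
    """Maximum-likelihood rate for exponentially distributed waiting times."""
    return len(waiting_times) / sum(waiting_times)

def arrhenius_fit(temps, rates):
    """Linear fit of ln k versus 1/(kB T); returns (prefactor A, barrier Ea in eV)."""
    xs = [1.0 / (KB * T) for T in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return math.exp(ybar - slope * xbar), -slope   # ln k = ln A - Ea/(kB T)

# Synthetic check: rates generated from A = 1e12 1/s, Ea = 0.5 eV
A_true, Ea_true = 1e12, 0.5
temps = [600.0, 700.0, 800.0, 900.0]
rates = [A_true * math.exp(-Ea_true / (KB * T)) for T in temps]
A_est, Ea_est = arrhenius_fit(temps, rates)
```

    With the 500-1000 waiting times quoted in the abstract, the statistical error of each mle_rate value sets the scatter around the fitted Arrhenius line.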

  14. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
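
    The key property of Shinbrot-type modulating functions is that they vanish at both ends of the data window, so integration by parts eliminates the unknown initial condition. A toy sketch for a first-order model x' + a x = b u with modulating functions sin^2(n pi t / T); the model, data, and function choice are hypothetical illustrations, not the paper's aircraft equations:

```python
import math

def estimate_first_order(ts, xs, us):
    """Estimate (a, b) in x'(t) + a x(t) = b u(t) using modulating functions
    phi_n(t) = sin^2(n pi (t - t0) / T); phi_n vanishes at both window ends,
    so integral(phi x') = -integral(phi' x) and the initial condition drops out."""
    T = ts[-1] - ts[0]

    def trapz(vals):  # trapezoidal rule on the (possibly nonuniform) grid ts
        return sum((vals[i] + vals[i+1]) * (ts[i+1] - ts[i]) / 2
                   for i in range(len(ts) - 1))

    rows, rhs = [], []
    for n in (1, 2):   # two modulating functions -> two moment equations
        phi  = [math.sin(n*math.pi*(t - ts[0])/T)**2 for t in ts]
        dphi = [n*math.pi/T*math.sin(2*n*math.pi*(t - ts[0])/T) for t in ts]
        # a*int(phi x) - b*int(phi u) = int(phi' x)
        rows.append((trapz([p*x for p, x in zip(phi, xs)]),
                     -trapz([p*u for p, u in zip(phi, us)])))
        rhs.append(trapz([d*x for d, x in zip(dphi, xs)]))
    (m11, m12), (m21, m22) = rows
    det = m11*m22 - m12*m21
    a = (m22*rhs[0] - m12*rhs[1]) / det
    b = (-m21*rhs[0] + m11*rhs[1]) / det
    return a, b

# Hypothetical data: step response of x' + 2x = 3, x(0) = 0
ts = [i / 1000 for i in range(2001)]            # 0..2 s
xs = [1.5 * (1 - math.exp(-2 * t)) for t in ts]
us = [1.0] * len(ts)
a_est, b_est = estimate_first_order(ts, xs, us)
```

    Note that no numerical differentiation of the measured signal is ever needed, which is what makes the method attractive for noisy flight-test data.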

  15. Estimating Arrhenius parameters using temperature programmed molecular dynamics.

    PubMed

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times. PMID:27448871

  16. Instrumental noise estimates stabilize and quantify endothelial cell micro-impedance barrier function parameter estimates

    SciTech Connect

    English, Anthony E; Moy, Alan B; Kruse, Kara L; Ward, Richard C; Kirkpatrick, Stacy S; Goldman, Mitchell H.

    2009-04-01

    A novel transcellular micro-impedance biosensor, referred to as the electric cell-substrate impedance sensor or ECIS, has become increasingly applied to the study and quantification of endothelial cell physiology. In principle, frequency dependent impedance measurements obtained from this sensor can be used to estimate the cell-cell and cell-matrix impedance components of endothelial cell barrier function based on simple geometric models. Few studies, however, have examined the numerical optimization of these barrier function parameters and established their error bounds. This study, therefore, illustrates the implementation of a multi-response Levenberg-Marquardt algorithm that includes instrumental noise estimates and applies it to frequency dependent porcine pulmonary artery endothelial cell impedance measurements. The stability of cell-cell, cell-matrix and membrane impedance parameter estimates based on this approach is carefully examined, and several forms of parameter instability and refinement are illustrated. Including frequency dependent noise variance estimates in the numerical optimization reduced the parameter value dependence on the frequency range of measured impedances. The increased stability provided by a multi-response non-linear fit over one-dimensional algorithms indicated that both real and imaginary data should be used in the parameter optimization. Error estimates based on single fits and Monte Carlo simulations showed that the model barrier parameters were often highly correlated with each other. Independently resolving the different parameters can, therefore, present a challenge to the experimentalist and demand the use of non-linear multivariate statistical methods when comparing different sets of parameters.

  17. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  18. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler) yielding information on the heliocentric spacecraft state and images taken by an on-board camera yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  19. Improving parameter priors for data-scarce estimation problems

    NASA Astrophysics Data System (ADS)

    Almeida, Susana; Bulygina, Nataliya; McIntyre, Neil; Wagener, Thorsten; Buytaert, Wouter

    2013-09-01

    Runoff prediction in ungauged catchments is a recurrent problem in hydrology. Conceptual models are usually calibrated by defining a feasible parameter range and then conditioning parameter sets on observed system responses, e.g., streamflow. In ungauged catchments, several studies condition models on regionalized response signatures, such as runoff ratio or base flow index, using a Bayesian procedure. In this technical note, the Model Parameter Estimation Experiment (MOPEX) data set is used to explore the impact on model performance of assumptions made about the prior distribution. In particular, the common assumption of uniform prior on parameters is shown to be unsuitable. This is because the uniform prior on parameters maps onto skewed response signature priors that can counteract the valuable information gained from the regionalization. To address this issue, we test a methodological development based on an initial transformation of the uniform prior on parameters into a prior that maps to a uniform response signature distribution. We demonstrate that this method contributes to improved estimation of the response signatures.
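
    The reported problem — a uniform prior on parameters mapping to a skewed prior on a response signature — can be demonstrated by importance-weighting parameter samples so that the implied signature distribution becomes flat. The signature function s = theta^3 and all numbers below are hypothetical stand-ins, not the MOPEX setup:

```python
import random

random.seed(0)

def flatten_signature_prior(samples, signature, bins=20):
    """Weight parameter samples so the implied signature distribution is
    approximately uniform: w_i proportional to 1 / density(s_i),
    with the density estimated by a simple histogram."""
    s = [signature(x) for x in samples]
    lo, hi = min(s), max(s)
    counts = [0] * bins
    idx = [min(int((si - lo) / (hi - lo) * bins), bins - 1) for si in s]
    for i in idx:
        counts[i] += 1
    w = [1.0 / counts[i] for i in idx]
    total = sum(w)
    return [wi / total for wi in w]

# Hypothetical: uniform prior on theta in [0, 1], nonlinear signature s = theta^3
samples = [random.uniform(0, 1) for _ in range(20000)]
weights = flatten_signature_prior(samples, lambda t: t ** 3)
```

    Unweighted, the signature samples pile up near zero (mean 0.25); after reweighting, their distribution is roughly uniform on [0, 1] (mean near 0.5), which is the effect the transformed prior aims for.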

  20. [Automatic Measurement of the Stellar Atmospheric Parameters Based Mass Estimation].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Luo, A-li; Zhao, Yong-heng

    2015-11-01

    We have collected massive stellar spectral data in recent years, which has made the automatic measurement of stellar atmospheric physical parameters (effective temperature Teff, surface gravity log g and metal abundance [Fe/H]) an important issue. Studying the automatic measurement of these three parameters is important for scientific problems such as the evolution of the universe. However, research on this problem is not yet extensive, and some current methods cannot estimate the stellar atmospheric physical parameters completely and accurately. In this paper, an automatic method to predict stellar atmospheric parameters based on mass estimation is presented, which predicts the stellar effective temperature Teff, surface gravity log g and metal abundance [Fe/H]. The method requires little computation and trains quickly. Its main idea is: first, build several mass distributions; second, map the original spectral data into the mass space; finally, predict the stellar parameters with support vector regression (SVR) in the mass space. We chose stellar spectral data from SDSS-DR8 for training and testing, compared the predicted results with the SSPP, and achieved higher accuracy. The predicted results are more stable, and the experimental results show that the method is feasible and can predict the stellar atmospheric physical parameters effectively. PMID:26978937

  2. Recursive Objects--An Object Oriented Presentation of Recursion

    ERIC Educational Resources Information Center

    Sher, David B.

    2004-01-01

    Generally, when recursion is introduced to students the concept is illustrated with a toy (Towers of Hanoi) and some abstract mathematical functions (factorial, power, Fibonacci). These illustrate recursion in the same sense that counting to 10 can be used to illustrate a for loop. These are all good illustrations, but do not represent serious…

  3. Seamless continental-domain hydrologic model parameter estimations with Multi-Scale Parameter Regionalization

    NASA Astrophysics Data System (ADS)

    Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy

    2016-04-01

    Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration, or on spatially constant parameters resulting from the adoption of default or a priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data and then scales to the model spatial resolution with several scaling functions, e.g., arithmetic mean, harmonic mean, and geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration, where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then
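
    The MPR idea — apply a transfer function at the native attribute resolution, then upscale to the model grid — can be sketched as follows. The linear transfer-function form, its coefficients, and the clay fractions are hypothetical illustrations, not MPR's actual functions:

```python
def transfer_function(clay_fraction, gamma1, gamma2):
    """Hypothetical linear transfer function: a porosity-like parameter
    predicted from the fine-grid clay fraction."""
    return gamma1 + gamma2 * clay_fraction

def upscale(values, how):
    """Scale fine-grid parameter values to one model grid cell."""
    n = len(values)
    if how == "arithmetic":
        return sum(values) / n
    if how == "geometric":
        p = 1.0
        for v in values:
            p *= v
        return p ** (1.0 / n)
    if how == "harmonic":
        return n / sum(1.0 / v for v in values)
    raise ValueError(how)

# Fine-grid soil attribute inside one model grid cell (hypothetical)
clay = [0.10, 0.25, 0.40, 0.15]
gamma1, gamma2 = 0.35, 0.5        # transfer-function coefficients (calibrated in MPR)
fine = [transfer_function(c, gamma1, gamma2) for c in clay]
cell_param = {h: upscale(fine, h) for h in ("arithmetic", "geometric", "harmonic")}
```

    Calibration then adjusts gamma1 and gamma2 (a handful of numbers) rather than a parameter value in every grid cell, which is what makes the fields spatially continuous.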

  4. Adaptive Estimation of Intravascular Shear Rate Based on Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Takeda, Naoto

    2008-05-01

    The relationships between intravascular wall shear stress, controlled by flow dynamics, and the progression of arteriosclerotic plaque have been clarified by various studies. Since shear stress is determined by the viscosity coefficient and the shear rate, both factors must be estimated accurately. In this paper, an adaptive method for improving the accuracy of quantitative shear rate estimation was investigated. First, the parameter dependence of the estimated shear rate was investigated in terms of the differential window width and the number of averaged velocity profiles, based on simulation and experimental data, and then the shear rate calculation was optimized. The optimized result revealed that the proposed adaptive method of shear rate estimation was effective for improving the accuracy of shear rate calculation.
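
    The two tuning parameters discussed — differential window width and number of averaged velocity profiles — appear directly in a basic shear-rate estimator: average the profiles, then take a windowed central difference along the radius. A sketch with a hypothetical parabolic (Poiseuille-like) profile, made-up noise level, and made-up radial spacing:

```python
import random

def shear_rate(profiles, window, dr):
    """Estimate the shear rate (velocity gradient) from averaged velocity profiles.
    profiles: velocity profiles on a uniform radial grid (m/s),
    window: half-width of the central-difference window (samples),
    dr: radial sample spacing (m)."""
    n = len(profiles[0])
    mean = [sum(p[i] for p in profiles) / len(profiles) for i in range(n)]
    return [(mean[i + window] - mean[i - window]) / (2 * window * dr)
            for i in range(window, n - window)]

# Hypothetical parabolic profile (peak at the vessel center) plus noisy repeats
random.seed(1)
base = [1.0 - (i / 50 - 1.0) ** 2 for i in range(101)]
profiles = [[v + random.gauss(0, 0.002) for v in base] for _ in range(8)]
rates = shear_rate(profiles, window=3, dr=1e-4)
```

    A wider window and more averaged profiles suppress noise but smear sharp gradients near the wall, which is exactly the trade-off the paper's adaptive optimization addresses.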

  5. Estimating parameters with pre-specified accuracies in distributed parameter systems using optimal experiment design

    NASA Astrophysics Data System (ADS)

    Potters, M. G.; Bombois, X.; Mansoori, M.; Van den Hof, Paul M. J.

    2016-08-01

    Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.

  6. Multiple concurrent recursive least squares identification with application to on-line spacecraft mass-property identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2006-01-01

    The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each group are run, treating the other unknown parameters appearing in their regression equations as if they were known perfectly, with those values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
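
    Each of those concurrent identifiers is a standard recursive least squares (RLS) loop. A generic two-parameter RLS sketch is given below; the spacecraft-flavored example (estimating inverse mass and an acceleration bias from a = F/m + c) and all numbers are hypothetical, not the patent's actual formulation:

```python
import random

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update for a 2-parameter model y = phi^T theta.
    theta: current estimate, P: 2x2 covariance, phi: regressor,
    y: measurement, lam: forgetting factor."""
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
            P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]          # gain k = P phi / denom
    err = y - (phi[0]*theta[0] + phi[1]*theta[1])   # innovation
    theta = [theta[0] + k[0]*err, theta[1] + k[1]*err]
    P = [[(P[0][0] - k[0]*Pphi[0]) / lam, (P[0][1] - k[0]*Pphi[1]) / lam],
         [(P[1][0] - k[1]*Pphi[0]) / lam, (P[1][1] - k[1]*Pphi[1]) / lam]]
    return theta, P

# Hypothetical: identify inverse mass 1/m and bias c from a = (1/m) F + c
random.seed(2)
m_true, c_true = 5.0, 0.1
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for _ in range(200):
    F = random.uniform(-1, 1)                       # commanded thrust (N)
    a = F / m_true + c_true + random.gauss(0, 0.001)  # measured acceleration
    theta, P = rls_step(theta, P, [F, 1.0], a)
inv_mass, bias = theta
```

    In the patented scheme, several such loops run concurrently, each feeding its latest estimates into the regressors of the others.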

  7. Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency

    NASA Astrophysics Data System (ADS)

    Forghani, A.; Peralta, R. C.

    2015-12-01

    We contrast the computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well using several modeling situations. Test situations differ in the employed finite difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly smaller discretization capable of analyzing ASR recovery efficiency. Some addressed situations employ parameters interpolated from the coarse valley model. Other situations employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies evaluating ASR recovery efficiency consider only hypothetical uniform specified boundary heads and gradients, assuming homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.

  8. Bayesian hemodynamic parameter estimation by bolus tracking perfusion weighted imaging.

    PubMed

    Boutelier, Timothé; Kudo, Koshuke; Pautot, Fabrice; Sasaki, Makoto

    2012-07-01

    A delay-insensitive probabilistic method for estimating hemodynamic parameters, delays, theoretical residue functions, and concentration time curves by computed tomography (CT) and magnetic resonance (MR) perfusion weighted imaging is presented. Only a mild stationarity hypothesis is made beyond the standard perfusion model. New microvascular parameters with simple hemodynamic interpretation are naturally introduced. Simulations on standard digital phantoms show that the method outperforms the oscillating singular value decomposition (oSVD) method in terms of goodness-of-fit, linearity, statistical and systematic errors on all parameters, especially at low signal-to-noise ratios (SNRs). Delay is always estimated sharply with user-supplied resolution and is purely arterial, by contrast to oSVD time-to-maximum TMAX that is very noisy and biased by mean transit time (MTT), blood volume, and SNR. Residue functions and signals estimates do not suffer overfitting anymore. One CT acute stroke case confirms simulation results and highlights the ability of the method to reliably estimate MTT when SNR is low. Delays look promising for delineating the arterial occlusion territory and collateral circulation. PMID:22410325

  9. Anisotropic parameter estimation using velocity variation with offset analysis

    SciTech Connect

    Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A.

    2013-09-09

    Seismic anisotropy is defined as velocity dependence upon angle or offset. Knowledge of the anisotropy effect on seismic data is important in amplitude analysis, the stacking process, and time-to-depth conversion. Due to this anisotropic effect, a reflector cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters, ε and δ, using two proposed methods. The first method is called velocity variation with offset (VVO), which is based on a simplification of the weak anisotropy equation. In the VVO method, the velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach in which the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods. Meanwhile, the inversion-based method gives better estimates of the ε value. This study shows that the estimation of anisotropic parameters relies on the accuracy of the normal moveout velocity, the residual moveout, and the offset-to-angle transformation.
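
    The linear-inversion idea can be sketched in the angle domain using Thomsen's weak-anisotropy P-wave phase-velocity approximation, v(θ) ≈ V0 (1 + δ sin²θ cos²θ + ε sin⁴θ), which is linear in (V0, V0 δ, V0 ε). This is a stand-in for the paper's offset-domain formulation (it assumes the offset-to-angle transformation has already been applied), and the angles and synthetic velocities are hypothetical:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def invert_weak_anisotropy(thetas, vels):
    """Least-squares estimate of (V0, delta, epsilon) from phase velocities,
    using v(theta) = V0 * (1 + delta sin^2 cos^2 + epsilon sin^4)."""
    rows = [[1.0, math.sin(t)**2 * math.cos(t)**2, math.sin(t)**4]
            for t in thetas]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atv = [sum(r[i] * v for r, v in zip(rows, vels)) for i in range(3)]
    a, b, c = solve3(AtA, Atv)          # a = V0, b = V0*delta, c = V0*epsilon
    return a, b / a, c / a

# Hypothetical synthetic data from V0 = 2000 m/s, delta = 0.1, epsilon = 0.2
thetas = [math.radians(x) for x in (0, 10, 20, 30, 40)]
vels = [2000.0 * (1 + 0.1*math.sin(t)**2*math.cos(t)**2 + 0.2*math.sin(t)**4)
        for t in thetas]
v0, delta, eps = invert_weak_anisotropy(thetas, vels)
```

    With noise-free synthetic data the inversion recovers the inputs; as the abstract notes, real-data accuracy hinges on the moveout velocity and the offset-to-angle transformation feeding this fit.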

  10. Online Parameter Estimation and Adaptive Control of Magnetic Wire Actuators

    NASA Astrophysics Data System (ADS)

    Karve, Harshwardhan

Cantilevered magnetic wires and fibers can be used as actuators in microfluidic applications. The actuator may be unstable in some range of displacements, and precise position control is required for actuation. The goal of this work is to develop position controllers for cantilevered magnetic wires. A simple exact model knowledge (EMK) controller can be used for position control, but the actuator needs to be modeled accurately for the EMK controller to work. Continuum models have been proposed for magnetic wires in the literature, as have reduced-order models. A one-degree-of-freedom model sufficiently describes the dynamics of a cantilevered wire in the field of one magnet over small displacements, and this reduced-order model is used to develop the EMK controller here. The EMK controller assumes that model parameters are known accurately. Some model parameters depend on the magnetic field; however, the effect of the magnetic field on the wire is difficult to measure in practice. Stability analysis shows that an inaccurate estimate of the magnetic field introduces parametric perturbations in the closed-loop system, making the system less robust to disturbances. Therefore, the model parameters need to be estimated accurately for the EMK controller to work. An adaptive observer that can estimate system parameters online and reduce parametric perturbations is designed here. The adaptive observer only works if the system is stable, but the EMK controller is not guaranteed to stabilize the system under perturbations, and precise tuning of parameters is required to stabilize the system using the EMK controller. Therefore, a controller that stabilizes the system using imprecise model parameters is required for the observer to work as intended. The adaptive observer estimates system states and parameters, which are used here to implement an indirect adaptive controller. This indirect controller can stabilize the system using imprecise initial

  11. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound gives the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by the coding approach, which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and it may require a larger bitrate and lose compatibility with existing file formats. The purpose of this article is to propose a compromise, called the 'informed approach,' which combines analysis of the available audio signal with (coded) side information in order to increase the precision of parameter estimation at a lower bitrate than pure coding approaches. Thus, the analysis problem is cast in a coder/decoder configuration: the side information is computed and inaudibly embedded into the mixture signal at the coder; at the decoder, the extra information is extracted and used to assist the analysis process. This study applies the approach to audio spectral analysis using sinusoidal modeling, a well-known model with practical applications for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications, providing a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
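The estimation half of such a scheme can be illustrated with a basic windowed-FFT sinusoidal analysis; the coded side information would then refine estimates like these. A sketch with hypothetical signal values, not the authors' estimator:

```python
import numpy as np

# Estimate the frequency and amplitude of a single sinusoidal partial
# from the peak of a Hann-windowed FFT (coarse, bin-resolution analysis).
fs, n = 8000.0, 4096
t = np.arange(n) / fs
f_true, a_true = 440.0, 0.8
rng = np.random.default_rng(3)
x = a_true * np.sin(2 * np.pi * f_true * t) + 0.01 * rng.normal(size=n)

win = np.hanning(n)
X = np.fft.rfft(x * win)
peak = int(np.argmax(np.abs(X)))
f_est = peak * fs / n                      # bin-resolution frequency estimate
a_est = 2.0 * np.abs(X[peak]) / win.sum()  # amplitude, up to scalloping loss
```

Parabolic interpolation around the peak bin (or, in the informed setting, embedded side information) would sharpen these bin-limited estimates.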

  12. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J.; McElwee, C.D.; Liu, W.

    1996-01-01

The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure, and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  13. Improving the quality of parameter estimates obtained from slug tests

    SciTech Connect

    Butler, J.J. Jr.; McElwee, C.D.; Liu, W.

    1996-05-01

The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (H0) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of H0 to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure, and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  14. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC, this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the

  15. Estimation of atmospheric parameters from time-lapse imagery

    NASA Astrophysics Data System (ADS)

    McCrae, Jack E.; Basu, Santasri; Fiorino, Steven T.

    2016-05-01

A time-lapse imaging experiment was conducted to estimate various atmospheric parameters for the imaging path. Atmospheric turbulence caused frame-to-frame shifts of the entire image as well as parts of the image. The statistics of these shifts encode information about the turbulence strength (as characterized by Cn2, the refractive index structure constant) along the optical path. The shift variance observed is simply proportional to the variance of the tilt of the optical field averaged over the area being tracked. By presuming this turbulence follows the Kolmogorov spectrum, weighting functions can be derived which relate the turbulence strength along the path to the shifts measured. These weighting functions peak at the camera and fall to zero at the object. The larger the area observed, the more quickly the weighting function decays. One parameter we would like to estimate is r0 (the Fried parameter, or atmospheric coherence diameter). The weighting functions derived for pixel-sized or larger parts of the image all fall faster than the weighting function appropriate for estimating the spherical wave r0. If we presume Cn2 is constant along the path, then an estimate for r0 can be obtained for each area tracked, but since the weighting function for r0 differs substantially from that for every realizable tracked area, it can be expected this approach would yield a poor estimator. Instead, the weighting functions for a number of different patch sizes can be combined through the Moore-Penrose pseudo-inverse to create a new weighting function which yields the least-squares optimal linear combination of measurements for estimation of r0. This approach is carried out, and it is observed to be somewhat noisy because the pseudo-inverse assigns weights much greater than one to many of the observations.
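The pseudo-inverse combination can be sketched as follows; the weighting-function shapes below are hypothetical placeholders (simple powers that peak at the camera and vanish at the object), not the derived tilt-variance kernels:

```python
import numpy as np

# Path-weighting functions for three tracked patch sizes, sampled at n
# points along the path; illustrative decay rates only.
n = 50
z = np.linspace(0.0, 1.0, n)                 # normalized position (0 = camera)
W = np.vstack([(1 - z)**p for p in (1.0, 2.0, 3.0)])

# Hypothetical target weighting function for the spherical-wave r0
w_r0 = (1 - z)**(5.0 / 3.0)

# Least-squares optimal linear combination of the patch measurements:
# coefficients c minimizing ||W^T c - w_r0||, via the pseudo-inverse
c = np.linalg.pinv(W.T) @ w_r0
w_fit = W.T @ c
```

Each measurement from a patch size is then scaled by its coefficient in `c`; large coefficient magnitudes amplify measurement noise, which is the noisiness the abstract reports.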

  16. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  17. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes. PMID:25040235
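The weighting idea behind IUWLS can be sketched with a toy linear model in which each observation carries a known input-error variance. This is ordinary generalized least squares on synthetic data, not the full iterative groundwater calibration described above:

```python
import numpy as np

# Toy linear model d = G m + input error, with per-observation
# (pumping-like) uncertainty levels sigma_q that are known.
rng = np.random.default_rng(0)
m_true = np.array([2.0, -1.0])
G = rng.normal(size=(200, 2))
sigma_q = np.linspace(0.1, 2.0, 200)               # input uncertainty levels
d = G @ m_true + sigma_q * rng.normal(size=200)    # input error enters the data

# Weight each equation by the inverse variance of its input error,
# so highly uncertain pumping records contribute less to the fit.
w = 1.0 / sigma_q**2
m_gls = np.linalg.solve((G.T * w) @ G, (G.T * w) @ d)
```

In IUWLS these weights are additionally re-adjusted during the parameter optimization iterations rather than fixed in advance.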

  18. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage, using a series of features extracted from a region of interest previously detected in both ultrasound and color images. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452

  19. Estimation of economic parameters of U.S. hydropower resources

    SciTech Connect

    Hall, Douglas G.; Hunt, Richard T.; Reeves, Kelly S.; Carroll, Greg R.

    2003-06-01

Tools for estimating the cost of developing, operating, and maintaining hydropower resources, in the form of regression curves, were developed based on historical plant data. Development costs that were addressed included: licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool, were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”

  20. Maximum-likelihood estimation of circle parameters via convolution.

    PubMed

    Zelniker, Emanuel E; Clarkson, I Vaughan L

    2006-04-01

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kåsa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. In digital images, it is then possible to treat these estimates as preliminary estimates into various other numerical techniques which further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao Lower Bound in ideal images and in both real and synthetic digital images. PMID:16579374
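The Delogne-Kåsa estimator referenced above reduces to a linear least-squares problem, since x² + y² = 2ax + 2by + c with c = r² − a² − b² is linear in (a, b, c). A minimal sketch on synthetic points:

```python
import numpy as np

# Noisy measurements of circumferential points; center (3, -2), radius 5
rng = np.random.default_rng(1)
a_true, b_true, r_true = 3.0, -2.0, 5.0
t = rng.uniform(0, 2 * np.pi, 200)
x = a_true + r_true * np.cos(t) + 0.01 * rng.normal(size=200)
y = b_true + r_true * np.sin(t) + 0.01 * rng.normal(size=200)

# Delogne-Kasa fit: solve x^2 + y^2 = 2 a x + 2 b y + c in least squares
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
rhs = x**2 + y**2
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
r = np.sqrt(c + a**2 + b**2)
```

As in the paper, such an estimate can serve as the starting point for the (nonlinear) MLE or other subpixel refinement.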

  1. ESTIMATION OF DISTANCES TO STARS WITH STELLAR PARAMETERS FROM LAMOST

    SciTech Connect

    Carlin, Jeffrey L.; Newberg, Heidi Jo; Liu, Chao; Deng, Licai; Li, Guangwei; Luo, A-Li; Wu, Yue; Yang, Ming; Zhang, Haotong; Beers, Timothy C.; Chen, Li; Hou, Jinliang; Smith, Martin C.; Guhathakurta, Puragra; Lépine, Sébastien; Yanny, Brian; Zheng, Zheng

    2015-07-15

    We present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star’s absolute magnitude. This technique is tailored specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ∼5° diameter “plate” that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ∼20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ∼40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
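The grid-comparison step can be sketched in one dimension: a Gaussian likelihood over a parameter grid yields a posterior over absolute magnitude. The grid and the parameter-to-magnitude relation below are toy stand-ins for real isochrones, not the LAMOST pipeline:

```python
import numpy as np

# One measured stellar parameter (Teff) compared against a 1-D "isochrone
# grid"; each grid point maps to an absolute magnitude M.
grid_teff = np.linspace(4000.0, 6500.0, 200)       # grid parameter values
grid_M = 10.0 - 0.002 * (grid_teff - 4000.0)       # toy Teff -> M relation
teff_obs, teff_err = 5200.0, 100.0                 # measured value and error

# Gaussian likelihood over the grid, normalized into a posterior PDF
loglike = -0.5 * ((grid_teff - teff_obs) / teff_err)**2
post = np.exp(loglike - loglike.max())
post /= post.sum()
M_mean = np.sum(post * grid_M)                     # posterior-mean magnitude
```

The real method works with several measured parameters at once, applies priors from the survey's own selection function, and keeps the full posterior PDF rather than only its mean.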

  2. Terrain mechanical parameters online estimation for lunar rovers

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Cui, Pingyuan; Ju, Hehua

    2007-11-01

This paper presents a new method for terrain mechanical parameter estimation for a wheeled lunar rover. First, after deducing detailed distribution expressions for the normal stress and shear stress at the wheel-terrain interface, the force/torque balance equations of the drive wheel for computing terrain mechanical parameters are derived by analyzing a rigid drive wheel of a lunar rover moving with uniform speed over deformable terrain. Then a two-point Gauss-Legendre numerical integration is used to simplify the balance equations; after simplification and rearrangement, a solution model composed of three non-linear equations is obtained. Finally, Newton's iterative method and the steepest descent method are combined to solve the non-linear equations, and the outputs of on-board virtual sensors are used to compute the key terrain mechanical parameters, i.e., the internal friction angle and the pressure-sinkage parameters. Simulation results show that the method remains correct under strong noise disturbances and is effective with low computational complexity, allowing a lunar rover to estimate terrain mechanical parameters online.
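Two ingredients of the method, two-point Gauss-Legendre quadrature and Newton iteration on a small non-linear system, can be sketched generically; the toy system below stands in for the actual wheel force/torque balance equations:

```python
import numpy as np

# Two-point Gauss-Legendre quadrature on [a, b]: exact for cubics,
# which is the simplification applied to the balance-equation integrals.
def gauss_legendre_2pt(f, a, b):
    half, mid = 0.5 * (b - a), 0.5 * (a + b)
    nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # weights are both 1
    return half * sum(f(mid + half * xi) for xi in nodes)

# Newton's method for a small non-linear system F(p) = 0 with Jacobian J
def newton(F, J, p0, tol=1e-10, maxit=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(maxit):
        step = np.linalg.solve(J(p), F(p))
        p -= step
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy 2-equation system standing in for the three balance equations
F = lambda p: np.array([p[0]**2 + p[1] - 3.0, p[0] + p[1]**2 - 5.0])
J = lambda p: np.array([[2 * p[0], 1.0], [1.0, 2 * p[1]]])
root = newton(F, J, [1.0, 1.0])
```

The steepest descent step in the paper supplies a robust fallback when the Newton step fails to reduce the residual far from the solution.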

  3. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

The application of parameter estimation to highly unstable aircraft is discussed. Included are a discussion of the problems in applying the output error method to such aircraft and a demonstration that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator are used to illustrate the problems and methods discussed.

  4. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  5. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036
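Simulated annealing itself can be sketched on a one-dimensional toy log-likelihood with local maxima, standing in for the correlated ETAS likelihood surface; the cooling schedule and proposal scale are illustrative choices, not the paper's tuned values:

```python
import math
import random

random.seed(0)

# Toy log-likelihood: a broad peak near mu = 2 with local ripples,
# so plain hill climbing from a poor start can get trapped.
def loglike(mu):
    return -(mu - 2.0)**2 + 0.5 * math.cos(8.0 * mu)

mu = 5.0                                   # poor starting point
best_mu, best_ll = mu, loglike(mu)
T = 1.0
for step in range(5000):
    cand = mu + random.gauss(0.0, 0.3)     # random proposal
    delta = loglike(cand) - loglike(mu)
    # Accept uphill moves always, downhill moves with probability e^(dL/T)
    if delta > 0 or random.random() < math.exp(delta / T):
        mu = cand
    if loglike(mu) > best_ll:
        best_mu, best_ll = mu, loglike(mu)
    T = max(1e-3, 0.999 * T)               # geometric cooling schedule
```

The real algorithm anneals all ETAS parameters jointly; the paper's finding is that for small catalogs the likelihood surface flattens along correlated parameter directions, degrading any maximizer.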

  6. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
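The recursive scheme can be sketched in a few lines: start from a local feature (degree) and repeatedly append aggregates of neighbors' current features. This is a simplified sketch; the full ReFeX also uses egonet-based features and prunes near-duplicate columns between rounds:

```python
# Minimal recursive-feature sketch on an adjacency-list graph
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

# Base (local) feature: node degree
feats = {n: [float(len(nbrs))] for n, nbrs in graph.items()}

# Each recursion appends sums and means of neighbors' current features,
# so the feature count triples per round (1 -> 3 -> 9 here).
for _ in range(2):
    new = {}
    for n, nbrs in graph.items():
        cur = feats[n]
        sums = [sum(feats[m][k] for m in nbrs) for k in range(len(cur))]
        means = [s / len(nbrs) for s in sums]
        new[n] = cur + sums + means
    feats = new
```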

  7. Analysis of neutron scattering data: Visualization and parameter estimation

    SciTech Connect

    Beauchamp, J.J.; Fedorov, V.; Hamilton, W.A.; Yethiraj, M.

    1998-09-01

Traditionally, small-angle neutron and x-ray scattering (SANS and SAXS) data analysis requires measurements of the signal and corrections due to the empty sample container, detector efficiency, and time-dependent background. These corrections are then made on a pixel-by-pixel basis and estimates of relevant parameters (e.g., the radius of gyration) are made using the corrected data. This study was carried out in order to determine whether treating the detector efficiency and empty sample cell in a more statistically sound way would significantly reduce the uncertainties in the parameter estimators. Elements of experiment design are briefly discussed in this paper; for instance, we studied how the time for a measurement should be optimally divided between counting for signal, background, and detector efficiency. In Section 2 we introduce the commonly accepted models for small-angle neutron and x-ray scattering and confine ourselves to the Guinier and Rayleigh models and their minor generalizations. The traditional approaches to data analysis are discussed only to the extent necessary to allow their comparison with the proposed techniques. Section 3 describes the main stages of the proposed method: visual data exploration, fitting the detector sensitivity function, and fitting a compound model. This model includes three additive terms describing scattering by the sample, scattering from the empty container, and background noise. We compare a few alternatives for the first term by applying various scatter plots and computing sums of standardized squared residuals. Possible corrections due to smearing effects and randomness of estimated parameters are also briefly discussed. In Section 4 the robustness of the estimators with respect to lower and upper bounds imposed on the momentum transfer is discussed. We show that for the available data set the most accurate and stable estimates are generated by models containing double terms of either Guinier's or Rayleigh's type.
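Fitting the Guinier model mentioned above reduces to a linear regression, since I(q) = I0 exp(−q²Rg²/3) gives ln I linear in q². A minimal noiseless sketch with illustrative values:

```python
import numpy as np

# Synthetic Guinier-regime intensities (q * Rg kept near or below ~1)
I0_true, Rg_true = 100.0, 20.0
q = np.linspace(0.005, 0.05, 40)
I = I0_true * np.exp(-(q * Rg_true)**2 / 3.0)

# Linear fit of ln I against q^2: slope = -Rg^2/3, intercept = ln I0
slope, intercept = np.polyfit(q**2, np.log(I), 1)
Rg_est = np.sqrt(-3.0 * slope)
I0_est = np.exp(intercept)
```

With counting noise, the fit would instead be weighted by the per-point variances, which is where the statistically sound treatment above departs from the traditional recipe.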

  8. On Using Exponential Parameter Estimators with an Adaptive Controller

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  9. Estimation of Geodetic and Geodynamical Parameters with VieVS

    NASA Technical Reports Server (NTRS)

    Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald

    2010-01-01

    Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the connection of the single sessions is done by stacking at the normal equation level. We can determine time independent geodynamical parameters such as Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency dependent values in the diurnal band together with the resonance frequency of Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
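Stacking at the normal equation level can be sketched generically: each session contributes N_i = A_iᵀA_i and b_i = A_iᵀy_i, and the global solution solves (ΣN_i)x = Σb_i. The toy sessions below are random linear systems, not VLBI observations:

```python
import numpy as np

# Five "sessions" observing the same two global parameters
rng = np.random.default_rng(4)
x_true = np.array([0.6, 0.08])            # e.g. two time-independent parameters
N_glob = np.zeros((2, 2))
b_glob = np.zeros(2)
for _ in range(5):
    A = rng.normal(size=(30, 2))          # per-session design matrix
    y = A @ x_true + 0.001 * rng.normal(size=30)
    N_glob += A.T @ A                     # accumulate normal matrices
    b_glob += A.T @ y                     # accumulate right-hand sides
x_glob = np.linalg.solve(N_glob, b_glob)  # global solution
```

The advantage is that only the small per-session normal equations need be kept, not the raw observations, when connecting many sessions into one solution.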

  10. Identification of vehicle parameters and estimation of vertical forces

    NASA Astrophysics Data System (ADS)

    Imine, H.; Fridman, L.; Madani, T.

    2015-12-01

The aim of the present work is to estimate the vertical forces and to identify the unknown dynamic parameters of a vehicle using the sliding mode observer approach. Estimating the vertical forces requires good knowledge of dynamic parameters such as the damping coefficients, spring stiffnesses, and unsprung masses. In this paper, the suspension stiffnesses and unsprung masses have been identified by the least squares method. Real-time tests have been carried out on an instrumented static vehicle, excited vertically by hydraulic jacks. The vehicle is equipped with different sensors in order to measure its dynamics. The measurements coming from these sensors have been considered as unknown inputs of the system. However, only the roll angle and the suspension deflection measurements have been used to implement the observer. Experimental results are presented and discussed to show the quality of the proposed approach.
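The least-squares identification step can be sketched on a toy quarter-car-like model; the signals and parameter values below are synthetic stand-ins for the instrumented-vehicle measurements:

```python
import numpy as np

# Identify damping c and stiffness k in m*zdd + c*zd + k*z = F
# from sampled signals, by linear least squares.
m = 250.0                                    # assumed known sprung mass [kg]
k_true, c_true = 16000.0, 1500.0
t = np.linspace(0.0, 5.0, 2000)
z = 0.01 * np.sin(3 * t) + 0.005 * np.sin(7 * t)   # synthetic deflection
zd = np.gradient(z, t)
zdd = np.gradient(zd, t)
F = m * zdd + c_true * zd + k_true * z       # force consistent with the model

# Regression: F - m*zdd = c*zd + k*z is linear in (c, k)
A = np.column_stack([zd, z])
(c_est, k_est), *_ = np.linalg.lstsq(A, F - m * zdd, rcond=None)
```

On real data the regression is run over measured excitation cycles, and the identified parameters then feed the sliding mode observer for force estimation.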

  11. CosmoSIS: A System for MC Parameter Estimation

    SciTech Connect

    Zuntz, Joe; Paterno, Marc; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore, such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with clearly defined inputs and outputs. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  12. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that depend on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
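
    The paper adapts batch output-error estimation for real time; as a much simpler stand-in for the real-time flavor of such estimators, the sketch below (not the paper's algorithm) implements recursive least squares, which updates parameter estimates each time step as new data arrive rather than re-solving a batch problem.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update: regressor phi, measurement y,
    forgetting factor lam (lam = 1 means no forgetting)."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # innovation-driven correction
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -0.5])           # hypothetical model parameters
theta = np.zeros(2)
P = 1e3 * np.eye(2)                          # large initial uncertainty
for _ in range(200):
    phi = rng.normal(size=2)                 # regressor at this time step
    y = phi @ theta_true + rng.normal(scale=0.05)
    theta, P = rls_step(theta, P, phi, y)
print(theta)                                 # converges near theta_true
```

    A forgetting factor lam < 1 would let the estimator track slowly varying parameters, at the cost of noisier estimates.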

  13. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center-of-gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as options like those just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  14. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.

  15. Multi-criteria parameter estimation for the Unified Land Model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.

    2012-08-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water-balance-based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10⁵ km²) river basins and 250 smaller-scale (<10⁴ km²) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and the Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the data sets individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations led to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit from supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over (under) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  16. Multi-criteria parameter estimation for the unified land model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.

    2012-04-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water-balance-based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10⁵ km²) river basins and 250 smaller-scale (<10⁴ km²) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and the Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the criteria individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations led to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit from supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over (under) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  17. Revised digestive parameter estimates for the Molly cow model.

    PubMed

    Hanigan, M D; Appuhamy, J A D R N; Gregorini, P

    2013-06-01

    The Molly cow model represents nutrient digestion and metabolism based on a mechanistic representation of the key biological elements. Digestive parameters were derived ad hoc from literature observations or were assumed. Preliminary work determined that several of these parameters did not represent the true relationships. The current work was undertaken to derive ruminal and postruminal digestive parameters and to use a meta-approach to assess the effects of interactions among nutrients and identify areas of model weakness. Model predictions were compared with a database of literature observations containing 233 treatment means. Mean square prediction errors were assessed to characterize model performance. Ruminal pH prediction equations had substantial mean bias, which caused problems in fiber digestion and microbial growth predictions. The pH prediction equation was reparameterized simultaneously with several ruminal and postruminal digestion parameters, resulting in more realistic parameter estimates for ruminal fiber digestion, and moderate reductions in prediction errors for pH, neutral detergent fiber, acid detergent fiber, and microbial N outflow from the rumen; and postruminal digestion of neutral detergent fiber, acid detergent fiber, and protein. Prediction errors are still large for ruminal ammonia and outflow of starch from the rumen. The gain in microbial efficiency associated with fat feeding was found to be more than twice the original estimate, but in contrast to prior assumptions, fat feeding did not exert negative effects on fiber and protein degradation in the rumen. Microbial responses to ruminal ammonia concentrations were half saturated at 0.2 mM versus the original estimate of 1.2 mM. Residuals analyses indicated that additional progress could be made in predicting microbial N outflow, volatile fatty acid production and concentrations, and cycling of N between blood and the rumen. These additional corrections should lead to an even more

  18. Parameter estimation for inspiraling eccentric compact binaries including pericenter precession

    NASA Astrophysics Data System (ADS)

    Mikóczi, Balázs; Kocsis, Bence; Forgács, Péter; Vasúth, Mátyás

    2012-11-01

    Inspiraling supermassive black hole binary systems with high orbital eccentricity are important sources for space-based gravitational wave observatories like the Laser Interferometer Space Antenna. Eccentricity adds orbital harmonics to the Fourier transform of the gravitational wave signal, and relativistic pericenter precession leads to a three-way splitting of each harmonic peak. We study the parameter estimation accuracy for such waveforms with different initial eccentricity, using the Fisher matrix method and a Monte Carlo sampling of the initial binary orientation. The eccentricity improves the parameter estimation by breaking degeneracies between different parameters. In particular, we find that the source localization precision improves significantly for higher-mass binaries due to eccentricity. The typical sky position errors are ~1° for a nonspinning, equal-mass 10⁷ M⊙ binary at redshift z=1, if the initial eccentricity 1 yr before merger is e₀~0.6. Pericenter precession does not affect the source localization accuracy significantly, but it does further improve the mass and eccentricity estimation accuracy systematically by a factor of 3-10 for masses between 10⁶ M⊙ and 10⁷ M⊙ for e₀~0.3.
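
    The Fisher matrix method used above can be sketched on a toy model (a simple sinusoid, not the eccentric waveform of the paper): for a signal h(t; θ) in white noise of variance σ², the Fisher matrix is F_ij = Σ_t (∂h/∂θ_i)(∂h/∂θ_j)/σ², and the inverse Fisher matrix bounds the parameter covariance (Cramér-Rao).

```python
import numpy as np

# Toy Fisher-matrix forecast for h(t; a, f) = a*sin(2*pi*f*t) in white noise.
t = np.linspace(0.0, 1.0, 1000)
a, f, sigma = 1.0, 10.0, 0.1                 # illustrative signal and noise

dh_da = np.sin(2 * np.pi * f * t)                       # partial wrt amplitude
dh_df = a * 2 * np.pi * t * np.cos(2 * np.pi * f * t)   # partial wrt frequency

D = np.column_stack([dh_da, dh_df])
F = D.T @ D / sigma**2        # Fisher information matrix
cov = np.linalg.inv(F)        # Cramer-Rao bound on the parameter covariance
print(np.sqrt(np.diag(cov)))  # forecast 1-sigma errors on (a, f)
```

    Degeneracy breaking, as described in the abstract, shows up here as reduced off-diagonal correlation in `cov`, which shrinks the marginal errors.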

  19. Estimating Hydraulic Parameters When Poroelastic Effects Are Significant

    USGS Publications Warehouse

    Berg, S.J.; Hsieh, P.A.; Illman, W.A.

    2011-01-01

    For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.

  20. Hydraulic parameters estimation from well logging resistivity and geoelectrical measurements

    NASA Astrophysics Data System (ADS)

    Perdomo, S.; Ainchil, J. E.; Kruse, E.

    2014-06-01

    In this paper, a methodology is suggested for deriving hydraulic parameters, such as hydraulic conductivity or transmissivity combining classical hydrogeological data with geophysical measurements. Estimates values of transmissivity and conductivity, with this approach, can reduce uncertainties in numerical model calibration and improve data coverage, reducing time and cost of a hydrogeological investigation at a regional scale. The conventional estimation of hydrogeological parameters needs to be done by analyzing wells data or laboratory measurements. Furthermore, to make a regional survey many wells should be considered, and the location of each one plays an important role in the interpretation stage. For this reason, the use of geoelectrical methods arises as an effective complementary technique, especially in developing countries where it is necessary to optimize resources. By combining hydraulic parameters from pumping tests and electrical resistivity from well logging profiles, it was possible to adjust three empirical laws in a semi-confined alluvial aquifer in the northeast of the province of Buenos Aires (Argentina). These relations were also tested to be used with surficial geoelectrical data. The hydraulic conductivity and transmissivity estimated in porous material were according to expected values for the region (20 m/day; 457 m2/day), and are very consistent with previous results from other authors (25 m/day and 500 m2/day). The methodology described could be used with similar data sets and applied to other areas with similar hydrogeological conditions.

  1. An investigation of numerical grid effects in parameter estimation.

    PubMed

    Zyvoloski, George A; Vesselinov, Velimir V

    2006-01-01

    Modern ground water characterization and remediation projects routinely require calibration and inverse analysis of large three-dimensional numerical models of complex hydrogeological systems. Hydrogeologic complexity can be prompted by various aquifer characteristics including complicated spatial hydrostratigraphy and aquifer recharge from infiltration through an unsaturated zone. To keep the numerical models computationally efficient, compromises are frequently made in model development, particularly about the resolution of the computational grid and the numerical representation of the governing flow equation. The compromise is required so that the model can be used in calibration, parameter estimation, performance assessment, and analysis of sensitivity and uncertainty in model predictions. However, grid properties and resolution as well as applied computational schemes can have large effects on forward-model predictions and on inverse parameter estimates. We investigate these effects for a series of one- and two-dimensional synthetic cases representing saturated and variably saturated flow problems. We show that "conformable" grids, despite neglecting terms in the numerical formulation, can lead to accurate solutions of problems with complex hydrostratigraphy. Our analysis also demonstrates that, despite slower computer run times and higher memory requirements for a given problem size, the control volume finite-element method showed an advantage over finite-difference techniques in accuracy of parameter estimation for a given grid resolution for most of the test problems.

  2. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.

  3. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
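
    The gradient approach to SSE minimization described above can be sketched with a deliberately simple example (the model, data, and step size are all hypothetical): fit the rate k of an exponential decay y = exp(-k·t) to synthetic data by gradient descent on the sum of squared errors.

```python
import numpy as np

# Fit k in y = exp(-k*t) by minimizing the sum of squared errors (SSE)
# with plain fixed-step gradient descent (illustrative toy problem).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 100)
k_true = 0.8
data = np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

def grad_sse(k):
    """Analytic gradient of SSE(k) = sum((exp(-k*t) - data)^2)."""
    r = np.exp(-k * t) - data                # residuals
    return np.sum(2.0 * r * (-t) * np.exp(-k * t))

k = 0.1                                      # initial guess
for _ in range(2000):
    k -= 1e-3 * grad_sse(k)                  # fixed-step gradient descent
print(k)                                     # close to k_true
```

    As the article notes, a pure gradient method like this finds only a local minimum; adding a stochastic component or a sampling approach is what allows escape from local minima.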

  4. Estimating cellular parameters through optimization procedures: elementary principles and applications.

    PubMed

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest. PMID:25784880

  5. Estimating cellular parameters through optimization procedures: elementary principles and applications

    PubMed Central

    Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki

    2015-01-01

    Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest. PMID:25784880

  6. Spacecraft design impacts on the post-Newtonian parameter estimation

    NASA Astrophysics Data System (ADS)

    Schuster, Anja Katharina; et al.

    2015-08-01

    The ESA mission BepiColombo, reaching out to explore the elusive planet Mercury, features unprecedented tracking techniques. The highly precise orbit determination around Mercury is a compelling opportunity for a modern test of General Relativity (GR). Using the software tool GRETCHEN, which incorporates the Square Root Information Filter (SRIF), MPO's orbit is simulated and the post-Newtonian parameters (PNP) are estimated. In this work, the influence of a specific constraint of the Mercury Orbiter Radio science Experiment (MORE) on the achievable accuracy of the PNP estimates is investigated: the spacecraft's power system design requires that the Ka-band transponder be switched off within ±35° of perihelion, so that radiometric data are gathered only via X band. This analysis quantifies the impact of this constraint on the achievable accuracy of the PNP estimates. On a bigger scale, if GR shows a violation at a detectable level, this would inevitably lead to its invalidation.

  7. Estimation of multiexponential fluorescence decay parameters using compressive sensing.

    PubMed

    Yang, Sejung; Lee, Joohyun; Lee, Youmin; Lee, Minyung; Lee, Byung-Uk

    2015-09-01

    Fluorescence lifetime imaging microscopy (FLIM) is a microscopic imaging technique that presents an image of fluorophore lifetimes. It circumvents the problems of typical imaging methods, such as intensity attenuation with depth, since a lifetime is independent of the excitation intensity or fluorophore concentration. The lifetime is estimated from the time sequence of photon counts observed with signal-dependent noise, which has a Poisson distribution. Conventional methods usually estimate single or biexponential decay parameters. However, a lifetime component has a distribution or width, because the lifetime depends on macromolecular conformation or inhomogeneity. We present a novel algorithm based on a sparse representation which can estimate the distribution of lifetimes. We verify the enhanced performance through simulations and experiments.

  8. Framework for estimating tumour parameters using thermal imaging

    PubMed Central

    Umadevi, V.; Raghavan, S.V.; Jaipurkar, Sandeep

    2011-01-01

    Background & objectives: Non-invasive and non-ionizing medical imaging techniques are safe, as they can be used repeatedly on an individual and are applicable across all age groups. Breast thermography is a non-invasive and non-ionizing medical imaging technique that can potentially be used in breast cancer detection and diagnosis. In this study, we used breast thermography to estimate the tumour contour from the breast skin surface temperature. Methods: We proposed a framework called infrared thermography based image construction (ITBIC) to estimate tumour parameters such as size and depth from cancerous breast skin surface temperature data. The Markov Chain Monte Carlo method was used to enhance the accuracy of estimation in order to reflect the realistic situation more clearly. Results: We validated our method experimentally using Watermelon and Agar models. For the Watermelon experiment, the errors in estimating the size and depth parameters were 1.5 and 3.8 per cent, respectively; for the Agar model they were 0 and 8 per cent, respectively. Further, thermal breast screening was done on female volunteers and compared with magnetic resonance imaging. The results were positive and encouraging. Interpretation & conclusions: ITBIC is a computationally fast and potentially affordable thermal imaging system. Such a system will be useful for doctors or radiologists in breast cancer diagnosis. PMID:22199114

  9. Estimating demographic parameters using hidden process dynamic models.

    PubMed

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775
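
    The hidden-process idea of running a true-state series alongside an observation series can be sketched with the forward algorithm of a two-state hidden Markov model. The states, detection probabilities, and observation history below are toy numbers (not from the paper): an animal is alive or dead, survives with probability 0.8 per occasion, and if alive is detected with probability 0.6.

```python
import numpy as np

# Forward algorithm for a 2-state capture-recapture-style HMM (toy numbers).
P = np.array([[0.8, 0.2],     # transitions: alive -> alive/dead; dead absorbing
              [0.0, 1.0]])
B = np.array([[0.6, 0.4],     # emissions P(obs | state): alive detected w.p. 0.6
              [0.0, 1.0]])    # dead is never detected
obs = [0, 1, 0]               # 0 = detected, 1 = not detected

alpha = np.array([1.0, 0.0]) * B[:, obs[0]]   # known alive at release, first obs
for o in obs[1:]:
    alpha = (alpha @ P) * B[:, o]             # propagate states, weight by obs
print(alpha.sum())            # likelihood of the detection history
```

    Maximizing this likelihood over the survival and detection probabilities is the frequentist route mentioned in the abstract; the same model can be fitted in a Bayesian state-space framework via MCMC.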

  10. A method of estimating optimal catchment model parameters

    NASA Astrophysics Data System (ADS)

    Ibrahim, Yaacob; Liong, Shie-Yui

    1993-09-01

    A review of a calibration method developed earlier (Ibrahim and Liong, 1992) is presented. The method generates optimal values for single events. It entails randomizing the calibration parameters over bounds such that a system response under consideration is bounded. Within the bounds, which are narrow and generated automatically, explicit response surface representation of the response is obtained using experimental design techniques and regression analysis. The optimal values are obtained by searching on the response surface for a point at which the predicted response is equal to the measured response and the value of the joint probability density function at that point in a transformed space is the highest. The method is demonstrated on a catchment in Singapore. The issue of global optimal values is addressed by applying the method on wider bounds. The results indicate that the optimal values arising from the narrow set of bounds are, indeed, global. Improvements which are designed to achieve comparably accurate estimates but with less expense are introduced. A linear response surface model is used. Two approximations of the model are studied. The first is to fit the model using data points generated from simple Monte Carlo simulation; the second is to approximate the model by a Taylor series expansion. Very good results are obtained from both approximations. Two methods of obtaining a single estimate from the individual event's estimates of the parameters are presented. The simulated and measured hydrographs of four verification storms using these estimates compare quite well.

  11. Accelerated gravitational wave parameter estimation with reduced order modeling.

    PubMed

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable. PMID:25763948

  12. CosmoSIS: A system for MC parameter estimation

    SciTech Connect

    Bridle, S.; Dodelson, S.; Jennings, E.; Kowalkowski, J.; Manzotti, A.; Paterno, M.; Rudd, D.; Sehrish, S.; Zuntz, J.

    2015-01-01

    CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
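    CosmoSIS separates samplers (which explore the parameter space) from modules (which return a likelihood at a sample point). As a minimal, framework-free illustration of that division of labor, the sketch below implements a plain Metropolis sampler for a toy one-parameter Gaussian likelihood; the function names and the toy model are illustrative, not CosmoSIS's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "module": Gaussian log-likelihood for data with unknown mean mu.
data = rng.normal(1.5, 1.0, size=200)

def log_likelihood(mu):
    return -0.5 * np.sum((data - mu) ** 2)

# A stand-in "sampler": plain Metropolis, driving exploration of the 1-D space.
def metropolis(logl, x0, n_steps=5000, step=0.2):
    chain = [x0]
    lp = logl(x0)
    for _ in range(n_steps):
        prop = chain[-1] + rng.normal(0.0, step)
        lp_prop = logl(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
            lp = lp_prop
            chain.append(prop)
        else:
            chain.append(chain[-1])
    return np.array(chain)

chain = metropolis(log_likelihood, x0=0.0)
print(chain[1000:].mean())  # posterior mean tracks the sample mean of the data
```

Because the sampler only ever calls `logl`, swapping in a cosmological (or HEP) likelihood changes nothing on the sampler side, which is the design point the abstract makes.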

  13. CosmoSIS: A system for MC parameter estimation

    DOE PAGES

    Bridle, S.; Dodelson, S.; Jennings, E.; Kowalkowski, J.; Manzotti, A.; Paterno, M.; Rudd, D.; Sehrish, S.; Zuntz, J.

    2015-01-01

    CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.

  14. Effects of parameter estimation on maximum-likelihood bootstrap analysis.

    PubMed

    Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack

    2010-08-01

    Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data.
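    The nonparametric bootstrap underlying this procedure resamples the data with replacement and re-estimates the parameters on each replicate. A minimal sketch on a toy maximum-likelihood estimator (the rate of an exponential sample, not a phylogenetic model; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=100)   # true rate = 0.5

def mle_rate(x):
    # MLE of the exponential rate is the reciprocal of the sample mean.
    return 1.0 / x.mean()

# Nonparametric bootstrap: resample with replacement, re-estimate each time.
boot = np.array([mle_rate(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MLE = {mle_rate(data):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```

In ML phylogenetics each replicate additionally requires a tree search, which is exactly why the shortcuts studied in this paper (fixing substitution-model parameters, reduced heuristic searches) are attractive.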

  15. Entanglement detection and parameter estimation of quantum channels

    NASA Astrophysics Data System (ADS)

    Suzuki, Jun

    2016-10-01

    We derive a general criterion to detect entangled states in multipartite systems based on the symmetric logarithmic derivative quantum Fisher information. This criterion is a direct consequence of the fact that separable states do not improve the accuracy when estimating a one-parameter family of quantum channels. Our result generalizes the previously known criterion for the one-parameter unitary channel to any one-parameter quantum channel. Several variants of the proposed criterion are also given, and the general structure behind this class of entanglement criteria based on quantum Fisher information is then revealed. We discuss several examples to illustrate our criterion. In the last part, we briefly show how the proposed criterion can be extended to a more general setting applicable to a certain class of open quantum systems, and we discuss how to detect entangled states even in the presence of decoherence.

  16. Recursivity in Lingua Cosmica

    NASA Astrophysics Data System (ADS)

    Ollongren, Alexander

    2011-02-01

    In a sequence of papers on the topic of message construction for interstellar communication by means of a cosmic language, the present author has discussed various significant requirements such a lingua should satisfy. The author's Lingua Cosmica is a (meta) system for annotating the contents of possibly large-scale messages for ETI. LINCOS, based on formal constructive logic, was primarily designed for dealing with the logical contents of messages but is also applicable for denoting structural properties of more general abstractions embedded in such messages. The present paper explains ways and means for achieving this in a special case: recursive entities. As usual, two stages are involved: first the domain of discourse is enriched with suitable representations of the entities concerned, after which properties over them can be dealt with within the system itself. As a representative example, the case of Russian dolls (matryoshkas) is discussed in some detail, and relations with linguistic structures in natural languages are briefly explored.

  17. Biases on cosmological parameter estimators from galaxy cluster number counts

    SciTech Connect

    Penna-Lima, M.; Wuensche, C.A.; Makler, M. E-mail: martin@cbpf.br

    2014-05-01

    Sunyaev-Zel'dovich (SZ) surveys are promising probes of cosmology, in particular of Dark Energy (DE), given their ability to find distant clusters and provide estimates for their mass. However, current SZ catalogs contain tens to hundreds of objects, and maximum likelihood estimators may present biases for such sample sizes. In this work we study estimators from cluster abundance for some cosmological parameters, in particular the DE equation of state parameter w_0, the amplitude of density fluctuations σ_8, and the Dark Matter density parameter Ω_c. We begin by deriving an unbinned likelihood for cluster number counts, showing that it is equivalent to the one commonly used in the literature. We use the Monte Carlo approach to determine the presence of bias using this likelihood and study its behavior with both the area and depth of the survey, and the number of cosmological parameters fitted. Our fiducial models are based on the South Pole Telescope (SPT) SZ survey. Assuming perfect knowledge of mass and redshift, some estimators have non-negligible biases. For example, the bias of σ_8 corresponds to about 40% of its statistical error bar when fitted together with Ω_c and w_0. Including a SZ mass-observable relation decreases the relevance of the bias for the typical sizes of current SZ surveys. Considering a joint likelihood for cluster abundance and the so-called 'distance priors', we obtain that the biases are negligible compared to the statistical errors. However, we show that the biases from SZ estimators do not go away with increasing sample sizes and may become the dominant source of error for an all-sky survey at the SPT sensitivity. Finally, we compute the confidence regions for the cosmological parameters using Fisher matrix and profile likelihood approaches, showing that they are compatible with the Monte Carlo ones. The results of this work validate the use of the current maximum likelihood methods for
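    The Monte Carlo approach to detecting estimator bias (simulate many catalogs from a fiducial model, fit each, and compare the mean estimate to the truth) can be illustrated on a textbook case unrelated to the cluster-abundance likelihood itself: the maximum-likelihood variance estimator, which divides by n and is biased by -σ²/n.

```python
import numpy as np

rng = np.random.default_rng(2)
true_var, n, reps = 4.0, 10, 20000

# Monte Carlo estimate of the bias of the variance MLE (np.var with ddof=0
# divides by n, so its expectation is (n-1)/n * sigma^2).
mles = np.array([np.var(rng.normal(0.0, np.sqrt(true_var), n))
                 for _ in range(reps)])
bias = mles.mean() - true_var
print(f"estimated bias = {bias:.3f}, theory = {-true_var / n:.3f}")
```

As in the paper, the bias here is a small-sample effect: for the toy case it shrinks like 1/n, whereas the point of the abstract is that the SZ estimator biases do *not* vanish with sample size.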

  18. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  19. Limitations of polynomial chaos in Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Lu, F.; Morzfeld, M.; Tu, X.; Chorin, A. J.

    2014-12-01

    In many science or engineering problems one needs to estimate parameters in a model on the basis of noisy data. In a Bayesian approach, prior information and the likelihood of the model and data are combined to yield a posterior that describes the parameters. The posterior can be represented by Monte Carlo sampling, which requires repeated evaluation of the posterior, which in turn requires repeated evaluation of the model. This is expensive if the model is complex or if the dimension of the parameters is high. Polynomial chaos expansions (PCE) have been used to reduce the computational cost by providing an approximate representation of the model based on the prior and, hence, creating a surrogate posterior. This surrogate posterior can be evaluated inexpensively and without solving the model. Here we investigate the accuracy of the surrogate posterior and PCE-based samplers. We show, by analysis of the small noise setting, that the surrogate posterior can be very different from the posterior when the data contains significant information beyond what is assumed in the prior. In this case, the PCE-based parameter estimates are inaccurate. The accuracy can be improved by adaptively increasing the order of the PCE, but the cost may increase too fast for this to be efficient. We illustrate the theory with an example from subsurface hydrodynamics in which we estimate the permeability on the basis of noisy pressure measurements. Our numerical results confirm what we found in theory and indicate that an advanced MC sampler which uses data to generate effective samples can be more efficient than a PCE-based sampler.
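    The failure mode described here can be reproduced in one dimension: a polynomial surrogate fitted under the prior misrepresents the forward model exactly where informative data concentrates the posterior. The sketch below uses an ordinary least-squares cubic as a crude stand-in for a truncated PCE; the forward model f(θ) = exp(θ), the N(0,1) prior, and all numbers are illustrative choices, not the paper's hydrodynamics example.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(theta):                     # toy forward model
    return np.exp(theta)

# Prior-based polynomial surrogate (stand-in for a truncated PCE):
# fit a cubic to the model over samples drawn from the N(0,1) prior.
prior_samples = rng.normal(0.0, 1.0, 400)
coeffs = np.polyfit(prior_samples, model(prior_samples), deg=3)
surrogate = np.poly1d(coeffs)

# Small-noise, informative data far in the prior tail: y = model(3) + noise.
y, sigma = model(3.0), 0.05
theta = np.linspace(-1.0, 4.0, 500)

def log_post(f):
    # Gaussian likelihood times the N(0,1) prior, up to a constant.
    return -0.5 * ((y - f(theta)) / sigma) ** 2 - 0.5 * theta ** 2

true_map = theta[np.argmax(log_post(model))]
surr_map = theta[np.argmax(log_post(surrogate))]
print(true_map, surr_map)  # the surrogate MAP can land far from the true MAP
```

Because the cubic is accurate only where the prior puts mass, the surrogate posterior peaks at the wrong θ, which is the paper's small-noise argument in miniature.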

  20. Robust Bayesian estimation of nonlinear parameters on SE(3) Lie group

    NASA Astrophysics Data System (ADS)

    Kuehnel, Frank O.

    2004-11-01

    The basic challenge in autonomous robotic exploration is to safely interact with natural environments. An essential part of that challenge is 3D map building. In robotics research this problem is addressed as simultaneous localization and mapping (SLAM); in computer vision it is termed structure from motion (SFM). The common underlying problem is the accurate estimation of the camera pose. Uncertainty information about the pose estimates is essential for a recursive inference scheme. We show that the pose parametrization plays an important role in the finite parametric representation. In the case of sparse observations (weak evidence), the full exponential Lie-Cartan coordinates of the first kind are most suitable when assuming a Gaussian noise model on the measurements. Further, we address pose estimation from a sequence of images and introduce the marginalized MAP estimator, which is numerically more stable and efficient than the joint estimate (bundle adjustment) used in computer vision.

  1. Automatic line detection in document images using recursive morphological transforms

    NASA Astrophysics Data System (ADS)

    Kong, Bin; Chen, Su S.; Haralick, Robert M.; Phillips, Ihsin T.

    1995-03-01

    In this paper, we describe a system that detects lines of various types, e.g., solid lines and dotted lines, on document images. The main techniques are based on recursive morphological transforms, namely the recursive opening and closing transforms. The advantages of the transforms are that they can perform binary opening and closing with any sized structuring element simultaneously in constant time per pixel, and that they offer a solution to morphological image analysis problems where the sizes of the structuring elements have to be determined after examination of the image itself. The system is evaluated on about 1,200 fully ground-truthed IRS tax form images of different qualities. The line detection output is compared with a set of hand-drawn ground-truth lines, and statistics such as the number and rate of correct detections, missed detections, and false alarms are calculated. The performance of 32 algorithms for solid line detection is compared to find the best one. The optimal algorithm tuning parameter settings could be estimated on the fly using a regression tree.
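    Plain (non-recursive) binary opening with a long horizontal structuring element already conveys the core idea: foreground runs shorter than the element vanish, while solid lines survive. A toy sketch using `scipy.ndimage`; the recursive transforms of the paper additionally handle all element sizes at once, which this sketch does not attempt.

```python
import numpy as np
from scipy.ndimage import binary_opening

# Toy binary "document": one long solid horizontal line plus text-like blobs.
img = np.zeros((20, 60), dtype=bool)
img[10, 5:55] = True          # solid line, 50 pixels long
img[3, 8:11] = True           # short blob (text)
img[15, 30:33] = True         # short blob (text)

# Opening with a long horizontal structuring element keeps only horizontal
# runs at least as long as the element, i.e. the solid line.
selem = np.ones((1, 25), dtype=bool)
lines = binary_opening(img, structure=selem)

print(int(lines.sum()), int(img[10].sum()))  # only the long line's pixels survive
```

Choosing the element length is exactly the tuning problem the paper addresses: the recursive transforms make that choice cheap to defer until after the image has been examined.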

  2. Estimation of genetic parameters for reproductive traits in Shall sheep.

    PubMed

    Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein

    2013-06-01

    The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in northwestern Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for the fixed effects included in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P<0.05). Genetic parameters were estimated using the restricted maximum likelihood procedure under repeatability animal models. Direct heritability estimates were 0.02, 0.01, 0.47, 0.40, 0.15, and 0.03 for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, respectively, and the corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW, and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. The results showed that the highest heritability estimates were for LMWLB and LMWLW, suggesting that direct selection based on these traits could be effective. Also, the strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep.

  3. Estimation of genetic parameters for reproductive traits in Shall sheep.

    PubMed

    Amou Posht-e-Masari, Hesam; Shadparvar, Abdol Ahad; Ghavi Hossein-Zadeh, Navid; Hadi Tavatori, Mohammad Hossein

    2013-06-01

    The objective of this study was to estimate genetic parameters for reproductive traits in Shall sheep. Data included 1,316 records on the reproductive performance of 395 Shall ewes from 41 sires and 136 dams, collected from 2001 to 2007 at the Shall breeding station in Qazvin province in northwestern Iran. The studied traits were litter size at birth (LSB), litter size at weaning (LSW), litter mean weight per lamb born (LMWLB), litter mean weight per lamb weaned (LMWLW), total litter weight at birth (TLWB), and total litter weight at weaning (TLWW). Tests of significance for the fixed effects included in the statistical model were performed using the general linear model procedure of SAS. The effects of lambing year and ewe age at lambing were significant (P<0.05). Genetic parameters were estimated using the restricted maximum likelihood procedure under repeatability animal models. Direct heritability estimates were 0.02, 0.01, 0.47, 0.40, 0.15, and 0.03 for LSB, LSW, LMWLB, LMWLW, TLWB, and TLWW, respectively, and the corresponding repeatabilities were 0.02, 0.01, 0.73, 0.41, 0.27, and 0.03. Genetic correlation estimates between traits ranged from -0.99 for LSW-LMWLW to 0.99 for LSB-TLWB, LSW-TLWB, and LSW-TLWW. Phenotypic correlations ranged from -0.71 for LSB-LMWLW to 0.98 for LSB-TLWW, and environmental correlations ranged from -0.89 for LSB-LMWLW to 0.99 for LSB-TLWW. The results showed that the highest heritability estimates were for LMWLB and LMWLW, suggesting that direct selection based on these traits could be effective. Also, the strong positive genetic correlations of LMWLB and LMWLW with other traits may improve meat production efficiency in Shall sheep. PMID:23334381

  4. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model repeatedly until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using prior information alone: the variation of the uncertain parameters decreases, and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques

  5. Estimation of Eruption Source Parameters from Plume Growth Rate

    NASA Astrophysics Data System (ADS)

    Pouget, Solene; Bursik, Marcus; Webley, Peter; Dehn, Jon; Pavalonis, Michael; Singh, Tarunraj; Singla, Puneet; Patra, Abani; Pitman, Bruce; Stefanescu, Ramona; Madankan, Reza; Morton, Donald; Jones, Matthew

    2013-04-01

    The eruption of Eyjafjallajokull, Iceland in April and May 2010 brought to light the hazards of airborne volcanic ash and the importance of Volcanic Ash Transport and Dispersion (VATD) models for estimating the concentration of ash with time. These models require Eruption Source Parameters (ESP) as input, which typically include the plume height, the mass eruption rate, the duration of the eruption, and the particle size distribution. However, much of the time these ESP are unknown or poorly known a priori. We show that the mass eruption rate can be estimated from the growth rate of the downwind plume or umbrella cloud. A simple version of the continuity equation can be applied to the growth of either an umbrella cloud or the downwind plume. The continuity equation coupled with the momentum equation, using only inertial and gravitational terms, provides another model. Numerical modeling or scaling relationships can be used, as necessary, to provide values for unknown or unavailable parameters. Applying these models to data on plume geometry provided by satellite imagery allows direct estimation of plume volumetric and mass growth with time. To test our methodology, we compared our results with five well-studied and well-characterized historical eruptions: Mount St. Helens, 1980; Pinatubo, 1991; Redoubt, 1990; Hekla, 2000; and Eyjafjallajokull, 2010. These tests show that the methodologies yield results comparable to or better than currently accepted methodologies of ESP estimation. We then applied the methodology to umbrella clouds produced by the eruptions of Okmok, 12 July 2008, and Sarychev Peak, 12 June 2009, and to the downwind plumes produced by the eruptions of Hekla, 2000; Kliuchevskoi, 1 October 1994; Kasatochi, 7-8 August 2008; and Bezymianny, 1 September 2012. The new methods allow a fast, remote assessment of the mass eruption rate, even for remote volcanoes.
They thus provide an additional path to estimation of the ESP and the forecasting

  6. Trapping phenomenon of the parameter estimation in asymptotic quantum states

    NASA Astrophysics Data System (ADS)

    Berrada, K.

    2016-09-01

    In this paper, we study in detail the behavior of the precision of parameter estimation in open quantum systems using the quantum Fisher information (QFI). In particular, we study the sensitivity of the estimation for a two-qubit system evolving under Kossakowski-type quantum dynamical semigroups of completely positive maps. In such an environment, the precision of the estimation can persist asymptotically, depending on the initial parameters. We find that the QFI can be resistant to the action of the environment with respect to the initial asymptotic states, and it can persist even in the asymptotic long-time regime. In addition, our results provide further evidence that pure and separable mixed input states may enhance quantum metrology. These features make quantum states in this kind of environment a good candidate for the implementation of different schemes of quantum optics and information with high precision. Finally, we show that this quantity may be proposed to detect the amount of total quantum information that the whole state contains with respect to projective measurements.

  7. Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2011-01-01

    This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.

  8. Exponential depression as a test of estimated decay parameters

    NASA Astrophysics Data System (ADS)

    Isenberg, Irvin; Small, Enoch W.

    1982-09-01

    A new test for judging the goodness of estimated decay parameters is presented. The test is based on the fact that a convolution is invariant under exponential depression. In the absence of significant error the estimated parameters will then remain constant as the degree of depression is varied over a finite range. In the presence of error, the parameters will vary. Up to now, no test has existed to see if moment index displacement corrects errors to a satisfactory extent in any given analysis. It has always been necessary to have some a priori knowledge of the type of error that limited the analysis. The test presented here removes that requirement. In addition, it is shown that the test performs better than a visual inspection of residual and autocorrelation plots in judging analyses when decays are closely spaced, even in the absence of nonrandom errors. The test is useful in accepting or rejecting analyses, with or without automatic error correction, in helping to discriminate between different models of sample decay, and in tuning pulse fluorometers for optimal performance. The test is, in principle, independent of the method of moments; it may be used with any method which needs only a small amount of computer time, and which is a statistically resistant procedure.
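    For a single-exponential decay without an instrument response, the test reduces to: multiply the data by exp(-kt), fit the depressed rate, subtract k back out, and check that the recovered lifetime is constant as k varies. A sketch on synthetic, nearly error-free data; the log-linear fit and all numbers are illustrative simplifications (the paper's setting involves convolved decays and the method of moments).

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)
true_tau = 2.0
rng = np.random.default_rng(4)
decay = np.exp(-t / true_tau) * (1.0 + 0.001 * rng.normal(size=t.size))

# Depress the data by exp(-k*t), fit the depressed single-exponential rate
# by a log-linear fit, then subtract k. For good data the recovered tau is
# stable as k varies; systematic error would make it drift.
recovered = []
for k in [0.0, 0.1, 0.2, 0.3]:
    depressed = decay * np.exp(-k * t)
    slope = np.polyfit(t, np.log(depressed), 1)[0]   # = -(1/tau + k)
    recovered.append(1.0 / (-slope - k))
print(recovered)  # all values close to true_tau for nearly error-free data
```

The diagnostic is the *spread* of the recovered values across k, not any single fit: a drift with k signals error that the analysis has not corrected.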

  9. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.

  10. Parameter Estimation Analysis for Hybrid Adaptive Fault Tolerant Control

    NASA Astrophysics Data System (ADS)

    Eshak, Peter B.

    Research efforts have increased in recent years toward the development of intelligent fault tolerant control laws, which are capable of helping the pilot to safely maintain aircraft control at post-failure conditions. Researchers at West Virginia University (WVU) have been actively involved in the development of fault tolerant adaptive control laws in all three major categories: direct, indirect, and hybrid. The first implemented design to provide adaptation was a direct adaptive controller, which used artificial neural networks to generate augmentation commands in order to reduce the modeling error. Indirect adaptive laws were implemented in another controller, which utilized online parameter identification (PID) to estimate and update the controller parameters. Finally, a new controller design was introduced, which integrated both direct and indirect control laws; this controller is known as the hybrid adaptive controller. This last design outperformed the two earlier ones, requiring less neural network effort and achieving better tracking quality. The performance of the online PID plays an important role in the quality of the hybrid controller; therefore, the quality of the estimation is of great importance. Unfortunately, the PID is not perfect, and the online estimation process has some inherent issues: the online PID estimates are primarily affected by delays and biases. In order to ensure that reliable estimates are passed to the controller, the estimator consumes some time to converge. Moreover, the estimator will often converge to a biased value. This thesis conducts a sensitivity analysis of the estimation issues, delay and bias, and their effect on tracking quality. In addition, the performance of the hybrid controller as compared to the direct adaptive controller is explored. To serve this purpose, a simulation environment in MATLAB/SIMULINK has been created. 
The simulation environment is customized to provide the user with the flexibility to add different combinations of biases and delays to

  11. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
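    A fixed-effects version of the Gompertz fit can be sketched with `scipy.optimize.curve_fit`; the parametrization W(t) = Wm·exp(-b·exp(-c·t)), the synthetic ages and weights, and the noise level are all illustrative assumptions. The paper's mixed model would add bird-level random effects on top of this, which this sketch omits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gompertz growth curve: W(t) = Wm * exp(-b * exp(-c * t))
def gompertz(t, wm, b, c):
    return wm * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(5)
age = np.linspace(0.0, 12.0, 40)     # hypothetical ages (weeks)
bw = gompertz(age, 4000.0, 4.0, 0.4) * (1.0 + 0.02 * rng.normal(size=age.size))

# Nonlinear least-squares fit of the three growth parameters.
params, _ = curve_fit(gompertz, age, bw, p0=[3500.0, 3.0, 0.3])
print(params)  # estimates near the generating values (Wm, b, c)
```

In the fixed-effects form every bird shares one (Wm, b, c); partitioning the residual into between- and within-bird components, as the mixed model does, is what reduced the residual variance by over 55% in the study.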

  12. Estimation of Aircraft Nonlinear Unsteady Parameters From Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Murphy, Patrick C.

    1998-01-01

    Aerodynamic equations were formulated for an aircraft in one-degree-of-freedom large amplitude motion about each of its body axes. The model formulation, based on indicial functions, separated the resulting aerodynamic forces and moments into static terms, purely rotary terms, and unsteady terms. Model identification from experimental data combined stepwise regression and maximum likelihood estimation in a two-stage optimization algorithm that can identify the unsteady term and rotary term if necessary. The identification scheme was applied to oscillatory data in two examples. The model identified from experimental data fit the data well; however, some parameters were estimated with limited accuracy. The resulting model was a good predictor for oscillatory and ramp input data.

  13. Area-to-point parameter estimation with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Murakami, Daisuke; Tsutsumi, Morito

    2015-07-01

    The modifiable areal unit problem (MAUP) is the problem whereby the aggregation units of the data influence the results of spatial data analysis. Standard GWR, which ignores aggregation mechanisms, cannot serve as an efficient countermeasure to the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the proposed method is robust to the MAUP when the spatial scales of aggregation are not too global compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.

  14. Earth-moon system: Dynamics and parameter estimation

    NASA Technical Reports Server (NTRS)

    Breedlove, W. J., Jr.

    1975-01-01

    A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.

  15. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

    2013-07-15

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.

  16. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.
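    The matching-pursuit idea above can be sketched with a toy dictionary of decaying sinusoids: the atom whose frequency and decay rate correlate most strongly with the response signal yields the modal estimates. This is an illustrative grid search, not the Haley or Laplace wavelet construction itself, and all numerical values are hypothetical.

```python
import math

def atom(n, dt, freq, decay):
    """Unit-energy sampled decaying sinusoid (a Laplace-wavelet-like atom)."""
    w = [math.exp(-decay * i * dt) * math.sin(2 * math.pi * freq * i * dt)
         for i in range(n)]
    norm = math.sqrt(sum(x * x for x in w))
    return [x / norm for x in w]

def best_atom(signal, dt, freqs, decays):
    """One greedy matching-pursuit step: return the (freq, decay) pair whose
    atom has the largest absolute correlation with the signal."""
    best, best_score = None, -1.0
    for f in freqs:
        for d in decays:
            a = atom(len(signal), dt, f, d)
            score = abs(sum(s * x for s, x in zip(signal, a)))
            if score > best_score:
                best, best_score = (f, d), score
    return best
```

    Tracking time-varying damping, as in the paper, would repeat this step on successive windows of the accelerometer signal and subtract each matched atom before the next pass.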

  17. Enhancing the Precision of Parameter Estimation in Band Gap

    NASA Astrophysics Data System (ADS)

    Huang, J.; Zhan, Q.; Liu, Z. K.

    2016-09-01

    Recently, the dynamics of quantum Fisher information (QFI) in various environments have been investigated and many schemes to overcome the drawback of decoherence have been designed. Here we propose the pseudomode method to enhance the phase-parameter precision of optimal quantum estimation for a qubit coupled to a non-Markovian structured environment. We find that the QFI can be enhanced in the weak-coupling regime with a non-perfect band gap and can be trapped permanently at a large value in the perfect band gap. The effects of qubit-pseudomode detuning and of the reservoir spectrum are discussed, and a reasonable physical explanation is given.

  18. Confidence Region Estimation for Groundwater Parameter Identification Problems

    NASA Astrophysics Data System (ADS)

    Vugrin, K. W.; Swiler, L. P.; Roberts, R. M.

    2007-12-01

    This presentation focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a Log-Likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Each of the three methods produces similar and reasonable confidence regions for the case study using synthetic data. The linear approximation method grossly overestimates the confidence region for the first groundwater parameter identification case study. The F-test and Log-Likelihood methods result in similar reasonable regions for this test case. For the second groundwater parameter identification case study, the linear approximation method produces a confidence region of reasonable size. In this test case, the F-test and Log-Likelihood methods generate disjoint confidence regions of reasonable size. The differing results, capabilities, and drawbacks of all three methods are discussed. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.

  19. A Modified Rodrigues Parameter-based Nonlinear Observer Design for Spacecraft Gyroscope Parameters Estimation

    NASA Astrophysics Data System (ADS)

    Yong, Kilyuk; Jo, Sujang; Bang, Hyochoong

    This paper presents a modified Rodrigues parameter (MRP)-based nonlinear observer design to estimate bias, scale factor and misalignment of gyroscope measurements. A Lyapunov stability analysis is carried out for the nonlinear observer. Simulation is performed and results are presented illustrating the performance of the proposed nonlinear observer under the condition of persistent excitation maneuver. In addition, a comparison between the nonlinear observer and alignment Kalman filter (AKF) is made to highlight favorable features of the nonlinear observer.

  20. How Good are our Source Parameter Estimates for Small Earthquakes?

    NASA Astrophysics Data System (ADS)

    Abercrombie, R. E.

    2002-12-01

    Measuring reliable and accurate source parameters for small earthquakes (M<3) is a long term goal for seismologists. Small earthquakes are important as they bridge the gap between laboratory measurements of stick-slip sliding and large damaging earthquakes. They also provide insights into the nucleation process of unstable slip. Unfortunately, uncertainties in such parameters as the stress drop and radiated energy of small earthquakes are as large as an order of magnitude. This is a consequence of the high frequency radiation (> 100 Hz) needed to resolve the source process. High frequency energy is severely attenuated and distorted along the ray path. The best records of small earthquakes are from deep (> 1km) boreholes and mines, where the waves are recorded before passing through the near-surface rocks. Abercrombie (1995) and Prejean & Ellsworth (2001) used such deep recordings to investigate source scaling and discovered that the radiated energy is a significantly smaller fraction of the total energy than for larger earthquakes. Richardson and Jordan (2002) obtained a similar result from seismograms recorded in deep mines. Ide and Beroza (2001) investigated the effect of limited recording bandwidth in such studies and found that there was evidence of selection bias. Recalculating the source parameters of earthquakes recorded in the Cajon Pass borehole, correcting for the limited bandwidth, does not remove the scale dependence. Ide et al. (2002) used empirical Green's function methods to improve source parameter estimates, and found that even deep borehole recording is not a guarantee of negligible site effects. Another problem is that the lack of multiple recordings of small earthquakes means that very simple source models have to be used to calculate source parameters. The rupture velocity must also be assumed. There are still significant differences (nearly a factor of 10 in stress drop) between the predictions of even the simple models commonly in use. 

  1. Genetic Parameter Estimation in Seedstock Swine Population for Growth Performances

    PubMed Central

    Choi, Jae Gwan; Cho, Chung Il; Choi, Im Soo; Lee, Seung Soo; Choi, Tae Jeong; Cho, Kwang Hyun; Park, Byoung Ho; Choy, Yun Ho

    2013-01-01

    The objective of this study was to estimate genetic parameters to be used for across-herd genetic evaluations of seed stock pigs at the GGP level. Performance data with pedigree information collected from swine breeder farms in Korea were provided by the Korea Animal Improvement Association (AIAK). Performance data were composed of final body weights on test days and ultrasound measures of back fat thickness (BF), rib eye area (EMA) and retail cut percentage (RCP). The breeds of swine tested were Landrace, Yorkshire and Duroc. Days to 90 kg body weight (DAYS90) were estimated with a linear function of age, and ADG was calculated from body weights on test days. Ultrasound measures were taken with A-mode ultrasound scanners by trained technicians. The numbers of performance records, after censoring outliers and keeping only records of pigs born from year 2000 onward, were 78,068 Duroc pigs, 101,821 Landrace pigs and 281,421 Yorkshire pigs. Models included contemporary groups defined by the same herd and the same season of birth of the same year, which were regarded as fixed along with the effect of sex for all traits, and body weight on test day as a linear covariate for the ultrasound measures. REML estimation was processed with the REMLF90 program. Heritability estimates were 0.40, 0.32, 0.21, 0.39 for DAYS90, ADG, BF, EMA, RCP, respectively, for the Duroc population. Respective heritability estimates for the Landrace population were 0.43, 0.41, 0.22, and 0.43, and for the Yorkshire population were 0.36, 0.38, 0.22, and 0.42. Genetic correlation coefficients of DAYS90 with BF, EMA, or RCP were estimated at 0.00 to 0.09, −0.15 to −0.25, and 0.22 to 0.28, respectively, for the three breed populations. The genetic correlation coefficient estimated between BF and EMA was −0.33 to −0.39. The genetic correlation coefficient estimated between BF and RCP was high and negative (−0.78 to −0.85), but the environmental correlation coefficient between these two traits was medium and negative (near −0.35), which describes

  2. Hopf algebras and topological recursion

    NASA Astrophysics Data System (ADS)

    Esteves, João N.

    2015-11-01

    We consider a model for topological recursion based on the Hopf algebra of planar binary trees defined by Loday and Ronco (1998 Adv. Math. 139 293-309). We show that by extending this Hopf algebra by identifying pairs of nearest-neighbor leaves, and thus producing graphs with loops, we obtain the full recursion formula discovered by Eynard and Orantin (2007 Commun. Number Theory Phys. 1 347-452).

  3. Parameter estimation in space systems using recurrent neural networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Atiya, Amir F.; Sunkel, John W.

    1991-01-01

    The identification of time-varying parameters encountered in space systems is addressed using artificial neural systems. A hybrid feedforward/feedback neural network, namely a recurrent multilayer perceptron, is used as the model structure in the nonlinear system identification. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard back-propagation learning algorithm is modified and used for both the off-line and on-line supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying parameters of nonlinear dynamic systems is investigated by estimating the mass properties of a representative large spacecraft. The changes in the spacecraft inertia are predicted using a trained neural network, during two configurations corresponding to the early and late stages of the spacecraft on-orbit assembly sequence. The proposed on-line mass properties estimation capability offers encouraging results, though further research is warranted for training and testing the predictive capabilities of these networks beyond nominal spacecraft operations.

  4. Periodic orbits of hybrid systems and parameter estimation via AD.

    SciTech Connect

    Guckenheimer, John.; Phipps, Eric Todd; Casey, Richard

    2004-07-01

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impedes the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential algebraic equations is discretized via multiple-shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process yielding an approximate periodic orbit. A metric is defined for computing the distance

  5. Estimation of genetic parameters for reproductive traits in alpacas.

    PubMed

    Cruz, A; Cervantes, I; Burgos, A; Morante, R; Gutiérrez, J P

    2015-12-01

    One of the main deficiencies affecting animal breeding programs in Peruvian alpacas is low reproductive performance, leading to a low number of animals available to select from and strongly decreasing the selection intensity. Some reproductive traits could be improved by artificial selection, but very little information about genetic parameters exists for these traits in this species. The aim of this study was to estimate genetic parameters for six reproductive traits in alpacas of both the Suri (SU) and Huacaya (HU) ecotypes, as well as their genetic relationships with fiber and morphological traits. A dataset from the Pacomarca experimental farm collected between 2000 and 2014 was used. The numbers of records for age at first service (AFS), age at first calving (AFC), copulation time (CT), pregnancy diagnosis (PD), gestation length (GL), and calving interval (CI) were, respectively, 1704, 854, 19,770, 5874, 4290 and 934. The pedigree consisted of 7742 animals. For the reproductive traits, the model of analysis included additive and residual random effects for all traits, and a permanent environmental effect for the CT, PD, GL and CI traits, with color and year of recording as fixed effects for all reproductive traits, plus age at mating and sex of calf for the GL trait. Estimated heritabilities, respectively for HU and SU, were 0.19 and 0.09 for AFS, 0.45 and 0.59 for AFC, 0.04 and 0.05 for CT, 0.07 and 0.05 for PD, 0.12 and 0.20 for GL, and 0.14 and 0.09 for CI. Genetic correlations between them ranged from -0.96 to 0.70. No important genetic correlations were found between reproductive traits and fiber or morphological traits in HU. However, some moderate favorable genetic correlations were found between reproductive traits and both fiber and morphological traits in SU. According to the estimated genetic correlations, some reproductive traits might be included as additional selection criteria in HU. PMID:26490188

  6. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. First, the available epidemiological data might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Second, long-term time series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  7. On-line estimation of concentration parameters in fermentation processes.

    PubMed

    Xiong, Zhi-hua; Huang, Guo-hong; Shao, Hui-he

    2005-06-01

    It has long been thought that bioprocesses, with their inherent measurement difficulties and complex dynamics, pose almost insurmountable problems for engineers. A novel software sensor is proposed to make more effective use of the measurements that are already available, enabling improvement in fermentation process control. The proposed method is based on mixtures of Gaussian processes (GP), with an expectation maximization (EM) algorithm employed for parameter estimation of the mixture of models. The mixture model can alleviate the computational complexity of GP and also accommodate changes of operating conditions in fermentation processes; that is, it can examine which types of process knowledge are most relevant for local models at specific operating points of the process and then combine them into a global one. Demonstrated by on-line estimation of yeast concentration in the fermentation industry as an example, it is shown that soft-sensor-based state estimation is a powerful technique for both enhancing automatic control performance of biological systems and implementing on-line monitoring and optimization.

  8. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
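    The learn-as-you-go control flow (serve an emulated value when a cached evaluation is trusted, otherwise fall back to the exact computation and grow the training set) can be sketched in one dimension. This toy nearest-neighbor emulator is only an illustration of the idea, not the Cosmo++ implementation; the fixed trust radius stands in for the paper's propagated emulation-error model.

```python
def make_learn_as_you_go(exact_fn, radius):
    """Return an approximate version of exact_fn that serves cached values
    when a previous evaluation lies within `radius`, and otherwise calls
    exact_fn and adds the result to the training cache."""
    cache = []  # (x, exact_fn(x)) pairs accumulated during sampling
    def approx(x):
        if cache:
            x0, y0 = min(cache, key=lambda p: abs(p[0] - x))
            if abs(x0 - x) <= radius:
                return y0          # trusted emulated value
        y = exact_fn(x)            # unreliable estimate: do the exact call
        cache.append((x, y))
        return y
    return approx
```

    In an MCMC setting, `exact_fn` would be the expensive likelihood and the cache would fill densely in the high-posterior region, which is where most proposals land.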

  9. A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.

    ERIC Educational Resources Information Center

    Shoemaker, David M.

    Described and listed herein, with concomitant sample input and output, is the Fortran IV program that estimates parameters, and standard errors of estimate for each parameter, through multiple matrix sampling. The specific program is an improved and expanded version of an earlier one. (Author/BJG)

  10. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with mesh adaptive direct search and real-coded genetic algorithms. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
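    The likelihood machinery described above can be illustrated in its simplest special case: for an AR(1) process observed without noise, the Kalman prediction-error decomposition reduces to one-step forecast errors plus a stationary term for the first observation, and AIC then scores candidate models. This is a sketch of the criterion evaluation only, not the MINLP search; the series and parameter grid are hypothetical.

```python
import math

def ar1_loglik(y, phi, sigma2):
    """Exact Gaussian log-likelihood of an AR(1) series via the
    prediction-error decomposition (the scalar special case of the
    Kalman filter recursions), with stationary initialization."""
    v0 = sigma2 / (1.0 - phi ** 2)            # stationary variance
    ll = -0.5 * (math.log(2 * math.pi * v0) + y[0] ** 2 / v0)
    for t in range(1, len(y)):
        e = y[t] - phi * y[t - 1]             # one-step prediction error
        ll += -0.5 * (math.log(2 * math.pi * sigma2) + e ** 2 / sigma2)
    return ll

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

# hypothetical series; pick phi on a coarse grid by maximum likelihood
y = [1.0, 0.5, 0.25, 0.125]
grid = [0.0, 0.25, 0.5, 0.75]
best_phi = max(grid, key=lambda p: ar1_loglik(y, p, 1.0))
```

    A full ARMA(p, q) treatment replaces the scalar recursion with the state-space Kalman filter and lets the integer orders p and q vary, which is exactly what makes the problem mixed-integer.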

  11. US-based Drug Cost Parameter Estimation for Economic Evaluations

    PubMed Central

    Levy, Joseph F; Meek, Patrick D; Rosenberg, Marjorie A

    2014-01-01

    Introduction: In the US, more than 10% of national health expenditures are for prescription drugs. Assessing drug costs in US economic evaluation studies is not consistent, as the true acquisition cost of a drug is not known by decision modelers. Current US practice focuses on identifying one reasonable drug cost and imposing some distributional assumption to assess uncertainty. Methods: We propose a set of Rules based on current pharmacy practice that account for the heterogeneity of drug product costs. The set of products derived from our Rules, and their associated costs, form an empirical distribution that can be used for more realistic sensitivity analyses, and create transparency in drug cost parameter computation. The Rules specify an algorithmic process to select clinically equivalent drug products that reduce pill burden, use an appropriate package size, and assume uniform weighting of substitutable products. Three diverse examples show derived empirical distributions and are compared with previously reported cost estimates. Results: The shapes of the empirical distributions among the three drugs differ dramatically, including multiple modes and different variation. Previously published estimates differed from the means of the empirical distributions. Published ranges for sensitivity analyses did not cover the ranges of the empirical distributions. In one example using lisinopril, the empirical mean cost of substitutable products was $444 (range $23–$953) as compared to a published estimate of $305 (range $51–$523). Conclusions: Our Rules create a simple and transparent approach to create cost estimates of drug products and assess their variability. The approach is easily modified to include a subset of, or different weighting for, substitutable products. The derived empirical distribution is easily incorporated into one-way or probabilistic sensitivity analyses. PMID:25532826
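    The empirical-distribution idea can be sketched as follows: substitutable products receive uniform weight, and a probabilistic sensitivity analysis draws cost inputs directly from that distribution rather than from an assumed parametric one. The product costs below are hypothetical, not the paper's data.

```python
import random
from statistics import mean

# hypothetical per-course costs of clinically substitutable products
PRODUCT_COSTS = [23.0, 118.0, 245.0, 402.0, 444.0, 610.0, 953.0]

def psa_draws(costs, n_draws, seed=0):
    """Sample cost inputs for a probabilistic sensitivity analysis from the
    empirical distribution, with uniform weight on each product."""
    rng = random.Random(seed)
    return [rng.choice(costs) for _ in range(n_draws)]

# the empirical mean replaces a single assumed point cost
point_estimate = mean(PRODUCT_COSTS)
```

    Alternative weightings (e.g. by market share) only require replacing `rng.choice` with a weighted draw over the same product list.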

  12. Excitations for Rapidly Estimating Flight-Control Parameters

    NASA Technical Reports Server (NTRS)

    Moes, Tim; Smith, Mark; Morelli, Gene

    2006-01-01

    A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. An IFCS, however, must be able to modify control laws in real time for an aircraft that has been damaged in flight (by combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the

  13. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, R.J.; Helsel, D.R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores.
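    A minimal sketch of the log-probability regression method: logarithms of the uncensored concentrations are regressed on their normal scores, and the fitted line provides the lognormal mean and standard deviation (and, by extrapolation, the zero-to-censoring-level tail). The Blom plotting-position formula used here is an assumption for illustration; the paper's exact choice may differ.

```python
import math
from statistics import NormalDist  # Python 3.8+

def log_probability_regression(uncensored, n_censored):
    """Estimate the mean and standard deviation of log concentration for a
    left-censored sample: regress logs of the uncensored observations on
    their normal scores, the uncensored values occupying the upper ranks."""
    n = len(uncensored) + n_censored
    ys = [math.log(x) for x in sorted(uncensored)]
    nd = NormalDist()
    # Blom plotting positions for the ranks of the uncensored observations
    zs = [nd.inv_cdf((r - 0.375) / (n + 0.25))
          for r in range(n_censored + 1, n + 1)]
    mz = sum(zs) / len(zs)
    my = sum(ys) / len(ys)
    # ordinary least squares of log(x) = mu + sigma * z
    sigma = (sum((z - mz) * (y - my) for z, y in zip(zs, ys))
             / sum((z - mz) ** 2 for z in zs))
    mu = my - sigma * mz
    return mu, sigma
```

    Summary statistics such as the mean or interquartile range then follow from the fitted lognormal parameters rather than from the censored sample directly.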

  14. Virtual parameter-estimation experiments in Bioprocess-Engineering education.

    PubMed

    Sessink, Olivier D T; Beeftink, Hendrik H; Hartog, Rob J M; Tramper, Johannes

    2006-05-01

    Cell growth kinetics and reactor concepts constitute essential knowledge for Bioprocess-Engineering students. Traditional learning of these concepts is supported by lectures, tutorials, and practicals: ICT offers opportunities for improvement. A virtual-experiment environment was developed that supports both model-related and experimenting-related learning objectives. Students have to design experiments to estimate model parameters: they choose initial conditions and 'measure' output variables. The results contain experimental error, which is an important constraint for experimental design. Students learn from these results and use the new knowledge to re-design their experiment. Within a couple of hours, students design and run many experiments that would take weeks in reality. Usage was evaluated in two courses with questionnaires and in the final exam. The faculties involved in the two courses are convinced that the experiment environment supports essential learning objectives well. PMID:16411072

  15. Parameter estimation for the distribution of single cell lag times.

    PubMed

    Baranyi, József; George, Susan M; Kutalik, Zoltán

    2009-07-01

    In Quantitative Microbial Risk Assessment, it is vital to understand how lag times of individual cells are distributed over a bacterial population. Such identified distributions can be used to predict the time by which, in a growth-supporting environment, a few pathogenic cells can multiply to a poisoning concentration level. We model the lag time of a single cell, inoculated into a new environment, by the delay of the growth function characterizing the generated subpopulation. We introduce an easy-to-implement procedure, based on the method of moments, to estimate the parameters of the distribution of single cell lag times. The advantage of the method is especially apparent for cases where the initial number of cells is small and random, and the culture is detectable only in the exponential growth phase.
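    The method of moments matches sample moments to the moments of an assumed distribution. As a generic illustration only (the gamma distribution here is a stand-in; the paper derives moment equations for its specific single-cell lag-time model), the first two moments give closed-form estimators.

```python
def mom_gamma(samples):
    """Method-of-moments estimators for a gamma(shape k, scale theta)
    distribution: mean = k*theta and variance = k*theta**2 imply
    k = mean**2 / var and theta = var / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean ** 2 / var, var / mean
```

    The appeal for small, random inocula is the same as in the paper: moment estimators need only sample averages, not a full likelihood over the growth curves.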

  16. Enhancing parameter precision of optimal quantum estimation by quantum screening

    NASA Astrophysics Data System (ADS)

    Jiang, Huang; You-Neng, Guo; Qin, Xie

    2016-02-01

    We propose a scheme of quantum screening to enhance the parameter-estimation precision in open quantum systems by means of the dynamics of quantum Fisher information. The principle of quantum screening is based on an auxiliary system that inhibits the decoherence processes and erases the excited state to the ground state. Compared with the case without quantum screening, the results show that the dynamics of quantum Fisher information with quantum screening maintains a larger value during the evolution. Project supported by the National Natural Science Foundation of China (Grant No. 11374096), the Natural Science Foundation of Guangdong Province, China (Grant No. 2015A030310354), and the Project of Enhancing School with Innovation of Guangdong Ocean University (Grant Nos. GDOU2014050251 and GDOU2014050252).

  17. Multiphase flow parameter estimation based on laser scattering

    NASA Astrophysics Data System (ADS)

    Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.

    2015-07-01

    The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flow is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rate of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied to safety applications due to its quick response time.

  18. CosmoHammer: Cosmological parameter estimation with the MCMC Hammer

    NASA Astrophysics Data System (ADS)

    Akeret, Joël; Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Csillaghy, André

    2013-08-01

    We study the benefits and limits of parallelised Markov chain Monte Carlo (MCMC) sampling in cosmology. MCMC methods are widely used for the estimation of cosmological parameters from a given set of observations and are typically based on the Metropolis-Hastings algorithm. Some of the required calculations can however be computationally intensive, meaning that a single long chain can take several hours or days to calculate. In practice, this can be limiting, since the MCMC process needs to be performed many times to test the impact of possible systematics and to understand the robustness of the measurements being made. To achieve greater speed through parallelisation, MCMC algorithms need to have short autocorrelation times and minimal overheads caused by tuning and burn-in. The resulting scalability is hence influenced by two factors, the MCMC overheads and the parallelisation costs. In order to efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure, we developed a Python framework called CosmoHammer which embeds emcee, an implementation by Foreman-Mackey et al. (2012) of the affine invariant ensemble sampler by Goodman and Weare (2010). We test the performance of CosmoHammer for cosmological parameter estimation from cosmic microwave background data. While Metropolis-Hastings is dominated by overheads, CosmoHammer is able to accelerate the sampling process from a wall time of 30 h on a dual core notebook to 16 min by scaling out to 2048 cores. Such short wall times for complex datasets open possibilities for extensive model testing and control of systematics.
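The core of the affine-invariant ensemble sampler that emcee implements, the Goodman and Weare (2010) stretch move, is compact enough to sketch in pure Python. This is an illustrative serial re-implementation, not CosmoHammer's parallelised code; the initialisation and parameter values are assumptions.

```python
import math
import random

def stretch_move_sample(log_prob, nwalkers, ndim, nsteps, a=2.0, seed=0):
    """Serial sketch of the Goodman & Weare affine-invariant ensemble sampler.
    log_prob maps a list of ndim parameters to a log-probability (up to a constant)."""
    rng = random.Random(seed)
    walkers = [[rng.gauss(0.0, 1.0) for _ in range(ndim)] for _ in range(nwalkers)]
    logp = [log_prob(w) for w in walkers]
    chain = []
    for _ in range(nsteps):
        for k in range(nwalkers):
            # pick a complementary walker j != k
            j = rng.randrange(nwalkers - 1)
            if j >= k:
                j += 1
            # draw the stretch factor z with density g(z) ~ 1/sqrt(z) on [1/a, a]
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
            proposal = [walkers[j][d] + z * (walkers[k][d] - walkers[j][d])
                        for d in range(ndim)]
            lp = log_prob(proposal)
            # accept with probability min(1, z^(ndim-1) * exp(lp - logp[k]))
            if math.log(rng.random() + 1e-300) < (ndim - 1) * math.log(z) + lp - logp[k]:
                walkers[k], logp[k] = proposal, lp
        chain.extend([list(w) for w in walkers])
    return chain
```

Because each proposal depends only on the current positions of two walkers, the move parallelises naturally when the ensemble is split into two half-ensembles, which is what makes the scaling to thousands of cores described above possible.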

  19. Expectation Maximisation based Kalman Filter parameter estimation of GRACE data

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Marcel; Holschneider, Matthias; Lorenz, Christof

    2015-04-01

    GRACE gravity field solutions have proven to be a great device to measure Earth's water storage variations. Nevertheless, if one tries to project the available satellite space-time structure of the time-varying fields onto a global time-varying field, the well-known problem of aliasing appears, manifested in the stripes of the inverse solution. This phenomenon is largely enforced through the use of global spatial modeling functions like spherical harmonics. One method to approach this problem is to apply the Kalman filter technique. This procedure requires knowledge of stochastic models of observations and process dynamics. However, Earth's gravity field is constantly changing in such a complex manner that it is impossible to accurately determine the correct process dynamic. The Ornstein-Uhlenbeck process was applied as a viable process dynamic. This process contains free hyperparameters that need to be estimated by an Expectation-Maximization (EM) algorithm, allowing it to take into account an a priori space-time correlation pattern to improve the Kalman filter estimates. The method was applied to unfiltered GRACE Gaussian coefficients, using the intrinsic regularization abilities of the Kalman filter itself. The result was a regularized potential field without additional hydrological information or assumptions about the gravity field other than Kaula's law.
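A scalar toy version of the filtering step makes the role of the Ornstein-Uhlenbeck hyperparameters concrete. The real GRACE problem is a large multivariate field and the values below are purely illustrative; tau and sigma2 are exactly the quantities an EM step would re-estimate.

```python
import math

def ou_kalman_filter(obs, tau, sigma2, r, dt=1.0):
    """Scalar Kalman filter with an Ornstein-Uhlenbeck process model.
    tau: OU correlation time, sigma2: stationary process variance,
    r: observation noise variance."""
    phi = math.exp(-dt / tau)        # discrete-time OU transition factor
    q = sigma2 * (1.0 - phi * phi)   # process noise preserving the stationary variance
    x, p = 0.0, sigma2               # start from the stationary prior
    estimates = []
    for y in obs:
        # predict
        x = phi * x
        p = phi * phi * p + q
        # update
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

The OU prior pulls the state toward zero with memory tau, which is the "intrinsic regularization" the abstract refers to: with no informative observations the estimate relaxes to the prior rather than drifting.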

  20. Forage quantity estimation from MERIS using band depth parameters

    NASA Astrophysics Data System (ADS)

    Ullah, Saleem; Yali, Si; Schlerf, Martin

    Forage quantity is an important factor influencing the feeding pattern and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for estimation of green biomass using MERIS data. Green biomass was best predicted by NBDI (normalized band depth index) and yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47% of the measured mean), compared to a much lower accuracy obtained by the soil-adjusted vegetation index SAVI (444.6 g/m2, 154% of the mean) and by other vegetation indices. This study will contribute to mapping and monitoring foliar biomass over the year at regional scale, which in turn can aid the understanding of bird migration patterns. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters

  1. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    SciTech Connect

    Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
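The parameters the study compiles (Vmax at a reference temperature and pHopt, Km, Ea, pHopt, pHsen) can be assembled into a single rate function. The Michaelis-Menten and Arrhenius terms are standard; the exact functional form of the pH term below (a Gaussian in pH, one plausible "exponential-quadratic" shape) is our assumption, not the paper's fitted function.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def enzyme_rate(s, vmax_ref, km, ea, temp_c, ph, ph_opt, ph_sen, temp_ref_c=20.0):
    """Michaelis-Menten rate with Arrhenius temperature scaling and an
    assumed exponential-quadratic pH response.
    s: substrate concentration; vmax_ref: Vmax at temp_ref_c and ph_opt."""
    t, t_ref = temp_c + 273.15, temp_ref_c + 273.15
    arrhenius = math.exp(-ea / R * (1.0 / t - 1.0 / t_ref))  # = 1 at t_ref
    ph_factor = math.exp(-((ph - ph_opt) / ph_sen) ** 2)     # = 1 at ph_opt
    return vmax_ref * arrhenius * ph_factor * s / (km + s)
```

At s = Km, the reference temperature, and the pH optimum, the rate is Vmax/2 by construction, which is a convenient sanity check on any parameter set pulled from the compilation.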

  2. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  3. Parameter estimation for boundary value problems by integral equations of the second kind

    NASA Technical Reports Server (NTRS)

    Kojima, Fumio

    1988-01-01

    This paper is concerned with the parameter estimation for boundary integral equations of the second kind. The parameter estimation technique through use of the spline collocation method is proposed. Based on the compactness assumption imposed on the parameter space, the convergence analysis for the numerical method of parameter estimation is discussed. The results obtained here are applied to a boundary parameter estimation for 2-D elliptic systems.

  4. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Astrophysics Data System (ADS)

    Godines, Cody R.; Manteufel, Randall D.

    2002-12-01

    , and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of the each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.

  5. Probabilistic Analysis and Density Parameter Estimation Within Nessus

    NASA Technical Reports Server (NTRS)

    Godines, Cody R.; Manteufel, Randall D.; Chamis, Christos C. (Technical Monitor)

    2002-01-01

    , and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of the each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.

  6. Multiangle dynamic light scattering analysis using an improved recursion algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Li, Wei; Wang, Wanyan; Zeng, Xianjiang; Chen, Junyao; Du, Peng; Yang, Kecheng

    2015-10-01

    Multiangle dynamic light scattering (MDLS) compensates for the low information content of a single-angle dynamic light scattering (DLS) measurement by combining the light intensity autocorrelation functions from a number of measurement angles. Reliable estimation of the particle size distribution (PSD) from MDLS measurements requires accurate determination of the weighting coefficients and an appropriate inversion method. We propose the Recursion Nonnegative Phillips-Twomey (RNNPT) algorithm, which is insensitive to noise in the correlation function data, for PSD reconstruction from MDLS measurements. The procedure includes two main steps: 1) calculation of the weighting coefficients by the recursion method, and 2) PSD estimation through the RNNPT algorithm. Suitable regularization parameters for the algorithm were obtained using the MR-L-curve, since the overall computational cost of this method is considerably lower than that of the L-curve for large problems. Furthermore, the convergence behavior of the MR-L-curve method is generally superior to that of the L-curve method, and its error decreases monotonically. The method was first evaluated on simulated unimodal and multimodal lognormal PSDs; for comparison, reconstruction results obtained with a classical regularization method were included. Then, to further study the stability and sensitivity of the proposed method, all examples were analyzed using correlation function data with different levels of noise. The simulation results showed that the RNNPT method yields more accurate PSD determinations from MDLS than the classical regularization method for both unimodal and multimodal PSDs.
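The regularized, nonnegativity-constrained inversion at the heart of Phillips-Twomey methods can be illustrated with a toy projected-gradient solver. The real RNNPT recursion and the MR-L-curve selection of the regularization parameter are more sophisticated; the second-difference smoothing operator, step size, and lam below are illustrative assumptions.

```python
def phillips_twomey_nonneg(A, b, lam, step=1e-3, iters=5000):
    """Toy nonnegative Phillips-Twomey inversion: minimize
    ||A x - b||^2 + lam * ||D x||^2 subject to x >= 0, where D is the
    second-difference operator, via projected gradient descent.
    A is a list of rows; b a list of measurements."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient of the data term: 2 A^T r
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # add the gradient of the smoothness term, 2 lam D^T D x
        for i in range(n - 2):
            d = x[i] - 2.0 * x[i + 1] + x[i + 2]
            g[i] += 2.0 * lam * d
            g[i + 1] -= 4.0 * lam * d
            g[i + 2] += 2.0 * lam * d
        # gradient step with projection onto the nonnegative orthant
        x = [max(0.0, xi - step * gi) for xi, gi in zip(x, g)]
    return x
```

The smoothness penalty is what suppresses the noise amplification of the unregularized inversion; choosing lam is exactly the job the MR-L-curve performs in the paper.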

  7. Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation

    ERIC Educational Resources Information Center

    Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting

    2011-01-01

    Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…

  8. Recursion relations for conformal blocks

    NASA Astrophysics Data System (ADS)

    Penedones, João; Trevisani, Emilio; Yamazaki, Masahito

    2016-09-01

    In the context of conformal field theories in general space-time dimension, we find all the possible singularities of the conformal blocks as functions of the scaling dimension Δ of the exchanged operator. In particular, we argue, using representation theory of parabolic Verma modules, that in odd spacetime dimension the singularities are only simple poles. We discuss how to use this information to write recursion relations that determine the conformal blocks. We first recover the recursion relation introduced in [1] for conformal blocks of external scalar operators. We then generalize this recursion relation for the conformal blocks associated to the four point function of three scalar and one vector operator. Finally we specialize to the case in which the vector operator is a conserved current.

  9. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  10. Clinical refinement of the automatic lung parameter estimator (ALPE).

    PubMed

    Thomsen, Lars P; Karbing, Dan S; Smith, Bram W; Murley, David; Weinreich, Ulla M; Kjærgaard, Søren; Toft, Egon; Thorgaard, Per; Andreassen, Steen; Rees, Stephen E

    2013-06-01

    The automatic lung parameter estimator (ALPE) method was developed in 2002 for bedside estimation of pulmonary gas exchange using step changes in inspired oxygen fraction (FIO₂). Since then a number of studies have been conducted indicating the potential for clinical application and necessitating systems evolution to match clinical application. This paper describes and evaluates the evolution of the ALPE method from a research implementation (ALPE1) to two commercial implementations (ALPE2 and ALPE3). A need for dedicated implementations of the ALPE method was identified: one for spontaneously breathing (non-mechanically ventilated) patients (ALPE2) and one for mechanically ventilated patients (ALPE3). For these two implementations, design issues relating to usability and automation are described including the mixing of gasses to achieve FIO₂ levels, and the automatic selection of FIO₂. For ALPE2, these improvements are evaluated against patients studied using the system. The major result is the evolution of the ALPE method into two dedicated implementations, namely ALPE2 and ALPE3. For ALPE2, the usability and automation of FIO₂ selection has been evaluated in spontaneously breathing patients showing that variability of gas delivery is 0.3 % (standard deviation) in 1,332 breaths from 20 patients. Also for ALPE2, the automated FIO₂ selection method was successfully applied in 287 patient cases, taking 7.2 ± 2.4 min and was shown to be safe with only one patient having SpO₂ < 86 % when the clinician disabled the alarms. The ALPE method has evolved into two practical, usable systems targeted at clinical application, namely ALPE2 for spontaneously breathing patients and ALPE3 for mechanically ventilated patients. These systems may promote the exploration of the use of more detailed descriptions of pulmonary gas exchange in clinical practice.

  11. Use of Dual-wavelength Radar for Snow Parameter Estimates

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    2005-01-01

    Use of dual-wavelength radar, with properly chosen wavelengths, will significantly lessen the ambiguities in the retrieval of microphysical properties of hydrometeors. In this paper, a dual-wavelength algorithm is described to estimate the characteristic parameters of the snow size distributions. An analysis of the computational results, made at X and Ka bands (T-39 airborne radar) and at S and X bands (CP-2 ground-based radar), indicates that valid estimates of the median volume diameter of snow particles, D(sub 0), should be possible if one of the two wavelengths of the radar operates in the non-Rayleigh scattering region. However, the accuracy may be affected to some extent if the shape factors of the Gamma function used for describing the particle distribution are chosen far from the true values or if cloud water attenuation is significant. To examine the validity and accuracy of the dual-wavelength radar algorithms, the algorithms are applied to the data taken from the Convective and Precipitation-Electrification Experiment (CaPE) in 1991, in which the dual-wavelength airborne radar was coordinated with in situ aircraft particle observations and ground-based radar measurements. Having carefully co-registered the data obtained from the different platforms, the airborne radar-derived size distributions are then compared with the in-situ measurements and ground-based radar. Good agreement is found for these comparisons despite the uncertainties resulting from mismatches of the sample volumes among the different sensors as well as spatial and temporal offsets.

  12. Cosmological parameter estimation with large scale structure observations

    SciTech Connect

    Dio, Enea Di; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, C_ℓ(z₁, z₂), calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard P(k) analysis with the new C_ℓ(z₁, z₂) method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the P(k) analysis. For spectroscopic redshifts, however, the large number of redshift bins which would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution, and we study the monopole, C_0(z₁, z₂).

  13. Anaerobic biodegradability of fish remains: experimental investigation and parameter estimation.

    PubMed

    Donoso-Bravo, Andres; Bindels, Francoise; Gerin, Patrick A; Vande Wouwer, Alain

    2015-01-01

    The generation of organic waste associated with aquaculture fish processing has increased significantly in recent decades. The objective of this study is to evaluate the anaerobic biodegradability of several fish processing fractions, as well as water treatment sludge, for tilapia and sturgeon species cultured in recirculated aquaculture systems. After substrate characterization, the ultimate biodegradability and the hydrolytic rate were estimated by fitting a first-order kinetic model with the biogas production profiles. In general, the first-order model was able to reproduce the biogas profiles properly with a high correlation coefficient. In the case of tilapia, the skin/fin, viscera, head and flesh presented a high level of biodegradability, above 310 mLCH₄gCOD⁻¹, whereas the head and bones showed a low hydrolytic rate. For sturgeon, the results for all fractions were quite similar in terms of both parameters, although viscera presented the lowest values. Both the substrate characterization and the kinetic analysis of the anaerobic degradation may be used as design criteria for implementing anaerobic digestion in a recirculating aquaculture system.
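Fitting the first-order kinetic model B(t) = B_inf (1 - e^{-kt}) to a cumulative biogas profile is a two-parameter least-squares problem; for a fixed hydrolytic rate k the ultimate biodegradability B_inf has a closed-form solution, so a simple grid scan over k suffices. This is an assumption-level stand-in for the paper's fitting procedure, with an illustrative default grid.

```python
import math

def fit_first_order(times, biogas, k_grid=None):
    """Least-squares fit of B(t) = B_inf * (1 - exp(-k t)) to cumulative
    biogas data.  For each trial k, the optimal B_inf is the closed-form
    linear least-squares coefficient; k is scanned on a grid."""
    if k_grid is None:
        k_grid = [i / 1000.0 for i in range(1, 2001)]  # 0.001 .. 2.0 per day
    best = None
    for k in k_grid:
        x = [1.0 - math.exp(-k * t) for t in times]
        denom = sum(xi * xi for xi in x)
        if denom == 0.0:
            continue
        b_inf = sum(xi * yi for xi, yi in zip(x, biogas)) / denom
        sse = sum((yi - b_inf * xi) ** 2 for xi, yi in zip(x, biogas))
        if best is None or sse < best[2]:
            best = (k, b_inf, sse)
    return best[0], best[1]
```

On noisy data the grid scan is robust because the model is linear in B_inf, leaving only the well-behaved one-dimensional search over k.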

  14. Bayesian analysis of inflation: Parameter estimation for single field models

    SciTech Connect

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-02-15

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φ^n with n = 2/3, 1, 2, and 4, natural inflation, and 'hilltop' inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the 'primordial dark ages' between TeV and grand unified theory scale energies.

  15. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K₁ correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, corrections among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  16. Estimation of uranium migration parameters in sandstone aquifers.

    PubMed

    Malov, A I

    2016-03-01

    The chemical composition and isotopes of carbon and uranium were investigated in groundwater samples that were collected from 16 wells and 2 sources in the Northern Dvina Basin, Northwest Russia. Across the dataset, the temperatures in the groundwater ranged from 3.6 to 6.9 °C, the pH ranged from 7.6 to 9.0, the Eh ranged from -137 to +128 mV, the total dissolved solids (TDS) ranged from 209 to 22,000 mg L(-1), and the dissolved oxygen (DO) ranged from 0 to 9.9 ppm. The (14)C activity ranged from 0 to 69.96 ± 0.69 percent modern carbon (pmC). The uranium content in the groundwater ranged from 0.006 to 16 ppb, and the (234)U:(238)U activity ratio ranged from 1.35 ± 0.21 to 8.61 ± 1.35. The uranium concentration and (234)U:(238)U activity ratio increased from the recharge area to the redox barrier; behind the barrier, the uranium content is minimal. The results were systematized by creating a conceptual model of the Northern Dvina Basin's hydrogeological system. The use of uranium isotope dating in conjunction with radiocarbon dating allowed the determination of important water-rock interaction parameters, such as the dissolution rate:recoil loss factor ratio Rd:p (a(-1)) and the uranium retardation factor:recoil loss factor ratio R:p in the aquifer. The (14)C age of the water was estimated to be between modern and >35,000 years. The (234)U-(238)U age of the water was estimated to be between 260 and 582,000 years. The Rd:p ratio decreases with increasing groundwater residence time in the aquifer from n × 10(-5) to n × 10(-7) a(-1). This finding is observed because the TDS increases in that direction from 0.2 to 9 g L(-1), and accordingly, the mineral saturation indices increase. Relatively high values of R:p (200-1000) characterize aquifers in sandy-clayey sediments from the Late Pleistocene and the deepest parts of the Vendian strata. In samples from the sandstones of the upper part of the Vendian strata, the R:p value is ∼ 24, i.e., sorption processes are

  18. How Learning Logic Programming Affects Recursion Comprehension

    ERIC Educational Resources Information Center

    Haberman, Bruria

    2004-01-01

    Recursion is a central concept in computer science, yet it is difficult for beginners to comprehend. Israeli high-school students learn recursion in the framework of a special modular program in computer science (Gal-Ezer & Harel, 1999). Some of them are introduced to the concept of recursion in two different paradigms: the procedural programming…

  19. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
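The variational fitting the abstract describes can be sketched at toy scale. The one-pool model, its rate constant, and the derivative-free Nelder-Mead minimizer below are illustrative assumptions, not DALEC or its adjoint-based solver:

```python
import numpy as np
from scipy.optimize import minimize

def forward(k, c0=100.0, inp=2.0, dt=1.0, steps=120):
    """Forward-Euler run of a one-pool carbon model: dC/dt = inp - k*C."""
    c = np.empty(steps)
    c[0] = c0
    for t in range(1, steps):
        c[t] = c[t - 1] + dt * (inp - k * c[t - 1])
    return c

def fit_turnover(obs, k0=0.05):
    """Estimate the turnover rate k by minimizing the squared model-data misfit
    (a stand-in for the adjoint-based variational cost minimization)."""
    cost = lambda k: np.sum((forward(k[0]) - obs) ** 2)
    res = minimize(cost, x0=[k0], method="Nelder-Mead")
    return res.x[0]
```

In a genuine 4DVAR setup the gradient of this cost would come from an adjoint model rather than a derivative-free search; the cost function itself has the same least-squares form.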

  20. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  1. Estimation of Nonlinear Elasticity Parameter of Tissues by Ultrasound

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Shiina, Tsuyoshi

    2002-05-01

    In this paper, a new parameter that quantifies the intensity of tissue nonlinear elasticity is introduced as the nonlinear elasticity parameter. This parameter is defined based on the empirical information that the nonlinear elastic behavior of soft tissues exhibits an exponential character. To visualize the quantitative nonlinear elasticity parameter, an ultrasonic imaging procedure involving the three-dimensional finite element method (3-D FEM) is presented. Experimental investigations that visualize the nonlinear elasticity parameter distribution of a chicken gizzard and a pig kidney embedded in a gelatin-based phantom were performed. The values extracted by ultrasound and 3-D FEM were compared with those measured by the direct mechanical compression test. Experimental results revealed that the nonlinear elasticity parameter values extracted by ultrasound and 3-D FEM exhibited good agreement with those measured by the mechanical compression test, and that the intensity of tissue nonlinear elasticity could be visualized quantitatively by the defined nonlinear elasticity parameter.

  2. A systematic review of lumped-parameter equivalent circuit models for real-time estimation of lithium-ion battery states

    NASA Astrophysics Data System (ADS)

    Nejad, S.; Gladwin, D. T.; Stone, D. A.

    2016-06-01

    This paper presents a systematic review of the most commonly used lumped-parameter equivalent circuit model structures in lithium-ion battery energy storage applications. These models include the Combined model, Rint model, two hysteresis models, Randles' model, a modified Randles' model and two resistor-capacitor (RC) network models with and without hysteresis included. Two variations of the lithium-ion cell chemistry, namely lithium iron phosphate (LiFePO4) and lithium nickel-manganese-cobalt oxide (LiNMC), are used for testing purposes. The model parameters and states are recursively estimated using a nonlinear system identification technique based on the dual extended Kalman filter (dual-EKF) algorithm. The dynamic performance of the model structures is verified using the results obtained from a self-designed pulsed-current test and an electric vehicle (EV) drive cycle based on the New European Drive Cycle (NEDC) profile over a range of operating temperatures. Analyses of the ten model structures are conducted with respect to state-of-charge (SOC) and state-of-power (SOP) estimation with erroneous initial conditions. Comparatively, both RC model structures provide the best dynamic performance, with outstanding SOC estimation accuracy. For those cell chemistries with large inherent hysteresis levels (e.g. LiFePO4), the RC model with only one time constant is combined with a dynamic hysteresis model to further enhance the performance of the SOC estimator.
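The recursive estimation step can be sketched in miniature. This is not the paper's dual-EKF; it is a scalar recursive-least-squares estimator for the ohmic resistance of the simplest (Rint) structure, v = OCV - R0*i, with an assumed known OCV and forgetting factor:

```python
import numpy as np

def rls_rint(current, voltage, ocv, lam=0.99):
    """Recursively estimate the ohmic resistance R0 of an Rint battery model
    (v = ocv - R0*i) from streaming current/voltage samples, using scalar
    recursive least squares with forgetting factor lam."""
    theta = 0.0          # running estimate of R0
    P = 1e3              # estimate covariance (scalar)
    for cur, v in zip(current, voltage):
        phi = -cur                              # regressor: v - ocv = phi * R0
        y = v - ocv
        k = P * phi / (lam + phi * P * phi)     # gain
        theta += k * (y - phi * theta)          # update with the innovation
        P = (P - k * phi * P) / lam             # covariance update
    return theta
```

A dual-EKF extends the same predict-correct pattern to estimate states (SOC) and parameters in two coupled filters.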

  3. Estimating winter wheat phenological parameters: Implications for crop modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  4. Analytical estimation of the parameters of autodyne lidar.

    PubMed

    Koganov, Gennady A; Shuker, Reuben; Gordov, Evgueni P

    2002-11-20

    An analytical approach for calculating the parameters of an autodyne lidar is presented. Approximate expressions connecting the absorption coefficient and the distance to the remote target with both the lidar parameters and the measured quantities are obtained. These expressions allow one to easily retrieve information about the atmosphere from the experimental data.

  5. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    SciTech Connect

    Meliopoulos, Sakis; Cokkinides, George; Fardanesh, Bruce; Hedrington, Clinton

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating-unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. In addition, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based

  6. Modeling relationships between calving traits: a comparison between standard and recursive mixed models

    PubMed Central

    2010-01-01

    Background The use of structural equation models for the analysis of recursive and simultaneous relationships between phenotypes has become more popular recently. The aim of this paper is to illustrate how these models can be applied in animal breeding to achieve parameterizations of different levels of complexity and, more specifically, to model phenotypic recursion between three calving traits: gestation length (GL), calving difficulty (CD) and stillbirth (SB). All recursive models considered here postulate heterogeneous recursive relationships between GL and liabilities to CD and SB, and between liability to CD and liability to SB, depending on categories of GL phenotype. Methods Four models were compared in terms of goodness of fit and predictive ability: 1) standard mixed model (SMM), a model with unstructured (co)variance matrices; 2) recursive mixed model 1 (RMM1), assuming that residual correlations are due to the recursive relationships between phenotypes; 3) RMM2, assuming that correlations between residuals and contemporary groups are due to recursive relationships between phenotypes; and 4) RMM3, postulating that the correlations between genetic effects, contemporary groups and residuals are due to recursive relationships between phenotypes. Results For all the RMM considered, the estimates of the structural coefficients were similar. Results revealed a nonlinear relationship between GL and the liabilities both to CD and to SB, and a linear relationship between the liabilities to CD and SB. Differences in terms of goodness of fit and predictive ability of the models considered were negligible, suggesting that RMM3 is plausible. Conclusions The applications examined in this study suggest the plausibility of a nonlinear recursive effect from GL onto CD and SB. 
Also, the fact that the most restrictive model RMM3, which assumes that the only cause of correlation is phenotypic recursion, performs as well as the others indicates that the phenotypic recursion

  7. Dynamic ventilation scintigraphy: a comparison of parameter estimation gating models.

    PubMed

    Hack, S N; Paoni, R A; Stratton, H; Valvano, M; Line, B R; Cooper, J A

    1988-11-01

    Two procedures for providing the synchronization of ventilation scintigraphic data to create dynamic displays of the pulmonary cycle are described and compared. These techniques are based on estimating instantaneous lung volume by pneumotachometry and by scintigraphy. Twenty-three patients were studied by these two techniques. The results indicate that the estimates of the times of end-inspiration and end-expiration are equivalent between the two techniques, but the morphologies of the two estimated time-volume waveforms are not. Ventilation cinescintigraphy based on time-division gating, but not on isovolume-division gating, can be equivalently generated from list-mode acquired data by employing either technique described.

  8. Recursive adjustment approach for the inversion of the Euler-Liouville Equation

    NASA Astrophysics Data System (ADS)

    Kirschner, S.; Seitz, F.

    2012-04-01

    Earth rotation is physically described by the Euler-Liouville Equation, which is based on the balance of angular momentum in the Earth system. The Earth orientation parameters (EOP), polar motion and length of day, have been observed with high precision by geodetic methods over many decades. A sensitivity analysis showed that some weakly determined Earth parameters have a great influence on the numerical forward modeling of the EOP. Therefore we concentrate on the inversion of the Euler-Liouville Equation in order to estimate and improve such parameters. A recursive adjustment approach makes the inversion of the Euler-Liouville Equation efficient. Here we concentrate on the estimation of parameters related to the period and damping of the free rotation of the Earth (Chandler oscillation). Before we apply the approach to the complex Earth system we demonstrate its concept on the simplified example of a spring-mass-damper system. The spring-mass-damper system is analogous to the damped Chandler oscillation and the results can be transferred directly. The differential equation describing the motion of the spring also has the same structure as the Euler-Liouville Equation. The spring constant and damping coefficient describing the anelastic behavior of the system correspond to the real and imaginary parts of the Earth's pole tide Love number. The simplified model is therefore ideal for studying various aspects, e.g. the influences of sampling rate, overall time frame, and the number of observations on the numerical results. It is shown that the recursive adjustment approach is an adequate method for the estimation of the spring parameters and thus for the parameters describing the Earth's rheology. The study is carried out in the frame of the German research unit on Earth Rotation and Global Dynamic Processes.

  9. Force Field Parameter Estimation of Functional Perfluoropolyether Lubricants

    SciTech Connect

    Smith, R; Chung, P S; Steckel, J A; Jhon, M S; Biegler, L T

    2011-01-01

    The head disk interface in a hard disk drive can be considered one of the hierarchical multiscale systems, which require the hybridization of multiscale modeling methods with a coarse-graining procedure. However, fundamental force field parameters are required to enable the coarse-graining procedure from atomistic/molecular-scale to mesoscale models. In this paper, we investigate beyond the molecular level and perform ab initio calculations to obtain the force field parameters. Intramolecular force field parameters for Zdol and Ztetraol were evaluated with truncated PFPE molecules to allow for feasible quantum calculations while still maintaining the characteristic chemical structure of the end groups. Using the harmonic approximation to the bond and angle potentials, the parameters were derived from the Hessian matrix, and the dihedral force constants were fit to the torsional energy profiles generated by a series of constrained molecular geometry optimizations.

  13. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series of all of the components of the dynamical equations to estimate the parameters in a single component at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
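The component-by-component idea can be illustrated without an EA at all. The sketch below substitutes plain least squares (an assumption for illustration, not the paper's evolutionary scheme): it estimates a from the y-equation alone, then b and c from the z-equation, using finite-difference derivatives of a simulated Rössler time series.

```python
import numpy as np

def rossler_series(a=0.2, b=0.2, c=5.7, dt=0.01, steps=6000):
    """Integrate the Roessler system with classical RK4."""
    def f(u):
        x, y, z = u
        return np.array([-y - z, x + a * y, b + z * (x - c)])
    u = np.array([1.0, 1.0, 1.0])
    out = np.empty((steps, 3))
    for i in range(steps):
        out[i] = u
        k1 = f(u); k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return out

def estimate_abc(series, dt=0.01):
    """Estimate (a, b, c) one component at a time via least squares."""
    x, y, z = series[:, 0], series[:, 1], series[:, 2]
    dy = (y[2:] - y[:-2]) / (2 * dt)       # central-difference derivatives
    dz = (z[2:] - z[:-2]) / (2 * dt)
    xm, zm = x[1:-1], z[1:-1]
    ym = y[1:-1]
    a = np.sum(ym * (dy - xm)) / np.sum(ym ** 2)        # y' = x + a*y
    A = np.column_stack([np.ones_like(zm), -zm])        # z' - x*z = b - c*z
    b, c = np.linalg.lstsq(A, dz - xm * zm, rcond=None)[0]
    return a, b, c
```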

  14. Stellar atmospheric parameter estimation using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium-resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters, and more efficient and accurate to extract atmospheric parameters.
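A minimal PCA-plus-GPR pipeline can be sketched with scikit-learn on synthetic "spectra" (the toy absorption-line model, kernel choice, and noise level are assumptions for illustration; this is not the SDSS/MILES processing):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wave = np.linspace(0.0, 1.0, 60)          # toy wavelength grid

def spectrum(t):
    """Toy 'spectrum': one absorption line whose depth tracks parameter t."""
    return 1.0 - (0.2 + 0.6 * t) * np.exp(-((wave - 0.5) ** 2) / 0.005)

t_train = np.linspace(0.0, 1.0, 40)
X_train = np.array([spectrum(t) for t in t_train])
X_train += rng.normal(0.0, 1e-3, X_train.shape)   # small observational noise

# PCA compresses the spectra before the GP regresses the parameter
model = make_pipeline(PCA(n_components=5),
                      GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-4,
                                               normalize_y=True))
model.fit(X_train, t_train)

t_test = np.array([0.15, 0.55, 0.85])
pred = model.predict(np.array([spectrum(t) for t in t_test]))
```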

  15. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecasted events, due to the model parameters being held fixed during the test. Moreover, the absence in the learning catalog of an event comparable in magnitude to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test time.

  16. Estimating Building Simulation Parameters via Bayesian Structure Learning

    SciTech Connect

    Edwards, Richard E; New, Joshua Ryan; Parker, Lynne Edwards

    2013-01-01

    Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have several major problems. The two main issues are 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real-world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating the simulation engine costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 Gigabytes of E+ simulation data; future extensions will use 200 Terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with ground-truth building parameters. Our results indicate that the model accurately represents parameter means, with some deviation from the means, but does not support inferring parameter values that lie on the distribution's tail.

  17. Automated methods for estimation of sperm flagellar bending parameters.

    PubMed

    Brokaw, C J

    1984-01-01

    Parameters to describe flagellar bending patterns can be obtained by a microcomputer procedure that uses a set of parameters to synthesize model bending patterns, compares the model bending patterns with digitized and filtered data from flagellar photographs, and uses the Simplex method to vary the parameters until a solution with minimum root mean square differences between the model and the data is found. Parameters for Chlamydomonas bending patterns have been obtained from comparison of shear angle curves for the model and the data. To avoid the determination of the orientation of the basal end of the flagellum, which is required for calculation of shear angles, parameters for sperm flagella have been obtained by comparison of curves of curvature as a function of length for the model and for the data. A constant curvature model, modified from that originally used for Chlamydomonas flagella, has been used for obtaining parameters from sperm flagella, but the methods can be applied using other models for synthesizing the model bending patterns.
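The synthesize-compare-adjust loop described above maps naturally onto scipy's Nelder-Mead (Simplex) optimizer. The sinusoidal curvature model and its parameter values below are stand-ins for illustration, not Brokaw's constant-curvature model:

```python
import numpy as np
from scipy.optimize import minimize

s = np.linspace(0.0, 1.0, 100)          # normalized arclength along the flagellum

def model_curvature(params):
    """Toy bending pattern: curvature as a sinusoidal wave along s."""
    k0, wavelength, phase = params
    return k0 * np.sin(2 * np.pi * s / wavelength + phase)

def fit_bending(data, x0=(1.2, 0.9, 0.2)):
    """Vary the parameters with the Simplex method until the RMS difference
    between the synthesized model and the digitized curvature data is minimal."""
    rms = lambda p: np.sqrt(np.mean((model_curvature(p) - data) ** 2))
    return minimize(rms, x0, method="Nelder-Mead",
                    options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
```

The starting simplex (`x0`) matters: as in the original procedure, a reasonable initial guess keeps the search out of distant local minima.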

  18. The Problem of Bias in Person Parameter Estimation in Adaptive Testing

    ERIC Educational Resources Information Center

    Doebler, Anna

    2012-01-01

    It is shown that deviations of estimated from true values of item difficulty parameters, caused for example by item calibration errors, the neglect of randomness of item difficulty parameters, testlet effects, or rule-based item generation, can lead to systematic bias in point estimation of person parameters in the context of adaptive testing.…

  19. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  20. Recursive bias estimation and L2 boosting

    SciTech Connect

    Hengartner, Nicolas W; Cornillon, Pierre - Andre; Matzner - Lober, Eric

    2009-01-01

    This paper presents a general iterative bias correction procedure for regression smoothers. This bias reduction scheme is shown to correspond operationally to the L2 Boosting algorithm and provides a new statistical interpretation for L2 Boosting. We analyze the behavior of the Boosting algorithm applied to common smoothers S, which we show depends on the spectrum of I - S. We present examples of common smoothers for which Boosting generates a divergent sequence. The statistical interpretation suggests combining the algorithm with an appropriate stopping rule for the iterative procedure. Finally we illustrate the practical finite-sample performance of the iterative smoother via a simulation study.
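The iteration itself is compact: m_{k+1} = m_k + S(y - m_k), so the residual after k steps is (I - S)^k y. The sketch below uses a row-normalized Gaussian-kernel smoother as an illustrative choice of S (its eigenvalues lie in (0, 1], so here the boosted training residual shrinks rather than diverges):

```python
import numpy as np

def boost_smoother(x, y, bandwidth=0.3, iters=10):
    """L2 boosting of a linear smoother S: m_{k+1} = m_k + S(y - m_k).
    Returns the training fit after each iteration."""
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * bandwidth ** 2))
    S = K / K.sum(axis=1, keepdims=True)   # row-normalized kernel smoother
    fit = S @ y
    fits = [fit]
    for _ in range(iters - 1):
        fit = fit + S @ (y - fit)          # bias-correct with the smoothed residual
        fits.append(fit)
    return fits
```

Without a stopping rule the iterates converge toward interpolating the noise, which is exactly why the paper pairs the procedure with one.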

  1. Stochastic Wireless Channel Modeling, Estimation and Identification from Measurements

    SciTech Connect

    Olama, Mohammed M; Djouadi, Seddik M; Li, Yanyan

    2008-07-01

    This paper is concerned with stochastic modeling of wireless fading channels, parameter estimation, and system identification from measurement data. Wireless channels are represented in stochastic state-space form, whose parameters and state variables are estimated using the expectation maximization algorithm and Kalman filtering, respectively. The latter are carried out solely from received signal measurements. These algorithms estimate the channel in-phase and quadrature components and identify the channel parameters recursively. The proposed algorithm is tested using measurement data, and the results are presented.
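The Kalman-filtering half of such a scheme can be sketched for a scalar channel. The random-walk gain model, pilot symbols, and noise variances below are assumptions for illustration, not the paper's EM-based identification:

```python
import numpy as np

def kalman_channel(pilots, rx, q=1e-4, r=1e-2):
    """Track a random-walk channel gain h[n] from rx[n] = h[n]*pilots[n] + noise,
    with process variance q and measurement variance r."""
    h_hat, P = 0.0, 1.0
    est = np.empty(len(rx))
    for n, (s, y) in enumerate(zip(pilots, rx)):
        P += q                              # predict: h follows a random walk
        K = P * s / (s * s * P + r)         # Kalman gain
        h_hat += K * (y - s * h_hat)        # correct with the innovation
        P *= (1.0 - K * s)
        est[n] = h_hat
    return est
```

In the full EM-plus-Kalman scheme, q and r would themselves be re-estimated from the innovations rather than assumed known.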

  2. Estimated genetic parameters for carcass traits of Brahman cattle.

    PubMed

    Riley, D G; Chase, C C; Hammond, A C; West, R L; Johnson, D D; Olson, T A; Coleman, S W

    2002-04-01

    Heritabilities and genetic and phenotypic correlations were estimated from feedlot and carcass data collected from Brahman calves (n = 504) in central Florida from 1996 to 2000. Data were analyzed using animal models in MTDFREML. Models included contemporary group (n = 44; groups of calves of the same sex, fed in the same pen, slaughtered on the same day) as a fixed effect and calf age in days at slaughter as a continuous variable. Estimated feedlot trait heritabilities were 0.64, 0.67, 0.47, and 0.26 for ADG, hip height at slaughter, slaughter weight, and shrink. The USDA yield grade estimated heritability was 0.71; heritabilities for component traits of yield grade, including hot carcass weight, adjusted 12th rib backfat thickness, loin muscle area, and percentage kidney, pelvic, and heart fat were 0.55, 0.63, 0.44, and 0.46, respectively. Heritability estimates for dressing percentage, marbling score, USDA quality grade, cutability, retail yield, and carcass hump height were 0.77, 0.44, 0.47, 0.71, 0.5, and 0.54, respectively. Estimated genetic correlations of adjusted 12th rib backfat thickness with ADG, slaughter weight, marbling score, percentage kidney, pelvic, and heart fat, and yield grade (0.49, 0.46, 0.56, 0.63, and 0.93, respectively) were generally larger than most literature estimates. Estimated genetic correlations of marbling score with ADG, percentage shrink, loin muscle area, percentage kidney, pelvic, and heart fat, USDA yield grade, cutability, retail yield, and carcass hump height were 0.28, 0.49, 0.44, 0.27, 0.45, -0.43, 0.27, and 0.43, respectively. Results indicate that sufficient genetic variation exists within the Brahman breed for design and implementation of effective selection programs for important carcass quality and yield traits.

  4. Telescoping strategies for improved parameter estimation of environmental simulation models

    NASA Astrophysics Data System (ADS)

    Matott, L. Shawn; Hymiak, Beth; Reslink, Camden; Baxter, Christine; Aziz, Shirmin

    2013-10-01

    The parameters of environmental simulation models are often inferred by minimizing differences between simulated output and observed data. Heuristic global search algorithms are a popular choice for performing minimization but many algorithms yield lackluster results when computational budgets are restricted, as is often required in practice. One way for improving performance is to limit the search domain by reducing upper and lower parameter bounds. While such range reduction is typically done prior to optimization, this study examined strategies for contracting parameter bounds during optimization. Numerical experiments evaluated a set of novel “telescoping” strategies that work in conjunction with a given optimizer to scale parameter bounds in accordance with the remaining computational budget. Various telescoping functions were considered, including a linear scaling of the bounds, and four nonlinear scaling functions that more aggressively reduce parameter bounds either early or late in the optimization. Several heuristic optimizers were integrated with the selected telescoping strategies and applied to numerous optimization test functions as well as calibration problems involving four environmental simulation models. The test suite ranged from simple 2-parameter surfaces to complex 100-parameter landscapes, facilitating robust comparisons of the selected optimizers across a variety of restrictive computational budgets. All telescoping strategies generally improved the performance of the selected optimizers, relative to baseline experiments that used no bounds reduction. Performance improvements varied but were as high as 38% for a real-coded genetic algorithm (RGA), 21% for shuffled complex evolution (SCE), 16% for simulated annealing (SA), 8% for particle swarm optimization (PSO), and 7% for dynamically dimensioned search (DDS). Inter-algorithm comparisons suggest that the SCE and DDS algorithms delivered the best overall performance. SCE appears well
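
    The telescoping idea can be sketched in a few lines. The following is a hypothetical illustration, not the paper's implementation: bounds are contracted around the best solution found so far, with the contraction schedule (linear, or aggressive early/late) driven by the fraction of the computational budget already spent.

```python
def telescope_bounds(lo, hi, best, frac_used, shape="linear", p=2.0):
    """Contract the search interval [lo, hi] toward the current best value.

    frac_used is the fraction of the computational budget already spent;
    shape selects how aggressively the bounds shrink over the run.
    """
    if shape == "linear":
        s = 1.0 - frac_used            # bounds shrink linearly with budget
    elif shape == "early":
        s = (1.0 - frac_used) ** p     # aggressive reduction early in the run
    else:  # "late"
        s = 1.0 - frac_used ** p       # aggressive reduction near the end
    half = 0.5 * s * (hi - lo)
    return max(lo, best - half), min(hi, best + half)
```

    An optimizer would call this between iterations, re-centering the shrinking box on its incumbent solution; at frac_used = 1.0 the box collapses onto the incumbent.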

  5. Empirical estimation of school siting parameter towards improving children's safety

    NASA Astrophysics Data System (ADS)

    Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.

    2014-02-01

    Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to ensure that a particular school is located in a safe environment. They are established by the Department of Town and Country Planning Malaysia (DTCP); the latest review was in June 2012. These parameters are crucially important, as they can affect safety, school reputation, and the perception of the school held by pupils and parents. There have been many studies reviewing school siting parameters, since these change along with an ever-changing world. In this study, the focus is the impact of school siting parameters on low-income people living in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods, on site and off site. The on-site method is to administer questionnaires; the off-site method is to use a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the questionnaire results. The output is a map of suitable safe distances from school to home. The results of this study will be useful to low-income families, as their children tend to walk to school rather than use transportation.

  6. Estimation of Stiffness Parameter on the Common Carotid Artery

    NASA Astrophysics Data System (ADS)

    Koya, Yoshiharu; Mizoshiri, Isao; Matsui, Kiyoaki; Nakamura, Takashi

    Arteriosclerosis is on the increase with aging and changes in our living environment. For that reason, diagnosis of the common carotid artery using echocardiography is performed as a precaution against cerebrovascular disease. Up to the present, several methods to measure the stiffness parameter of the carotid artery have been proposed; however, they analyze only a single point on the common carotid artery. In this paper, we propose a method of analysis extended over a wide area of the common carotid artery. To measure the stiffness parameter of the common carotid artery from an echocardiogram, two border curves marking the boundaries between the vessel wall and the blood must be detected. The method is composed of two steps: the first is the detection of the border curves, and the second is the calculation of the stiffness parameter using the diameter of the common carotid artery. Experimental results show the validity of the proposed method.
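
    For reference, a commonly used definition of the arterial stiffness parameter beta (not necessarily the exact formula of this paper) combines systolic and diastolic blood pressures with the corresponding vessel diameters:

```python
import math

def stiffness_beta(p_sys, p_dia, d_sys, d_dia):
    """Stiffness parameter beta = ln(Ps/Pd) / ((Ds - Dd) / Dd).

    Ps, Pd: systolic and diastolic pressure; Ds, Dd: vessel diameters
    at systole and diastole (any consistent units).
    """
    strain = (d_sys - d_dia) / d_dia   # fractional diameter change
    return math.log(p_sys / p_dia) / strain
```

    With the border curves detected along the vessel, such a quantity can be evaluated at every position to produce a stiffness profile over a wide area rather than at a single point.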

  7. Estimating genetic parameters in natural populations using the "animal model".

    PubMed Central

    Kruuk, Loeske E B

    2004-01-01

    Estimating the genetic basis of quantitative traits can be tricky for wild populations in natural environments, as environmental variation frequently obscures the underlying evolutionary patterns. I review the recent application of restricted maximum-likelihood "animal models" to multigenerational data from natural populations, and show how the estimation of variance components and prediction of breeding values using these methods offer a powerful means of tackling the potentially confounding effects of environmental variation, as well as generating a wealth of new areas of investigation. PMID:15306404

  8. Proper estimation of hydrological parameters from flood forecasting aspects

    NASA Astrophysics Data System (ADS)

    Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Yanami, Hitoshi; Anai, Hirokazu

    2016-04-01

    The hydrological parameters of a flood forecasting model are normally calibrated against entire hydrographs of past flood events by means of an error assessment function such as mean square error or relative error. However, specific parts of a hydrograph, i.e., the maximum discharge and the rising limb, are particularly important for practical flood forecasting, in the sense that underestimation may lead to a more dangerous situation due to delays in flood prevention and evacuation activities. We conducted numerical experiments to find the most appropriate parameter set for practical flood forecasting without underestimation, in order to develop an error assessment method for calibration suited to flood forecasting. A distributed hydrological model developed at the Public Works Research Institute (PWRI) in Japan was applied to fifteen past floods in the Gokase River basin (1,820 km2) in Japan. The model, with gridded two-layer tanks covering the entire target river basin, included hydrological parameters such as hydraulic conductivity, surface roughness, and runoff coefficient, which were set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and the Digital Soil Map of the World (DSMW), were employed as input data for elevation, land use, and soil type. Fourteen parameters were sampled evenly within their search ranges, yielding 10,001 parameter sets determined by Latin Hypercube Sampling. Although the best reproduced case showed a high Nash-Sutcliffe Efficiency of 0.9 for all flood events, the maximum discharge was underestimated in many flood cases. Therefore, two conditions, non-underestimation of the maximum discharge and of the rising limb of the hydrograph, were added in calibration as flood forecasting aptitudes. The cases with non-underestimation in the maximum discharge and rising limb of the hydrograph also showed a high Nash-Sutcliffe Efficiency of 0.9 except two flood cases
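
    The Nash-Sutcliffe Efficiency used in such calibrations, together with an extra non-underestimation condition of the kind the abstract describes, can be sketched as follows (a simplified illustration, not the study's exact screening procedure):

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def forecast_ok(obs, sim, nse_min=0.9):
    """Accept a parameter set only if the overall fit is good AND the
    peak discharge is not underestimated."""
    return nash_sutcliffe(obs, sim) >= nse_min and max(sim) >= max(obs)
```

    A simulation can score a high NSE yet still underestimate the flood peak, which is exactly the case the additional condition rejects.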

  9. Simultaneous parameter estimation and contaminant source characterization for coupled groundwater flow and contaminant transport modelling

    USGS Publications Warehouse

    Wagner, B.J.

    1992-01-01

    Parameter estimation and contaminant source characterization are key steps in the development of a coupled groundwater flow and contaminant transport simulation model. Here a methodology for simultaneous model parameter estimation and source characterization is presented. The parameter estimation/source characterization inverse model combines groundwater flow and contaminant transport simulation with non-linear maximum likelihood estimation to determine optimal estimates of the unknown model parameters and source characteristics, based on measurements of hydraulic head and contaminant concentration. First-order uncertainty analysis provides a means of assessing the reliability of the maximum likelihood estimates and evaluating the accuracy and reliability of the flow and transport model predictions. A series of hypothetical examples is presented to demonstrate the ability of the inverse model to solve the combined parameter estimation/source characterization inverse problem. Hydraulic conductivities, effective porosity, longitudinal and transverse dispersivities, boundary flux, and contaminant flux at the source are estimated for a two-dimensional groundwater system. In addition, characterization of the history of contaminant disposal or the location of the contaminant source is demonstrated. Finally, the problem of estimating the statistical parameters that describe the errors associated with the head and concentration data is addressed. A stage-wise estimation procedure is used to jointly estimate these statistical parameters along with the unknown model parameters and source characteristics.

  10. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

    EPA Science Inventory

    Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

  11. A Simplified Estimation of Latent State--Trait Parameters

    ERIC Educational Resources Information Center

    Hagemann, Dirk; Meyerhoff, David

    2008-01-01

    The latent state-trait (LST) theory is an extension of the classical test theory that allows one to decompose a test score into a true trait, a true state residual, and an error component. For practical applications, the variances of these latent variables may be estimated with standard methods of structural equation modeling (SEM). These…
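
    The LST decomposition can be illustrated with a small moment-based simulation (a sketch only; the abstract's approach uses SEM). With two parallel test halves observed on each of two occasions, the covariance of scores from different occasions estimates the trait variance, and the within-occasion covariance between halves estimates trait-plus-state variance:

```python
import random

random.seed(1)
n = 20000
var_t, var_s, var_e = 1.0, 0.5, 0.3   # trait, state-residual, error variances

persons = []
for _ in range(n):
    t = random.gauss(0.0, var_t ** 0.5)
    occasions = []
    for _ in range(2):                 # two measurement occasions
        s = random.gauss(0.0, var_s ** 0.5)
        # two parallel test halves per occasion: X = T + S + E
        occasions.append([t + s + random.gauss(0.0, var_e ** 0.5)
                          for _ in range(2)])
    persons.append(occasions)

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

# across occasions only the trait is shared
trait_hat = cov([p[0][0] for p in persons], [p[1][0] for p in persons])
# within an occasion the halves share trait + state residual
state_hat = cov([p[0][0] for p in persons], [p[0][1] for p in persons]) - trait_hat
```

    Subtracting the two covariances isolates the state-residual variance; what remains of the total variance is error.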

  12. Accuracy in parameter estimation in cluster randomized designs.

    PubMed

    Pornprasertmanit, Sunthud; Schneider, W Joel

    2014-09-01

    When planning to conduct a study, not only is it important to select a sample size that will ensure adequate statistical power, often it is important to select a sample size that results in accurate effect size estimates. In cluster-randomized designs (CRD), such planning presents special challenges. In CRD studies, instead of assigning individual objects to treatment conditions, objects are grouped in clusters, and these clusters are then assigned to different treatment conditions. Sample size in CRD studies is a function of 2 components: the number of clusters and the cluster size. Planning to conduct a CRD study is difficult because 2 distinct sample size combinations might be associated with similar costs but can result in dramatically different levels of statistical power and accuracy in effect size estimation. Thus, we present a method that assists researchers in finding the least expensive sample size combination that still results in adequate accuracy in effect size estimation. Alternatively, if researchers have a fixed budget, they can select the sample size combination that results in the most precise estimate of effect size. A free computer program that automates these procedures is available. PMID:25046449
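
    The sample-size trade-off described above can be sketched with the standard cluster design effect, 1 + (m - 1) * ICC. The cost model and the factor of 4 for a two-arm comparison below are simplifying assumptions, not the authors' procedure:

```python
def effect_var(k, m, icc, sigma2=1.0):
    """Approximate variance of a two-arm treatment-effect estimate in a CRD
    with k clusters of size m: design effect over total sample size."""
    return 4.0 * sigma2 * (1.0 + (m - 1) * icc) / (k * m)

def cheapest_design(target_var, icc, cost_cluster, cost_subject,
                    k_max=200, m_max=100):
    """Exhaustively search (k, m) combinations for the least expensive
    design whose effect-size variance meets the target."""
    best = None
    for k in range(2, k_max + 1):
        for m in range(1, m_max + 1):
            if effect_var(k, m, icc) <= target_var:
                cost = k * (cost_cluster + m * cost_subject)
                if best is None or cost < best[0]:
                    best = (cost, k, m)
    return best
```

    Two designs with similar cost can differ sharply in precision, which is why the search ranges over both the number of clusters and the cluster size rather than total sample size alone.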

  13. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  14. Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model

    ERIC Educational Resources Information Center

    Lamsal, Sunil

    2015-01-01

    Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include marginal maximum likelihood estimation, fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and Metropolis-Hastings Robbins-Monro estimation. With each…

  15. Experimental parameter estimation method for nonlinear viscoelastic composite material models: an application on arterial tissue.

    PubMed

    Sunbuloglu, Emin; Bozdag, Ergun; Toprak, Tuncer; Islak, Civan

    2013-01-01

    This study aims to establish an experimental parameter estimation method for a large-deformation, nonlinear viscoelastic, continuous fibre-reinforced composite material model. Specifically, arterial tissue was investigated during the experimental research and parameter estimation studies, due to the medical, scientific, and socio-economic importance of soft tissue research. Using analytical formulations for specimens under combined inflation/extension/torsion of thick-walled cylindrical tubes, in vitro experiments were carried out with fresh sheep arterial segments, and parameter estimation procedures were carried out on the experimental data. Model restrictions were pointed out using outcomes from the parameter estimation. Directions for further study are discussed.

  16. Discontinuous gradient algorithm for finite-time estimation of time-varying parameters

    NASA Astrophysics Data System (ADS)

    Rueda-Escobedo, Juan G.; Moreno, Jaime A.

    2016-09-01

    In this work, we present a discontinuous algorithm capable of estimating time-varying parameters in finite time. The measured output is assumed to be linear in the parameters, i.e. it corresponds to a linear parametric model. It is further assumed that the parameter variation is uniformly bounded, and that the regressor is sufficiently exciting as to make the parameters identifiable.
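
    A scalar, discrete-time sketch of such a sign-based (discontinuous) gradient update is shown below. The gain, the signals, and the Euler discretization are illustrative assumptions, not the authors' algorithm:

```python
import math

dt, gamma = 1e-3, 10.0    # step size and adaptation gain (illustrative)
theta_hat = 0.0
errors = []
for k in range(int(10.0 / dt)):
    t = k * dt
    theta = 2.0 + math.sin(t)        # bounded, time-varying true parameter
    phi = math.cos(3.0 * t) + 1.5    # regressor, kept positive and exciting
    y = phi * theta                  # measured output, linear in the parameter
    e = y - phi * theta_hat          # prediction error
    theta_hat += dt * gamma * phi * ((e > 0) - (e < 0))  # discontinuous update
    errors.append(abs(theta - theta_hat))
final_err = max(errors[-1000:])      # estimation error over the last second
```

    After a short transient the estimate chatters in a band around the moving parameter; the band width shrinks with the discretization step, which is the discrete-time counterpart of finite-time convergence.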

  17. Geo-Statistical Approach to Estimating Asteroid Exploration Parameters

    NASA Technical Reports Server (NTRS)

    Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles

    2011-01-01

    NASA's vision for space exploration calls for a human visit to a near earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper two approaches to formulation and scheduling of human exploration activities are compared given uncertain information regarding the asteroid prior to visit. In the first approach a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach or baseline plan where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.

  18. Estimating stellar parameters and interstellar extinction from evolutionary tracks

    NASA Astrophysics Data System (ADS)

    Sichevsky, S.; Malkov, O.

    Developing methods for analyzing and extracting information from modern sky surveys is a challenging task in astrophysical studies. We study the possibilities of parameterizing stars and the interstellar medium from multicolor photometry performed in three modern photometric surveys: GALEX, SDSS, and 2MASS. For this purpose, we have developed a method to estimate stellar radius from effective temperature and gravity with the help of evolutionary tracks and model stellar atmospheres. In accordance with the evolution rate at every point of the evolutionary track, the star formation rate, and the initial mass function, a weight is assigned to the resulting value of radius, which allows us to estimate the radius more accurately. The method is verified for the most populated areas of the Hertzsprung-Russell diagram, main-sequence stars and red giants, and it was found to be rather precise (for main-sequence stars, the average relative error of radius and its standard deviation are 0.03% and 3.87%, respectively).
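
    The core relation behind such a radius estimate is the surface-gravity definition g = G*M/R^2. The helper below is a sketch of only that inversion (the paper additionally weights candidate radii by evolution rate, star formation rate, and the initial mass function):

```python
import math

G = 6.674e-8        # gravitational constant, cgs
M_SUN = 1.989e33    # solar mass, g
R_SUN = 6.957e10    # solar radius, cm

def radius_from_logg(mass_msun, logg_cgs):
    """Stellar radius in solar units from g = G*M/R**2."""
    g = 10.0 ** logg_cgs                        # surface gravity, cm/s^2
    r_cm = math.sqrt(G * mass_msun * M_SUN / g)
    return r_cm / R_SUN
```

    For the Sun (1 solar mass, log g about 4.44 in cgs) this recovers a radius of about one solar radius.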

  19. Estimation of groundwater recharge parameters by time series analysis.

    USGS Publications Warehouse

    Naff, R.L.; Gutjahr, A.L.

    1983-01-01

    A model is proposed that relates water level fluctuations in a Dupuit aquifer to effective precipitation at the top of the unsaturated zone. Effective precipitation, defined herein as that portion of precipitation which becomes recharge, is related to precipitation measured in a nearby gage by a two-parameter function. A second-order stationary assumption is used to connect the spectra of effective precipitation and water level fluctuations.-from Authors

  20. Epicentral parameter estimation from intensity data of uncertain accuracy

    NASA Astrophysics Data System (ADS)

    Bilham, R. G.; Singh, B.; Szeliga, W. M.; Hough, S.

    2009-12-01

    In parts of the world where seismic productivity is sparse, we must rely on the historical record to provide estimates of future seismic hazard. Paleoseismic data can constrain location and magnitude for those earthquakes whose ruptures breach the surface, but for most earthquakes prior to 1900 almost all that is known of them comes from felt intensity data. Empirical approaches have hitherto been used to estimate both location and magnitude from these data, but isoseismal contouring methods, with or without computer-aided interpolation, are often biased by the user. Numerical methods that avoid isoseismal estimation appear to circumvent this bias, but where we have been able to test such methods with sparse data we have found that the location and magnitude can be critically dependent on a few key observations. The inclusion or rejection of these critical data again introduces a user bias. The most acceptable approach is to assign an uncertainty to each intensity observation, and to include this uncertainty in the calculations of probable location and magnitude. Examples of this approach are provided for the Allah Bund 1819 and Kashmir 1555, 1885, and 2005 earthquakes in India.

  1. Marker-based estimation of genetic parameters in genomics.

    PubMed

    Hu, Zhiqiu; Yang, Rong-Cai

    2014-01-01

    Linear mixed model (LMM) analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS) as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing 'big' genomic data sets. PMID:25025305

  2. Marker-Based Estimation of Genetic Parameters in Genomics

    PubMed Central

    Hu, Zhiqiu; Yang, Rong-Cai

    2014-01-01

    Linear mixed model (LMM) analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS) as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing ‘big’ genomic data sets. PMID:25025305

  3. Estimability of geodetic parameters from space VLBI observables

    NASA Technical Reports Server (NTRS)

    Adam, Jozsef

    1990-01-01

    The feasibility of space very long base interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for convariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for a comparison. The simplified expression of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from coordinate system definition and critical configuration are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulation data in the frame of some numerical experiments. Singularity due to datum defect is confirmed.

  4. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration.

    PubMed

    Chia, Yen Lin; Salzman, Peter; Plevritis, Sylvia K; Glynn, Peter W

    2004-12-01

    Simulation-based parameter estimation offers a powerful means of estimating parameters in complex stochastic models. We illustrate the application of these ideas in the setting of a natural history model for breast cancer. Our model assumes that the tumor growth process follows a geometric Brownian motion; parameters are estimated from the SEER registry. Our discussion focuses on the use of simulation for computing the maximum likelihood estimator for this class of models. The analysis shows that simulation provides a straightforward means of computing such estimators for models of substantial complexity.
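
    For a fully observed geometric Brownian motion, the log-increments are i.i.d. Gaussian, so drift and volatility have closed-form maximum likelihood estimators; simulation is needed in the paper because the natural history model is only partially observed through registry data. The self-contained sketch below simulates one path and recovers the parameters (all values are illustrative):

```python
import math
import random

random.seed(0)
mu, sigma, dt, n = 0.3, 0.2, 0.1, 5000   # true drift, volatility, time step

# simulate log X(t) for a geometric Brownian motion
logx = [0.0]
for _ in range(n):
    logx.append(logx[-1] + (mu - 0.5 * sigma ** 2) * dt
                + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))

# MLE from the Gaussian log-increments
incs = [b - a for a, b in zip(logx, logx[1:])]
m = sum(incs) / len(incs)
v = sum((x - m) ** 2 for x in incs) / len(incs)
sigma_hat = math.sqrt(v / dt)              # volatility estimate
mu_hat = m / dt + 0.5 * sigma_hat ** 2     # drift estimate
```

    When the likelihood is unavailable in closed form, the same estimators can be approximated by matching simulated to observed summaries, which is the spirit of the simulation-based approach.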

  5. A new genetic fuzzy system approach for parameter estimation of ARIMA model

    NASA Astrophysics Data System (ADS)

    Hassan, Saima; Jaafar, Jafreezal; Belhaouari, Brahim S.; Khosravi, Abbas

    2012-09-01

    The Autoregressive Integrated Moving Average (ARIMA) model is the most powerful and practical time series model for forecasting. Parameter estimation is the most crucial part of ARIMA modeling. Inaccurately estimated parameters lead to biased and unacceptable forecasting results. Parameter optimization can be adopted in order to increase demand forecasting accuracy. A combination of a fuzzy system and a genetic algorithm is proposed in this paper as a parameter estimation approach for ARIMA. The new approach optimizes the parameters by tuning the fuzzy membership functions with a genetic algorithm. The proposed hybrid model of ARIMA and the genetic fuzzy system will yield acceptable forecasting results.

  6. A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.

    ERIC Educational Resources Information Center

    Newman, Isadore; And Others

    A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…

  7. Estimation of distributional parameters for censored trace level water quality data. 2. Verification and applications

    USGS Publications Warehouse

    Helsel, D.R.; Gilliom, R.J.

    1986-01-01

    Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters.
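
    The log probability regression (LR) method found best can be sketched for the simplest case of a single detection limit: detected values are regressed on normal quantiles of their plotting positions, and the censored observations are imputed from the fitted line. This is a minimal illustration of regression on order statistics, not the full procedure evaluated in the paper:

```python
import math
from statistics import NormalDist, mean

def ros_mean(detected, n_censored):
    """Estimate the mean of a lognormal sample in which the n_censored
    smallest observations fell below a single detection limit."""
    nd = NormalDist()
    n = len(detected) + n_censored
    xs = sorted(detected)
    # Weibull plotting positions i/(n+1); censored values take the lowest ranks
    q = [nd.inv_cdf((n_censored + i + 1) / (n + 1)) for i in range(len(xs))]
    y = [math.log(v) for v in xs]
    # least-squares fit y = a + b*q on the detected observations
    qb, yb = mean(q), mean(y)
    b = (sum((qi - qb) * (yi - yb) for qi, yi in zip(q, y))
         / sum((qi - qb) ** 2 for qi in q))
    a = yb - b * qb
    # impute each censored observation from the fitted line
    imputed = [math.exp(a + b * nd.inv_cdf((i + 1) / (n + 1)))
               for i in range(n_censored)]
    return mean(imputed + xs)
```

    With no censoring this reduces to the ordinary sample mean; with censoring, the imputed values fall below the smallest detected value and pull the estimate down accordingly.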

  8. Parameter estimates in binary black hole collisions using neural networks

    NASA Astrophysics Data System (ADS)

    Carrillo, M.; Gracia-Linares, M.; González, J. A.; Guzmán, F. S.

    2016-10-01

    We present an algorithm based on artificial neural networks (ANNs) that estimates the mass ratio in a binary black hole collision from given gravitational wave (GW) strains. In this analysis, the ANN is trained with a sample of GW signals generated with numerical simulations. The effectiveness of the algorithm is evaluated with GWs, also generated with simulations, for mass ratios unknown to the ANN. We measure the accuracy of the algorithm in the interpolation and extrapolation regimes. We present results for noise-free signals and signals contaminated with Gaussian noise, in order to assess the dependence of the method's accuracy on the signal-to-noise ratio.

  9. Surface Parameter Estimation using Interferometric Coherences between Different Polarisations

    NASA Astrophysics Data System (ADS)

    Hajnsek, I.; Alvarez-Perez, J.-L.; Papathanassiou, K. P.; Moreira, A.; Cloude, S. R.

    2003-04-01

    In this work, the potential of using the interferometric coherence at different polarisations over surface scatterers to extract information about surface parameters is investigated. For the first time, the sensitivity of the individual coherence contributions to surface roughness and moisture conditions is discussed and simulated using a novel hybrid polarimetric surface scattering model. The model consists of two components: a coherent part obtained from the extended Bragg model and an incoherent part obtained from the integral equation model. Finally, experimental airborne SAR data are used to validate the modeled elements of the Pauli scattering vector.

  10. Determining the Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1995-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. In this work, an expression for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates with colored residuals is developed and validated. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle (HARV). As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, while conventional parameter accuracy measures were optimistic.

  11. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Haidong

    2016-10-01

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O (d +1 ) improvement for Hamiltonian parameter estimation on d -dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  12. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
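    Errors-in-variables fitting of the kind described here can be sketched with `scipy.odr`, which accounts for stated measurement uncertainties in both the dependent and "independent" variables. This is an illustration of the general approach, not the authors' iterative algorithm; all data and uncertainties below are synthetic.

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

# Fit y = b0*x + b1 when both x and y carry measurement error,
# with stated uncertainties sx and sy.
def f(beta, x):
    return beta[0] * x + beta[1]

rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 10.0, 50)
y_true = 2.0 * x_true + 1.0
x_obs = x_true + rng.normal(0, 0.2, x_true.size)   # error in the "independent" variable
y_obs = y_true + rng.normal(0, 0.5, y_true.size)

data = RealData(x_obs, y_obs, sx=0.2, sy=0.5)
out = ODR(data, Model(f), beta0=[1.0, 0.0]).run()
# out.beta holds the estimated slope/intercept, out.cov_beta their covariance
```

    Ordinary least squares on the same data would treat `x_obs` as exact, which is precisely the practice the abstract warns can yield inferior estimates.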

  13. Laboratory experiments for estimating chemical osmotic parameters of mudstones

    NASA Astrophysics Data System (ADS)

    Miyoshi, S.; Tokunaga, T.; Mogi, K.; Ito, K.; Takeda, M.

    2010-12-01

    Recent studies have shown quantitatively that mudstone can act as a semi-permeable membrane and can generate abnormally high pore pressure in sedimentary basins. The reflection coefficient is one of the important properties affecting the chemical osmotic behavior of mudstones, yet few quantitative studies of it have been carried out. We have developed a laboratory apparatus to observe chemical osmotic behavior, together with a numerical simulation technique to estimate the reflection coefficient and other related properties of mudstones. A core sample of siliceous mudstone, obtained from a drilled core at Horonobe, Japan, was set into the apparatus and saturated with a 0.1 mol/L sodium chloride solution. The upper reservoir was then replaced with a 0.05 mol/L sodium chloride solution, and temporal changes in both pressure and solution concentration were measured in the upper and lower reservoirs. Using the data obtained from the experiment, we estimated the reflection coefficient, effective diffusion coefficient, hydraulic conductivity, and specific storage of the sample by fitting the numerical simulation results to the observations. A preliminary numerical simulation of groundwater flow and solute migration was conducted in the area where the core sample was obtained, using the reflection coefficient and other properties estimated in this study. The result suggests that the abnormal pore pressure observed in the region can be explained by chemical osmosis.

  14. Estimation of forest parameters using airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Cohen, J.

    2015-12-01

    Methods for estimating forest characteristics from airborne laser scanning (ALS) data have been introduced by several authors. Tree height (TH) and canopy closure (CC), which describe forest properties, can be used in forestry, construction, and industry applications, as well as in research and decision making. The National Land Survey has been collecting ALS data across Finland since 2008 to generate a nationwide high-resolution digital elevation model. Although these data were collected in leaf-off conditions, they still have the potential to be utilized in forest mapping. This paper presents a method in which these data are used to estimate CC and TH in the boreal forest region. Evaluation was conducted in eight test areas across Finland by comparing the results with corresponding Multi-Source National Forest Inventory (MS-NFI) datasets. The ALS-based CC and TH maps were generally in good agreement with the MS-NFI data. As expected, deciduous forests caused some underestimation of CC and TH, but the effect was not major in any of the test areas. The processing chain has been fully automated, enabling fast generation of forest maps for different areas.
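    Grid-cell canopy metrics of the kind used in such studies can be sketched from ALS return heights. The 2 m canopy threshold and 95th-percentile height below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def canopy_metrics(heights, canopy_threshold=2.0, percentile=95):
    """Estimate canopy closure (CC) and tree height (TH) from ALS
    return heights (metres above ground) within one grid cell.

    CC: fraction of returns above a canopy height threshold.
    TH: upper percentile of the above-threshold return heights.
    """
    heights = np.asarray(heights, dtype=float)
    above = heights > canopy_threshold
    cc = above.mean() if heights.size else 0.0
    th = np.percentile(heights[above], percentile) if above.any() else 0.0
    return cc, th

# toy cell: ground returns near 0 m, canopy returns 5-15 m
cc, th = canopy_metrics([0.1, 0.3, 5.2, 7.8, 12.4, 15.1, 0.0, 14.0])
```

    In practice such metrics are computed per raster cell over the whole point cloud, then compared against reference inventory data, as the abstract does with MS-NFI.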

  15. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    PubMed Central

    Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome the difficulties associated with scarce experimental data and large numbers of parameters, a common scenario in the modeling of such systems. As a case study, the complex real-world problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model, composed of 18 ordinary differential equations and 35 kinetic rates and containing 125 parameters, was investigated. With the procedure, model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous identifiability and estimation approach is appealing for metabolic networks, since a good fit to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial parameter estimates were not available. PMID:25654103

  16. Simultaneous parameters identifiability and estimation of an E. coli metabolic network model.

    PubMed

    Pontes Freitas Alberton, Kese; Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome the difficulties associated with scarce experimental data and large numbers of parameters, a common scenario in the modeling of such systems. As a case study, the complex real-world problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model, composed of 18 ordinary differential equations and 35 kinetic rates and containing 125 parameters, was investigated. With the procedure, model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous identifiability and estimation approach is appealing for metabolic networks, since a good fit to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial parameter estimates were not available. PMID:25654103
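    A common first step in identifiability analyses like the one described above is to rank parameters by scaled output sensitivities before deciding which to estimate. The finite-difference sketch below illustrates that idea on a hypothetical two-parameter model; it is not the authors' procedure.

```python
import numpy as np

def rank_parameters(model, theta, t, delta=1e-3):
    """Rank parameters of model(theta, t) by scaled output sensitivity.

    Column j of S holds a finite-difference estimate of
    (d y / d theta_j) * theta_j over the trajectory; parameters with
    larger column norms are easier to estimate from the data.
    """
    theta = np.asarray(theta, dtype=float)
    y0 = model(theta, t)
    S = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        pert = theta.copy()
        pert[j] *= 1.0 + delta       # relative perturbation of theta_j
        S[:, j] = (model(pert, t) - y0) / delta   # already scaled by theta_j
    norms = np.linalg.norm(S, axis=0)
    return np.argsort(norms)[::-1], norms

# toy two-parameter decay model y = a * exp(-k t)
decay = lambda p, t: p[0] * np.exp(-p[1] * t)
order, norms = rank_parameters(decay, [2.0, 0.1], np.linspace(0.0, 5.0, 20))
```

    Parameters at the bottom of the ranking are candidates for fixing at nominal values, which is one way to shrink a 125-parameter problem to an estimable subset.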

  17. Estimating canopy fuel parameters for Atlantic Coastal Plain forest types.

    SciTech Connect

    Parresol, Bernard, R.

    2007-01-15

    It is necessary to quantify forest canopy characteristics to assess crown fire hazard, prioritize treatment areas, and design treatments to reduce crown fire potential. A number of fire behavior models such as FARSITE, FIRETEC, and NEXUS require four particular canopy fuel parameters as input: 1) canopy cover, 2) stand height, 3) crown base height, and 4) canopy bulk density. These canopy characteristics must be mapped across the landscape at high spatial resolution to accurately simulate crown fire. Currently no models exist to forecast these four canopy parameters for forests of the Atlantic Coastal Plain, a region that supports millions of acres of loblolly, longleaf, and slash pine forests as well as pine-broadleaf and mixed-species broadleaf forests. Many forest cover types are recognized, too many to model efficiently. For expediency, forests of the Savannah River Site are categorized as belonging to 1 of 7 broad forest type groups, based on composition: 1) loblolly pine, 2) longleaf pine, 3) slash pine, 4) pine-hardwood, 5) hardwood-pine, 6) hardwoods, and 7) cypress-tupelo. These 7 broad forest types typify forests of the Atlantic Coastal Plain region, from Maryland to Florida.

  18. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
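    The hybrid global-plus-local idea can be sketched with SciPy, using differential evolution as a stand-in for the GA and the truncated-Newton (`TNC`) method for local refinement. The Himmelblau test function and all settings below are illustrative, not the paper's groundwater model.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def misfit(p):
    # Himmelblau's function: a toy misfit surface with four global minima
    x, y = p
    return (x**2 + y - 11.0)**2 + (x + y**2 - 7.0)**2

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
# global stage (population-based; stands in for the paper's GA) ...
coarse = differential_evolution(misfit, bounds, seed=0, tol=1e-8)
# ... seeds a truncated-Newton local refinement
refined = minimize(misfit, coarse.x, method="TNC", bounds=bounds)
```

    The global stage keeps the local search from stalling in a poor basin, which is the same role the GA plays for the truncated-Newton search in the paper.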

  19. Being surveyed can change later behavior and related parameter estimates.

    PubMed

    Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Karlan, Dean S; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit

    2011-02-01

    Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314

  20. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  1. Being surveyed can change later behavior and related parameter estimates

    PubMed Central

    Zwane, Alix Peterson; Zinman, Jonathan; Van Dusen, Eric; Pariente, William; Null, Clair; Miguel, Edward; Kremer, Michael; Hornbeck, Richard; Giné, Xavier; Duflo, Esther; Devoto, Florencia; Crepon, Bruno; Banerjee, Abhijit

    2011-01-01

    Does completing a household survey change the later behavior of those surveyed? In three field studies of health and two of microlending, we randomly assigned subjects to be surveyed about health and/or household finances and then measured subsequent use of a related product with data that does not rely on subjects' self-reports. In the three health experiments, we find that being surveyed increases use of water treatment products and take-up of medical insurance. Frequent surveys on reported diarrhea also led to biased estimates of the impact of improved source water quality. In two microlending studies, we do not find an effect of being surveyed on borrowing behavior. The results suggest that limited attention could play an important but context-dependent role in consumer choice, with the implication that researchers should reconsider whether, how, and how much to survey their subjects. PMID:21245314

  2. Tumor parameter estimation considering the body geometry by thermography.

    PubMed

    Hossain, Shazzat; Mohammadi, Farah A

    2016-09-01

    Implementation of non-invasive, non-contact, radiation-free thermal diagnostic tools requires an accurate correlation between surface temperature and the interior physiology underlying living bio-heat phenomena. Such associations in the chest, forearm, and natural and deformed breasts have been investigated using finite element analysis (FEA), where the geometry and heterogeneity of an organ are accounted for by creating anatomically accurate FEA models. These quantitative links enter the proposed evolutionary methodology for estimating unknown physio-thermo-biological parameters, including the depth, size, and metabolic rate of the underlying nodule. A custom genetic algorithm (GA) is tailored to parameterize a tumor by minimizing a fitness function. The study employed the finite element method to develop simulated data sets and the gradient matrix. Furthermore, simulated thermograms are obtained by enveloping the data sets with ±10% random noise. PMID:27416548

  3. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  4. A bound for the smoothing parameter in certain well-known nonparametric density estimators

    NASA Technical Reports Server (NTRS)

    Terrell, G. R.

    1980-01-01

    Two classes of nonparametric density estimators, the histogram and the kernel estimator, both require a choice of smoothing parameter, or 'window width'. The optimal choice of this parameter is in general very difficult. An upper bound on the choices, depending only on the standard deviation of the distribution, is described.
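    The flavor of such a bound can be sketched with the commonly quoted "maximal smoothing" constants, which depend only on the sample standard deviation. The constants below are the usual textbook values for the histogram and the Gaussian kernel estimator; they are stated here as assumptions, not taken from this abstract.

```python
import numpy as np

def oversmoothed_bandwidths(data):
    """Upper bounds on the smoothing parameter that depend only on the
    sample standard deviation, in the spirit of the bound described:
    histogram bin width  <= ~3.729 * sigma * n**(-1/3),
    Gaussian kernel bandwidth <= ~1.144 * sigma * n**(-1/5).
    """
    data = np.asarray(data, dtype=float)
    n, sigma = data.size, data.std(ddof=1)
    return 3.729 * sigma * n ** (-1 / 3), 1.144 * sigma * n ** (-1 / 5)

rng = np.random.default_rng(0)
h_hist, h_kern = oversmoothed_bandwidths(rng.normal(0.0, 1.0, 1000))
```

    Any data-driven bandwidth larger than these bounds is oversmoothing, so they give a cheap sanity check on automatic bandwidth selectors.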

  5. Bias-compensation-based least-squares estimation with a forgetting factor for output error models with white noise

    NASA Astrophysics Data System (ADS)

    Wu, A. G.; Chen, S.; Jia, D. L.

    2016-05-01

    In this paper, a bias-compensation-based recursive least-squares (LS) estimation algorithm with a forgetting factor is proposed for output error models. First, for the unknown white noise, the so-called weighted average variance is introduced. With this weighted average variance, a bias-compensation term is formulated to achieve bias-eliminated estimates of the system parameters. The weighted average variance is then estimated. Finally, the overall estimation algorithm is obtained by combining the estimation of the weighted average variance with the recursive LS algorithm with a forgetting factor. The effectiveness of the proposed identification algorithm is verified by a numerical example.
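    The core recursion, recursive least squares with a forgetting factor, can be sketched as follows. The paper's bias-compensation term for the output-error noise is omitted, so this shows only the underlying algorithm on a noise-free-regressor example.

```python
import numpy as np

def rls_forgetting(phis, ys, n_params, lam=0.98):
    """Recursive least squares with forgetting factor lam:

    K     = P phi / (lam + phi' P phi)
    theta = theta + K * (y - phi' theta)
    P     = (P - K phi' P) / lam
    """
    theta = np.zeros(n_params)
    P = np.eye(n_params) * 1e4        # large initial covariance
    for phi, y in zip(phis, ys):
        Pphi = P @ phi
        K = Pphi / (lam + phi @ Pphi)
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, Pphi)) / lam   # K phi' P, with P symmetric
    return theta

# identify y = 1.5*u1 - 0.7*u2 from noisy output data
rng = np.random.default_rng(2)
U = rng.normal(size=(500, 2))
y = U @ np.array([1.5, -0.7]) + rng.normal(0, 0.01, 500)
theta = rls_forgetting(U, y, 2)
```

    With output-error (colored-through-the-model) noise this plain recursion is biased, which is exactly what the paper's weighted-average-variance compensation term corrects.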

  6. Two-Dimensional Advective Transport in Ground-Water Flow Parameter Estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters.

  7. A clustering approach for estimating parameters of a profile hidden Markov model.

    PubMed

    Aghdam, Rosa; Pezeshk, Hamid; Malekpour, Seyed Amir; Shemehsavar, Soudabeh; Eslahchi, Changiz

    2013-01-01

    A Profile Hidden Markov Model (PHMM) is a standard form of Hidden Markov Model used for modeling protein and DNA sequence families based on multiple alignment. In this paper, we implement the Baum-Welch algorithm and the Bayesian Monte Carlo Markov Chain (BMCMC) method for estimating the parameters of a small artificial PHMM. In order to improve the prediction accuracy of the parameter estimates of the PHMM, we classify the training data using the weighted values of sequences in the PHMM and then apply an algorithm for estimating its parameters. The results show that the BMCMC method performs better than maximum likelihood estimation. PMID:23865165
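    The E-step building block of Baum-Welch, the scaled forward algorithm for the sequence log-likelihood, can be sketched for a toy (non-profile) HMM. The model values below are illustrative, not the paper's PHMM.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of an observation sequence under an HMM via the
    forward algorithm with per-step scaling (the quantity Baum-Welch
    increases at every iteration).  pi: initial state probabilities,
    A: transition matrix, B: emission matrix (states x symbols),
    obs: sequence of symbol indices.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

# toy 2-state, 2-symbol model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
ll = forward_log_likelihood([0, 1, 0], pi, A, B)
```

    Baum-Welch wraps this forward pass (plus a backward pass) in an EM loop; the BMCMC alternative in the paper instead samples the parameter posterior.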

  8. Estimation of cauliflower mass transfer parameters during convective drying

    NASA Astrophysics Data System (ADS)

    Sahin, Medine; Doymaz, İbrahim

    2016-05-01

    The study was conducted to evaluate the effect of pre-treatments, such as citric acid and hot-water blanching, and of air temperature on the drying and rehydration characteristics of cauliflower slices. Experiments were carried out at four drying air temperatures of 50, 60, 70 and 80 °C with an air velocity of 2.0 m/s. It was observed that the drying and rehydration characteristics of cauliflower slices were greatly influenced by air temperature and pre-treatment. Six commonly used mathematical models were evaluated to predict the drying kinetics of cauliflower slices. The Midilli et al. model described the drying behaviour of cauliflower slices at all temperatures better than the other models. The values of effective moisture diffusivity (Deff) were determined using Fick's law of diffusion and ranged between 4.09 × 10⁻⁹ and 1.88 × 10⁻⁸ m²/s. Activation energy was estimated by an Arrhenius-type equation and was 23.40, 29.09 and 26.39 kJ/mol for the citric acid, blanched and control samples, respectively.
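    Deff and activation-energy estimates of this kind are commonly obtained from the first term of the Fick slab solution and an Arrhenius fit. The sketch below uses that standard route on synthetic data with an assumed slab half-thickness; it is not the paper's data.

```python
import numpy as np

def deff_from_drying_curve(t, MR, half_thickness):
    # For the first term of the Fick slab solution,
    # MR = (8/pi^2) exp(-pi^2 Deff t / (4 L^2)),
    # the slope of ln(MR) vs t equals -pi^2 Deff / (4 L^2).
    slope = np.polyfit(t, np.log(MR), 1)[0]
    return -slope * 4.0 * half_thickness**2 / np.pi**2

def activation_energy(T_kelvin, D):
    # Arrhenius: ln D = ln D0 - (Ea/R) * (1/T)
    R = 8.314  # J/(mol K)
    slope = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(D), 1)[0]
    return -slope * R

# synthetic round trip with an assumed 5 mm slab half-thickness
L = 0.005                                    # m
t = np.linspace(600.0, 7200.0, 12)           # s
Deff_true = 1e-8                             # m^2/s
MR = (8.0 / np.pi**2) * np.exp(-np.pi**2 * Deff_true * t / (4.0 * L**2))
Deff_est = deff_from_drying_curve(t, MR, L)  # recovers ~1e-8 m^2/s
```

    Repeating the Deff fit at each drying temperature and passing the results to `activation_energy` gives the kJ/mol values the abstract reports.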

  9. Adaptive neuro-fuzzy estimation of optimal lens system parameters

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Pavlović, Nenad T.; Shamshirband, Shahaboddin; Mat Kiah, Miss Laiha; Badrul Anuar, Nor; Idna Idris, Mohd Yamani

    2014-04-01

    Due to the popularization of digital technology, the demand for high-quality digital products has become critical. The quantitative assessment of image quality is an important consideration in any type of imaging system, so a design that meets the requirements of good image quality is desirable. The lens system design is a crucial factor for good image quality, and the optimization procedure is the main part of the lens system design methodology. Lens system optimization is a complex non-linear optimization task, often with intricate physical constraints, for which there are no analytical solutions; lens system design therefore provides ideal problems for intelligent optimization algorithms. There are many tools that can be used to measure optical performance. One very useful tool is the spot diagram, which gives an indication of the image of a point object. In this paper, one optimization criterion for the lens system, the spot size radius, is considered. This paper presents new lens optimization methods based on an adaptive neuro-fuzzy inference system (ANFIS). This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated.

  10. Frequency-dependent core shifts and parameter estimation in Blazars

    NASA Astrophysics Data System (ADS)

    Agarwal, Aditi

    2016-07-01

    We study the core shift effect in the parsec-scale jet of blazars using the 4.8-36.8 GHz radio light curves obtained from four decades of continuous monitoring. From a piecewise Gaussian fit to each flare, time lags between the observation frequencies and spectral indices (α) based on peak amplitudes (A) are determined. The index k is calculated and found to be ~1, indicating equipartition between the magnetic field energy density and the particle energy density. The mean magnetic field strengths at 1 pc (B1) and at the core (Bcore) are inferred and found to be consistent with previous estimates. The measure of the core position offset is also performed by averaging over all frequency pairs. Based on the statistical trend shown by the measured core radius as a function of frequency, we infer that the synchrotron opacity model may not be valid for all cases. A Fourier periodogram analysis yields power-law slopes in the range -1.6 to -3.5 describing the power spectral density shape and gives bend timescales. This result, and both positive and negative spectral indices, indicate that the flares originate from multiple shocks in a small region. Important objectives met in our study include: the demonstration of the computational efficiency and statistical basis of the piecewise Gaussian fit; consistency with previously reported results; and evidence for the core shift dependence on observation frequency and its utility in jet diagnostics in the region close to the resolving limit of very long baseline interferometry observations.

  11. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
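    Ranking a model hierarchy by Akaike indices, as done above, can be sketched for least-squares fits. The residual sums of squares and parameter counts below are made up for illustration.

```python
import numpy as np

def aic(rss, n, k):
    """Akaike index for a least-squares fit: n*ln(RSS/n) + 2k.
    Constant terms are dropped, since they cancel when comparing
    models fitted to the same data."""
    return n * np.log(rss / n) + 2 * k

n = 40  # number of data points
fits = {"2-parameter model": (12.0, 2),    # (RSS, number of parameters)
        "4-parameter model": (9.5, 4),
        "6-parameter model": (9.4, 6)}
scores = {name: aic(rss, n, k) for name, (rss, k) in fits.items()}
ranking = sorted(scores, key=scores.get)   # lower AIC is better
```

    Here the 4-parameter model wins: the 6-parameter model barely improves the fit, so its extra parameters are penalized, which is the trade-off the Akaike hierarchy encodes.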

  12. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary constraint in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line, time-domain estimation of these parameters.
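    The Monte Carlo approach described can be sketched with a three-element Windkessel driven by a toy flow waveform: total resistance is held fixed while characteristic resistance and compliance are sampled, and the candidate minimizing the pressure mismatch is kept. All waveforms and parameter ranges below are synthetic assumptions, not the canine data.

```python
import numpy as np

def windkessel_pressure(Q, dt, Rc, Rp, C, P0=80.0):
    """Aortic pressure from a three-element Windkessel (characteristic
    resistance Rc, peripheral resistance Rp, compliance C) driven by
    flow Q, integrated with forward Euler on the stored pressure Pc."""
    P = np.empty_like(Q)
    Pc = P0
    for i, q in enumerate(Q):
        Pc += dt * (q / C - Pc / (Rp * C))
        P[i] = Pc + Rc * q
    return P

rng = np.random.default_rng(3)
dt = 1e-3
t = np.arange(0.0, 0.8, dt)
Q = 400.0 * np.sin(np.pi * t / 0.3) * (t < 0.3)      # toy systolic flow, mL/s
P_meas = windkessel_pressure(Q, dt, 0.05, 1.0, 1.3)  # synthetic "measured" pressure
Rtot = 0.05 + 1.0                                    # held fixed, as in the paper

best, best_err = None, np.inf
for _ in range(2000):
    Rc = rng.uniform(0.01, 0.2)
    C = rng.uniform(0.5, 3.0)
    P = windkessel_pressure(Q, dt, Rc, Rtot - Rc, C)
    err = np.sqrt(np.mean((P - P_meas) ** 2))
    if err < best_err:
        best, best_err = (Rc, Rtot - Rc, C), err
```

    Fixing the total resistance from the flow removes one degree of freedom, so the random search only has to resolve the Rc/C split, which keeps the method cheap enough for on-line use.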

  13. Estimating cotton growth and developmental parameters through remote sensing

    NASA Astrophysics Data System (ADS)

    Reddy, K. Raja; Zhao, Duli; Kakani, Vijaya Gopal; Read, John J.; Sailaja, K.

    2004-01-01

    Three field experiments of nitrogen (N) rates, plant growth regulator (PIX) applications, and irrigation regimes were conducted in 2001 and 2002 to investigate relationships between hyperspectral reflectance (400-2500 nm) and cotton (Gossypium hirsutum L.) growth, physiology, and yield. Leaf and canopy spectral reflectance and leaf N concentration were measured weekly or biweekly during the growing season. Plant height, mainstem nodes, leaf area, and aboveground biomass were also determined by harvesting 1-m row plants in each plot at different growth stages. Cotton seed and lint yields were obtained by mechanical harvest. From canopy hyperspectral reflectance data, several reflectance indices, including the simple ratio (SR) and normalized difference vegetation index (NDVI), were calculated. Linear relationships were found between leaf N concentration and a ratio of leaf reflectance at wavelengths 517 and 413 nm (R517/R413) (r² = 0.70, n = 150). Nitrogen deficiency significantly increased leaf and canopy reflectance in the visible range. Plant height and mainstem nodes were related closely to an SR (R750/R550) according to either a logarithmic or linear function (r² = 0.63-0.68). The relationships between LAI or biomass and canopy reflectance could be expressed in an exponential fashion with the SR or NDVI [(R935-R661)/(R935+R661)] (r² = 0.67-0.78). Lint yields were highly correlated with the NDVI around the first flower stage (r² = 0.64). Therefore, the leaf reflectance ratio R517/R413 may be used to estimate leaf N concentration, and the NDVI around the first flower stage may provide a useful tool to predict lint yield in cotton.
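    The reflectance indices used in the study can be computed directly from a hyperspectral spectrum. The toy spectrum below is synthetic, and the nearest-band lookup is an assumption about how discrete bands map to the quoted wavelengths.

```python
import numpy as np

def vegetation_indices(reflectance, wavelengths):
    """Indices from the study, computed from a reflectance spectrum:
    SR = R750/R550, leaf-N ratio R517/R413, and
    NDVI = (R935 - R661) / (R935 + R661)."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    R = lambda w: reflectance[np.argmin(np.abs(wavelengths - w))]  # nearest band
    sr = R(750) / R(550)
    n_ratio = R(517) / R(413)
    ndvi = (R(935) - R(661)) / (R(935) + R(661))
    return sr, n_ratio, ndvi

# toy spectrum, 1 nm sampling: low visible reflectance, high NIR plateau
wl = np.arange(400, 1001)
refl = np.where(wl > 700, 0.45, 0.08) + 0.01 * np.sin(wl / 50.0)
sr, n_ratio, ndvi = vegetation_indices(refl, wl)
```

    A healthy canopy has low red and high near-infrared reflectance, so SR comes out well above 1 and NDVI well above 0, the regimes the regressions in the abstract exploit.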

  14. Bayesian estimation of regularization parameters for deformable surface models

    SciTech Connect

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-02-20

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-by-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels, and background count level are produced. The strengths of the priors, or hyperparameters, are determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm of Bai et al. that calculates the needed determinant efficiently. The authors demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
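    The evidence computation described, a Gaussian (Laplace) approximation to the posterior followed by evaluation of the resulting normal integral, can be sketched as follows. On an exactly Gaussian log-posterior the approximation is exact, which the toy check exploits; this is the general recipe, not the paper's determinant algorithm.

```python
import numpy as np

def log_evidence_laplace(log_post, theta_map, hessian):
    """Laplace approximation to the log evidence: assume the posterior
    is approximately normal around the MAP point, so
    log Z ~ log_post(MAP) + (d/2) log(2 pi) - (1/2) log det(H),
    where H is minus the Hessian of log_post at the MAP."""
    d = theta_map.size
    sign, logdet = np.linalg.slogdet(hessian)
    assert sign > 0, "Hessian must be positive definite at the MAP"
    return log_post(theta_map) + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet

# for an exactly Gaussian log posterior the approximation is exact
A = np.array([[1.5, 0.2], [0.2, 0.8]])
g = lambda x: -0.5 * x @ A @ x     # unnormalized Gaussian, MAP at the origin
lz = log_evidence_laplace(g, np.zeros(2), A)
```

    The expensive step at the paper's scale is the log-determinant of a huge covariance matrix, which is where the cited determinant algorithm of Bai et al. comes in.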

  15. ESTIMATION OF RELATIVISTIC ACCRETION DISK PARAMETERS FROM IRON LINE EMISSION

    SciTech Connect

    V. PARIEV; B. BROMLEY; W. MILLER

    2001-03-01

    The observed iron Kα fluorescence lines in Seyfert 1 galaxies provide strong evidence for an accretion disk near a supermassive black hole as the source of the emission. Here we present an analysis of the geometrical and kinematic properties of the disk based on the extreme frequency shifts of a line profile, as determined by measurable flux in both the red and blue wings. The edges of the line are insensitive to the distribution of the X-ray flux over the disk, and hence provide a robust alternative to profile fitting of disk parameters. Our approach yields new, strong bounds on the inclination angle of the disk and the location of the emitting region. We apply our method to interpret observational data from MCG-6-30-15 and find that the commonly assumed inclination of 30° for the accretion disk in MCG-6-30-15 is inconsistent with the position of the blue edge of the line at the 3σ level. A thick turbulent disk model or the presence of highly ionized iron may reconcile the bounds on inclination from the line edges with the full line profile fits based on simple, geometrically thin disk models. The bounds on the innermost radius of disk emission indicate that the black hole in MCG-6-30-15 is rotating at more than 30% of the theoretical maximum. When applied to data from NGC 4151, our method gives bounds on the inclination angle of the X-ray emitting inner disk of 50° ± 10°, consistent with the presence of an ionization cone grazing the disk, as proposed by Pedlar et al. (1993). The frequency extrema analysis also provides limits on the innermost disk radius in another Seyfert 1 galaxy, NGC 3516, and is suggestive of a thick disk model.

  16. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

    The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line of sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least-squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table are required input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  17. Synchronization-based approach for estimating all model parameters of chaotic systems

    NASA Astrophysics Data System (ADS)

    Konnur, Rahul

    2003-02-01

    The problem of dynamic estimation of all parameters of a model representing chaotic and hyperchaotic systems using information from a scalar measured output is solved. The variational calculus based method is robust in the presence of noise, enables online estimation of the parameters and is also able to rapidly track changes in operating parameters of the experimental system. The method is demonstrated using the Lorenz, Rossler chaos, and hyperchaos models. Its possible application in decoding communications using chaos is discussed.

  18. Hydrological Parameter Estimation (HYPE) System for Bayesian Exploration of Parameter Sensitivities in an Arctic Watershed

    NASA Astrophysics Data System (ADS)

    Morton, D.; Bolton, W. R.; Endalamaw, A. M.; Young, J. M.; Hinzman, L. D.

    2014-12-01

    As part of a study on how vegetation water use and permafrost dynamics impact stream flow in the boreal forest discontinuous permafrost zone, a Bayesian modeling framework has been developed to assess the effect of parameter uncertainties in an integrated vegetation water use and simple, first-order, non-linear hydrological model. Composed of a front-end Bayes driver and a backend interactive hydrological model, the system is meant to facilitate rapid execution of seasonal simulations driven by hundreds to thousands of parameter variations to analyze the sensitivity of the system to a varying parameter space in order to derive more effective parameterizations for larger-scale simulations. The backend modeling component provides an Application Programming Interface (API) for introducing parameters in the form of constant or time-varying scalars or spatially distributed grids. In this work, we describe the basic structure of the flexible, object-oriented modeling system and test its performance against collected basin data from headwater catchments of varying permafrost extent and ecosystem structure (deciduous versus coniferous vegetation). We will also analyze model and sub-model (evaporation, transpiration, precipitation and streamflow) sensitivity to parameters through application of the system to two catchment basins of the Caribou-Poker Creeks Research Watershed (CPCRW) located in Interior Alaska. The C2 basin is a mostly permafrost-free, south facing catchment dominated by deciduous vegetation. The C3 basin is underlain by more than 50% permafrost and is dominated by coniferous vegetation. The ultimate goal of the modeling system is to improve parameterizations in mesoscale hydrologic models, and application of the HYPE system to the well-instrumented CPCRW provides a valuable opportunity for experimentation.

  19. EM algorithm in estimating the 2- and 3-parameter Burr Type III distributions

    NASA Astrophysics Data System (ADS)

    Ismail, Nor Hidayah Binti; Khalid, Zarina Binti Mohd

    2014-07-01

    The Burr Type III distribution has been applied in studies of income, wage and wealth. It is suitable for fitting lifetime data since it has a flexible shape and controllable scale parameters. The popularity of the Burr Type III distribution has increased because it encompasses the characteristics of other distributions such as the logistic and exponential. The Burr Type III distribution comes in two forms: a two-parameter distribution with two shape parameters, and a three-parameter distribution with a scale parameter and two shape parameters. The expectation-maximization (EM) algorithm is selected in this paper to estimate the two- and three-parameter Burr Type III distributions. Complete and censored data are simulated from the pdf and cdf of the Burr Type III distributions in parametric form. The EM estimates are then compared with estimates from the maximum likelihood estimation (MLE) approach through the mean square error, to determine which approach yields estimates closer to the true parameters. The results show that the EM algorithm estimates outperform the MLE estimates for the two- and three-parameter Burr Type III distributions for both complete and censored data.
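
The EM derivation itself is paper-specific, but the MLE baseline it is compared against can be sketched directly. A minimal sketch in Python/SciPy for complete data, assuming the standard two-parameter Burr Type III density f(x) = c k x^(-c-1) (1 + x^(-c))^(-(k+1)); the sample size, true parameters and starting values below are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Draw from the two-parameter Burr Type III distribution,
# F(x) = (1 + x**-c)**-k, by inverting the cdf.
c_true, k_true = 3.0, 2.0
u = rng.uniform(size=5000)
x = (u ** (-1.0 / k_true) - 1.0) ** (-1.0 / c_true)

def neg_log_lik(params, data):
    c, k = params
    if c <= 0.0 or k <= 0.0:
        return np.inf
    # log f(x) = log c + log k - (c+1) log x - (k+1) log(1 + x**-c)
    return -np.sum(np.log(c) + np.log(k)
                   - (c + 1.0) * np.log(data)
                   - (k + 1.0) * np.log1p(data ** -c))

res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
c_hat, k_hat = res.x
```

With 5000 observations the direct MLE recovers both shape parameters closely; censoring would change only the likelihood, not the optimization machinery.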

  20. The Impact of Fallible Item Parameter Estimates on Latent Trait Recovery

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai

    2010-01-01

    In this paper we propose an upward correction to the standard error (SE) estimation of θ_ML, the maximum likelihood (ML) estimate of the latent trait in item response theory (IRT). More specifically, the upward correction is provided for the SE of θ_ML when item parameter estimates obtained from an independent pretest…

  1. Parameter estimation using carbon-14 ages: Lessons from the Danube-Tisza interfluvial region of Hungary

    USGS Publications Warehouse

    Sanford, W.E.; Deak, J.; Revesz, K.

    2002-01-01

    Parameter estimation was conducted on a groundwater model of the Danube-Tisza interfluvial region of Hungary. The model was calibrated using 300 water levels and 48 14C ages. The model provided a test of regression methods for a system with a large number of observations. Up to 103 parameters representing horizontal and vertical hydraulic conductivities and boundary conductances were assigned using point values and bilinear interpolation between points. The lowest errors were obtained using an iterative approach with groups of parameters, rather than estimating all of the parameters simultaneously. The model with 48 parameters yielded the lowest standard error of regression.

  2. Use of timesat to estimate phenological parameters in Northwestern Patagonia

    NASA Astrophysics Data System (ADS)

    Oddi, Facundo; Minotti, Priscilla; Ghermandi, Luciana; Lasaponara, Rosa

    2015-04-01

    Under a global change context, ecosystems are under high pressure, and ecological science plays a key role in the monitoring and assessment of natural resources. For effective resource management, it is useful to develop knowledge of ecosystem functioning from a spatio-temporal perspective. Satellite imagery periodically captures the spectral response of the earth, and remote sensing has been widely used as a classification and change-detection tool, making it possible to evaluate intra- and inter-annual plant dynamics. Vegetation spectral indices (e.g., NDVI) are particularly suitable for studying spatio-temporal processes related to plant phenology, and remote-sensing-specific software, such as TIMESAT, has been developed to carry out time series analysis of spectral indices. We applied the TIMESAT software to a series of 25 years of NDVI bi-monthly composites (240 images covering the period 1982-2006) from the NOAA-AVHRR sensor (8 x 8 km) to assess plant phenology over 900,000 ha of shrubby grasslands in northwestern Patagonia, Argentina. The study area corresponds to a Mediterranean environment and is part of a gradient defined by a sharp west-east drop in the precipitation regime (600 mm to 280 mm). We fitted the temporal series of NDVI data to double logistic functions by least-squares methods, evaluating three seasonality parameters: a) start of the growing season, b) growing season length, c) NDVI seasonal integral. According to the models fitted by TIMESAT, the average start of the growing season was in the second half of September (± 10 days), with the latest onsets in the east (drier areas). The average growing season length was 180 days (± 15 days), without a clear spatial trend. The NDVI seasonal integral showed a clear decreasing trend in the west-east direction, following the precipitation gradient. This temporal and spatial information reveals important patterns of ecological interest, which can be of great importance for environmental monitoring.
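
The double-logistic fit that TIMESAT performs can be sketched with SciPy's curve_fit on synthetic data. The functional form below is a common TIMESAT-style parameterization (green-up and senescence logistics); all numbers — dates, amplitudes, noise level — are invented for illustration, not taken from the AVHRR series.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def double_logistic(t, base, amp, k1, t1, k2, t2):
    """TIMESAT-style double logistic: green-up centered at day t1,
    senescence centered at day t2."""
    return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                         - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

# Synthetic one-season NDVI series, sampled every 15 days.
t = np.arange(0.0, 365.0, 15.0)
rng = np.random.default_rng(1)
true_params = (0.15, 0.45, 0.08, 110.0, 0.06, 260.0)
ndvi = double_logistic(t, *true_params) + rng.normal(0.0, 0.01, t.size)

p0 = (0.1, 0.4, 0.05, 100.0, 0.05, 250.0)   # rough initial guesses
popt, _ = curve_fit(double_logistic, t, ndvi, p0=p0, maxfev=20000)

start_of_season = popt[3]             # rising-limb inflection (day of year)
season_length = popt[5] - popt[3]     # between the two inflections
seasonal_integral = trapezoid(double_logistic(t, *popt) - popt[0], t)
```

The three quantities extracted at the end mirror the abstract's seasonality parameters: start of season, season length, and the NDVI seasonal integral above the base level.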

  3. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  4. Using Spreadsheets to Help Students Think Recursively

    ERIC Educational Resources Information Center

    Webber, Robert P.

    2012-01-01

    Spreadsheets lend themselves naturally to recursive computations, since a formula can be defined as a function of one or more preceding cells. A hypothesized closed form for the "n"th term of a recursive sequence can be tested easily by using a spreadsheet to compute a large number of the terms. Similarly, a conjecture about the limit of a series…
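
The spreadsheet experiment translates directly into a few lines of code: generate the recursive column, then check it against the conjectured closed form. The sequence below (a(n) = 2a(n-1) + 1 with closed form 2^(n+1) - 1) is our own example, not one from the article.

```python
# The "fill-down" column a spreadsheet would produce for the recursion
# a(n) = 2*a(n-1) + 1 with a(0) = 1, checked against the conjectured
# closed form a(n) = 2**(n+1) - 1 over many terms.
def recursive_terms(n_terms):
    terms = [1]
    for _ in range(n_terms - 1):
        terms.append(2 * terms[-1] + 1)
    return terms

terms = recursive_terms(50)
closed_form = [2 ** (n + 1) - 1 for n in range(50)]
matches = terms == closed_form   # agreement over 50 terms
```

Agreement over many terms is evidence for the conjecture, which an induction proof can then confirm.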

  5. The Recursive Paradigm: Suppose We Already Knew.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1995-01-01

    Explains the recursive model in discrete mathematics through five examples and problems. Discusses the relationship between the recursive model, mathematical induction, and inductive reasoning and the relevance of these concepts in the school curriculum. Provides ideas for approaching this material with students. (Author/DDD)

  6. The Effects on Parameter Estimation of Correlated Abilities Using a Two-Dimensional, Two-Parameter Logistic Item Response Model.

    ERIC Educational Resources Information Center

    Batley, Rose-Marie; Boss, Marvin W.

    The effects of correlated dimensions on parameter estimation were assessed, using a two-dimensional item response theory model. Past research has shown the inadequacies of the unidimensional analysis of multidimensional item response data. However, few studies have reported multidimensional analysis of multidimensional data, and, in those using…

  7. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  8. Correction of biased climate simulated by biased physics through parameter estimation in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun

    2016-09-01

    Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.

  9. Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter

    PubMed Central

    Reddy, Chinthala P.; Rathi, Yogesh

    2016-01-01

    Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956

  12. Conjugate gradient algorithms using multiple recursions

    SciTech Connect

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
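
For the classical symmetric positive definite case, the single short recursion in question is visible in plain conjugate gradients, where each new direction vector is p = r + βp. A minimal sketch (the 30×30 SPD test matrix is arbitrary):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain CG for SPD A: each new direction is the single short
    recursion p = r + beta*p, the recursion form covered by the
    Faber-Manteuffel conditions discussed above."""
    n = b.size
    x = np.zeros(n)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # the short direction recursion
        rs_old = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30 * np.eye(30)   # symmetric positive definite test matrix
b = rng.normal(size=30)
x = conjugate_gradient(A, b)
```

The double recursions of Jagels and Reichel for (shifted) unitary matrices replace this single update with a pair of coupled short updates; the point of the abstract is to study that more general recursion structure.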

  13. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  14. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  15. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

    NASA Astrophysics Data System (ADS)

    Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.

    2016-03-01

    A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation for the Lorenz system is formulated as a multidimensional optimization problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO-ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO-ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO-ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations of the Lorenz model and comparisons with results obtained by other algorithms show that PSO-ACO is a very powerful tool for parameter estimation with high accuracy and low deviations.
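
The ACO stage is specific to the paper, but the PSO backbone it hybridizes can be sketched on a standard benchmark function, mirroring the paper's first validation step. The inertia and acceleration constants below are common textbook values, not necessarily the paper's settings.

```python
import numpy as np

def sphere(x):
    # Classic benchmark: global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def pso(objective, dim, n_particles=40, iters=300, seed=3):
    """Plain global-best PSO (no ACO stage): a baseline sketch,
    not the published hybrid."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = float(pbest_val.min())
    w, c1, c2 = 0.72, 1.49, 1.49        # inertia and acceleration weights
    for _ in range(iters):
        r1 = rng.uniform(size=pos.shape)
        r2 = rng.uniform(size=pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < gbest_val:
            gbest_val = float(vals.min())
            gbest = pos[np.argmin(vals)].copy()
    return gbest, gbest_val

best, best_val = pso(sphere, dim=3)
```

For the actual Lorenz estimation step, the objective would instead integrate the Lorenz equations with candidate parameters and score the misfit to the observed trajectory; the swarm machinery is unchanged.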

  16. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    PubMed

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic-differential-equation-based Bayesian approach, and a chemical-master-equation-based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODEs). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates per approach. These are compared with respect to their deviation from the true parameters and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of such an algorithm shows only minor performance differences. PMID:26826353
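
As a concrete instance of the simplest deterministic-model route, one can simulate the immigration-death process exactly (Gillespie's algorithm) and estimate the parameter ratio from the deterministic steady state of dn/dt = λ - μn. The rates and time horizon below are invented for illustration.

```python
import numpy as np

def gillespie_immigration_death(lam, mu, t_end, seed):
    """Exact stochastic simulation: immigration at rate lam,
    death of each individual at rate mu."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    times, states = [0.0], [0]
    while t < t_end:
        total = lam + mu * n
        t += rng.exponential(1.0 / total)
        if rng.uniform() < lam / total:
            n += 1                      # immigration event
        else:
            n -= 1                      # death event
        times.append(t)
        states.append(n)
    return np.array(times), np.array(states)

# Deterministic-model route: dn/dt = lam - mu*n has stationary mean
# lam/mu, so the long time average of one realization estimates it.
times, states = gillespie_immigration_death(lam=10.0, mu=0.5,
                                            t_end=2000.0, seed=6)
dt = np.diff(times)
ratio_hat = float(np.sum(states[:-1] * dt) / times[-1])  # true lam/mu = 20
```

This time-average estimator identifies only the ratio λ/μ; separating the two rates is exactly where methods that model the intrinsic fluctuations (MSS, Bayesian SDE, master-equation approaches) earn their keep.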

  17. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.

  18. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
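
SPEKF's exact mean-and-covariance propagation formulas are model-specific, but the underlying idea — treat an uncertain parameter as an extra, slowly varying state and let the filter update it from the observations — can be sketched with a scalar extended Kalman filter. Every number below (model, noise levels, initial guesses) is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Truth: scalar model x[t+1] = a*x[t] + forcing + noise, 'a' unknown
# to the filter.
a_true, forcing = 0.9, 1.0
T = 400
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a_true * x[t] + forcing + rng.normal(0.0, 0.1)
obs = x + rng.normal(0.0, 0.2, T)

# EKF on the augmented state z = [x, a]; the parameter gets a small
# random walk so the filter can keep adjusting it.
z = np.array([0.0, 0.5])               # initial guesses for [x, a]
P = np.diag([1.0, 1.0])
Q = np.diag([0.1 ** 2, 1e-5])          # process noise + parameter walk
R = 0.2 ** 2
H = np.array([[1.0, 0.0]])             # we observe x only
for t in range(1, T):
    # Predict: x' = a*x + forcing, a' = a; F is the Jacobian at z.
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + forcing, z[1]])
    P = F @ P @ F.T + Q
    # Update with the noisy observation of x.
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    z = z + K[:, 0] * (obs[t] - z[0])
    P = (np.eye(2) - K @ H) @ P

a_hat = z[1]   # filtered parameter estimate
```

The filter steadily pulls the parameter estimate toward the true value 0.9; SPEKF replaces this linearization with exact statistics for its stochastic parameterization, which is what makes the test models stringent.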

  19. Estimability and dependency analysis of model parameters based on delay coordinates

    NASA Astrophysics Data System (ADS)

    Schumann-Bischoff, J.; Luther, S.; Parlitz, U.

    2016-09-01

    In data-driven system identification, values of parameters and not observed variables of a given model of a dynamical system are estimated from measured time series. We address the question of estimability and redundancy of parameters and variables, that is, whether unique results can be expected for the estimates or whether, for example, different combinations of parameter values would provide the same measured output. This question is answered by analyzing the null space of the linearized delay coordinates map. Examples with zero-dimensional, one-dimensional, and two-dimensional null spaces are presented employing the Hindmarsh-Rose model, the Colpitts oscillator, and the Rössler system.

  20. Parameters estimation of sandwich beam model with rigid polyurethane foam core

    NASA Astrophysics Data System (ADS)

    Barbieri, Nilson; Barbieri, Renato; Winikes, Luiz Carlos

    2010-02-01

    In this work, the physical parameters of sandwich beams made with the association of hot-rolled steel, Polyurethane rigid foam and High Impact Polystyrene, used for the assembly of household refrigerators and food freezers are estimated using measured and numeric frequency response functions (FRFs). The mathematical models are obtained using the finite element method (FEM) and the Timoshenko beam theory. The physical parameters are estimated using the amplitude correlation coefficient and genetic algorithm (GA). The experimental data are obtained using the impact hammer and four accelerometers displaced along the sample (cantilevered beam). The parameters estimated are Young's modulus and the loss factor of the Polyurethane rigid foam and the High Impact Polystyrene.

  1. GEODYN system description, volume 1. [computer program for estimation of orbit and geodetic parameters

    NASA Technical Reports Server (NTRS)

    Chin, M. M.; Goad, C. C.; Martin, T. V.

    1972-01-01

    A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.

  2. Estimation of finite population parameters with auxiliary information and response error.

    PubMed

    González, L M; Singer, J M; Stanek, E J

    2014-10-01

    We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error.

  3. Design of a recursive vector processor using polynomial splines

    NASA Technical Reports Server (NTRS)

    Kim, C. S.; Shen, C. N.

    1980-01-01

    The problem of obtaining smoothed estimates of function values, particularly their derivatives, from a finite set of inaccurate measurements is considered. A recursive two-dimensional vector processor is introduced as an approximation to the nonrecursive constrained least-squares estimation. Here, piecewise bicubic Hermite polynomials are extensively used as approximating functions, and the smoothing integral is converted to a discrete quadratic form. This makes it possible to convert the problem of fitting an approximating function to one of estimating the function values and derivatives at the nodes.
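
The recursive alternative to batch constrained least squares that motivates such processors can be sketched with the standard recursive least-squares (RLS) update on a toy linear fit; the bicubic Hermite machinery of the paper is not reproduced here, and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# fit y = w0 + w1*t from noisy samples, processed one at a time
w_true = np.array([1.0, 2.0])
w = np.zeros(2)                 # running estimate
P = 1e3 * np.eye(2)             # (scaled) inverse information matrix
for _ in range(200):
    t = rng.uniform(0, 1)
    phi = np.array([1.0, t])                     # regressor vector
    y = w_true @ phi + rng.normal(0, 0.05)
    # standard recursive least-squares update
    k = P @ phi / (1.0 + phi @ P @ phi)          # gain
    w = w + k * (y - phi @ w)
    P = P - np.outer(k, phi @ P)
```

Each new measurement refines the estimate without refitting the whole batch, which is the computational point of a recursive processor.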

  4. Change-point detection for recursive Bayesian geoacoustic inversions.

    PubMed

    Tan, Bien Aik; Gerstoft, Peter; Yardim, Caglar; Hodgkiss, William S

    2015-04-01

    In order to carry out geoacoustic inversion in low signal-to-noise ratio (SNR) conditions, extended duration observations coupled with source and/or receiver motion may be necessary. As a result, change in the underlying model parameters due to time or space is anticipated. In this paper, an inversion method is proposed for cases when the model parameters change abruptly or slowly. A model parameter change-point detection method is developed to detect the change in the model parameters using the importance samples and corresponding weights that are already available from the recursive Bayesian inversion. If the model parameters change abruptly, a change-point will be detected and the inversion will restart with the pulse measurement after the change-point. If the model parameters change gradually, the inversion (based on constant model parameters) may proceed until the accumulated model parameter mismatch is significant and triggers the detection of a change-point. These change-point detections form the heuristics for controlling the coherent integration time in recursive Bayesian inversion. The method is demonstrated in simulation with parameters corresponding to the low SNR, 100-900 Hz linear frequency modulation pulses observed in the Shallow Water 2006 experiment [Tan, Gerstoft, Yardim, and Hodgkiss, J. Acoust. Soc. Am. 136, 1187-1198 (2014)].
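
One common heuristic in this spirit is to monitor the effective sample size (ESS) of the importance weights that the recursive Bayesian inversion already produces, and to flag a change when the ESS collapses. The detector and threshold below are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np

def effective_sample_size(log_w):
    # normalized importance weights -> ESS = 1 / sum(w_i^2)
    w = np.exp(log_w - np.max(log_w))
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def detect_change(log_w, frac=0.1):
    # flag a possible model-parameter change when the ESS drops below a
    # fraction of the sample count (a heuristic degeneracy threshold)
    return effective_sample_size(log_w) < frac * len(log_w)

# well-matched model: near-uniform weights -> high ESS, no change flagged
ok = detect_change(np.zeros(1000))
# mismatched model: one sample dominates -> low ESS, change flagged
bad = detect_change(np.concatenate([[50.0], np.zeros(999)]))
```

When a change is flagged, the inversion would restart from the measurement after the change-point, as described in the abstract.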

  5. Modeling of Aircraft Unsteady Aerodynamic Characteristics/Part 3 - Parameters Estimated from Flight Data. Part 3; Parameters Estimated from Flight Data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Noderer, Keith D.

    1996-01-01

    A nonlinear least squares algorithm for aircraft parameter estimation from flight data was developed. The postulated model for the analysis represented longitudinal, short period motion of an aircraft. The corresponding aerodynamic model equations included indicial functions (unsteady terms) and conventional stability and control derivatives. The indicial functions were modeled as simple exponential functions. The estimation procedure was applied in five examples. Four of the examples used simulated and flight data from small amplitude maneuvers of the F-18 HARV and X-31A aircraft. In the fifth example a rapid, large amplitude maneuver of the X-31 drop model was analyzed. From data analysis of small amplitude maneuvers it was found that the model with conventional stability and control derivatives was adequate. Also, parameter estimation from a rapid, large amplitude maneuver did not reveal any noticeable presence of unsteady aerodynamics.
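
Fitting a simple exponential indicial function to response data is ordinary nonlinear least squares. A hedged sketch with a hypothetical first-order step response, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import curve_fit

def indicial(t, a, tau):
    # simple exponential indicial (step-response) function
    return a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 100)
y = indicial(t, 1.5, 0.7) + rng.normal(0, 0.02, t.size)   # synthetic data

popt, _ = curve_fit(indicial, t, y, p0=[1.0, 1.0])
a_hat, tau_hat = popt
```

The amplitude and time constant recovered here play the role of the unsteady-term parameters estimated alongside the conventional derivatives.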

  6. Least-squares sequential parameter and state estimation for large space structures

    NASA Technical Reports Server (NTRS)

    Thau, F. E.; Eliazov, T.; Montgomery, R. C.

    1982-01-01

    This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations. This allows for sequential accumulation of matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.

  7. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093
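
The smoothing-based two-stage pseudo-least squares baseline that the paper improves on can be sketched on a scalar decay ODE: smooth the data, differentiate the smoother, then regress the derivative on the model right-hand side. The polynomial smoother and all values below are illustrative assumptions, not the paper's constrained local polynomial estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# data from x' = -k*x, x(0) = 2, with measurement noise
k_true = 1.3
t = np.linspace(0, 2, 80)
x_obs = 2.0 * np.exp(-k_true * t) + rng.normal(0, 0.01, t.size)

# stage 1: smooth the data (a global polynomial stands in for the local
# polynomial smoother) and differentiate the fitted smoother
coef = np.polyfit(t, x_obs, deg=6)
x_hat = np.polyval(coef, t)
dx_hat = np.polyval(np.polyder(coef), t)

# stage 2: pseudo-least squares on the ODE x' = -k*x
k_hat = -np.sum(dx_hat * x_hat) / np.sum(x_hat ** 2)
```

The paper's contribution is to constrain stage 1 with the ODE itself rather than smoothing blindly, which reduces the bias this naive pipeline incurs.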

  8. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.

  9. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  10. Optimal waveform-based clutter suppression algorithm for recursive synthetic aperture radar imaging systems

    NASA Astrophysics Data System (ADS)

    Zhu, Binqi; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao

    2016-04-01

    A computational method for suppressing clutter and generating clear microwave images of targets is proposed in this paper. It combines synthetic aperture radar (SAR) principles with a recursive estimation method and waveform design theory, making it suitable for special-purpose SAR applications. The nonlinear recursive model is introduced into the SAR operation principle, and the cubature Kalman filter algorithm is used to estimate target and clutter responses in each azimuth position based on their previous states, both of which are assumed to be Gaussian distributed. NP criteria-based optimal waveforms are designed repeatedly as the sensor flies along its azimuth path and are used as the transmitting signals. A clutter suppression filter is then designed and added to suppress the clutter response while maintaining most of the target response. Thus, with fewer disturbances from the clutter response, we can generate the SAR image with traditional azimuth matched filters. Our simulations show that the clutter suppression filter significantly reduces the clutter response, and our algorithm greatly improves the SINR of the SAR image for different clutter suppression filter parameters. As such, this algorithm may be preferable for special target imaging when prior information on the target is available.

  11. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
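
The study works in R; for orientation, the first four unbiased sample L-moments can be computed from probability-weighted moments in a few lines, shown here in Python as a hedged stand-in for the R tooling. For a normal distribution, λ1 is the mean, λ2 = σ/√π, L-skew is 0, and L-kurtosis is approximately 0.1226.

```python
import numpy as np

def sample_lmoments(data):
    # unbiased sample L-moments via probability-weighted moments b0..b3
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(4)
l1, l2, l3, l4 = sample_lmoments(rng.normal(10.0, 2.0, 50000))
```

An MLM fit then inverts the map from distribution parameters to (λ1, λ2, τ3, τ4); the abstract concerns exactly that inversion for the AEP4.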

  12. EEG and MEG source localization using recursively applied (RAP) MUSIC

    SciTech Connect

    Mosher, J.C.; Leahy, R.M.

    1996-12-31

    The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization', are easily extracted using the associated principal vectors.
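
The principal-correlation metric at the heart of RAP MUSIC, the cosine of the smallest principal angle between a model subspace and the estimated signal subspace, reduces to an SVD of the product of orthonormal bases. A minimal sketch with hypothetical data (not real EEG/MEG lead fields):

```python
import numpy as np

def principal_correlation(A, S):
    # largest principal correlation (cosine of the smallest principal
    # angle) between span(A) and span(S)
    Qa, _ = np.linalg.qr(A)
    Qs, _ = np.linalg.qr(S)
    return np.linalg.svd(Qa.T @ Qs, compute_uv=False)[0]

rng = np.random.default_rng(5)
S = rng.normal(size=(8, 2))           # stand-in "signal subspace" basis
a_in = S @ np.array([0.3, -1.2])      # model vector lying inside the subspace
a_out = rng.normal(size=8)            # generic vector, mostly outside

c_in = principal_correlation(a_in[:, None], S)
c_out = principal_correlation(a_out[:, None], S)
```

Scanning a dipole model and peaking this metric, then projecting out found sources and repeating, is the recursion the abstract describes.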

  13. ESTIMATION OF PHYSICAL PROPERTIES AND CHEMICAL REACTIVITY PARAMETERS OF ORGANIC COMPOUNDS

    EPA Science Inventory

    The computer program SPARC (Sparc Performs Automated Reasoning in Chemistry) has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC uses computational algorithms ...

  14. PARAMETER ESTIMATION OF TWO-FLUID CAPILLARY PRESSURE-SATURATION AND PERMEABILITY FUNCTIONS

    EPA Science Inventory

    Capillary pressure and permeability functions are crucial to the quantitative description of subsurface flow and transport. Earlier work has demonstrated the feasibility of using the inverse parameter estimation approach in determining these functions if both capillary pressure ...

  15. Effect of Medium Symmetries on Limiting the Number of Parameters Estimated with Polarimetric SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1999-01-01

    The addition of interferometric backscattering pairs to the conventional polarimetric synthetic aperture radar (SAR) data over forests and other vegetated areas increases the dimensionality of the data space, in principle enabling the estimation of a larger number of parameters.

  16. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output error and frequency-domain equation error methods to demonstrate the effectiveness of the approach.

  17. Likelihood parameter estimation for calibrating a soil moisture using radar backscatter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assimilating soil moisture information contained in synthetic aperture radar imagery into land surface model predictions can be done using a calibration, or parameter estimation, approach. The presence of speckle, however, necessitates aggregating backscatter measurements over large land areas in or...

  18. A robust methodology for kinetic model parameter estimation for biocatalytic reactions.

    PubMed

    Al-Haque, Naweed; Santacoloma, Paloma A; Neto, Watson; Tufvesson, Pär; Gani, Rafiqul; Woodley, John M

    2012-01-01

    Effective estimation of parameters in biocatalytic reaction kinetic expressions is very important when building process models to enable evaluation of process technology options and alternative biocatalysts. The kinetic models used to describe enzyme-catalyzed reactions generally include several parameters, which are strongly correlated with each other. State-of-the-art methodologies such as nonlinear regression (using progress curves) or graphical analysis (using initial rate data, for example, the Lineweaver-Burk plot, Hanes plot or Dixon plot) often incorporate errors in the estimates and rarely lead to globally optimized parameter values. In this article, a robust methodology to estimate parameters for biocatalytic reaction kinetic expressions is proposed. The methodology determines the parameters in a systematic manner by exploiting the best features of several of the current approaches. The parameter estimation problem is decomposed into five hierarchical steps, where the solution of each of the steps becomes the input for the subsequent step to achieve the final model with the corresponding regressed parameters. The model is further used for validating its performance and determining the correlation of the parameters. The final model with the fitted parameters is able to describe both initial rate and dynamic experiments. Application of the methodology is illustrated with a case study using the ω-transaminase catalyzed synthesis of 1-phenylethylamine from acetophenone and 2-propylamine.
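
The contrast the authors draw, direct nonlinear regression versus linearizing plots, can be illustrated by fitting the Michaelis-Menten rate law directly (assuming SciPy is available; substrate levels and noise are hypothetical, not the paper's transaminase data):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # initial-rate law v = Vmax * S / (Km + S)
    return vmax * s / (km + s)

rng = np.random.default_rng(6)
s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # substrate levels
v = michaelis_menten(s, 2.0, 1.5) * (1 + rng.normal(0, 0.02, s.size))

popt, _ = curve_fit(michaelis_menten, s, v, p0=[1.0, 1.0])
vmax_hat, km_hat = popt
```

Fitting the rate law in its natural form avoids the error distortion that reciprocal transformations such as the Lineweaver-Burk plot introduce, which is part of the motivation for the hierarchical methodology.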

  19. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope's Random Error.

    PubMed

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key process for analyzing the properties of alpha stable noise. Three widely used estimation methods-quantile, empirical characteristic function (ECF) and logarithmic moment method-are analyzed in contrast with Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data fitted by an alpha stable distribution is better than that by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error.
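
The ECF idea can be sketched compactly: for a symmetric alpha-stable law, -log|φ(t)| = (γ|t|)^α, so α is a slope in log-log coordinates. The two-point estimator below is an illustrative simplification of the regression-based ECF method, checked on Gaussian data (a stable law with α = 2):

```python
import numpy as np

def ecf_alpha(samples, t0=1.0, t1=2.0):
    # empirical characteristic function evaluated at two points
    phi0 = np.abs(np.mean(np.exp(1j * t0 * samples)))
    phi1 = np.abs(np.mean(np.exp(1j * t1 * samples)))
    # for a symmetric alpha-stable law: -log|phi(t)| = (gamma*t)**alpha,
    # so alpha is the slope in log-log coordinates
    return (np.log(-np.log(phi1)) - np.log(-np.log(phi0))) / (np.log(t1) - np.log(t0))

rng = np.random.default_rng(7)
alpha_hat = ecf_alpha(rng.normal(0.0, 1.0, 50000))
```

A full ECF estimator regresses over many t values and also recovers γ, β, and δ; this two-point version only conveys the mechanism.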

  20. On the estimability of geodetic parameters with space-ground and space-space SVLBI observations

    NASA Astrophysics Data System (ADS)

    Wei, Erhu; Liu, Jingnan; Yan, Wei; Shi, Chuang

    2008-12-01

    Space Very Long Baseline Interferometry (SVLBI) is the only space technique that can directly interconnect the three main reference systems for geodesy and geodynamics. However, the estimable sequence of geodetic parameters, including nutation parameters, within the SVLBI mathematical model has not yet been determined. In this paper, using the mathematical model of space-ground SVLBI observations including the nutation parameters derived by WEI Erhu et al. (2008), the estimable parameter sequence is determined. The same study is done with space-space SVLBI observations. To study the standard deviation of nutation parameters estimated with space-ground SVLBI observations, a variance propagation model is derived, with which some numerical tests are done. Finally, the results are presented.

  1. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  2. A method for estimating both the solubility parameters and molar volumes of liquids

    NASA Technical Reports Server (NTRS)

    Fedors, R. F.

    1974-01-01

    Development of an indirect method of estimating the solubility parameter of high molecular weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.

  3. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  4. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including
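
The single-batch maximum-likelihood idea can be sketched in the simplest setting: under the usual assumptions, innovations (observation minus forecast) are N(0, σ_b² + σ_o²), so with a known background-error variance the ML estimate of the observation-error variance is closed form. All values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

sigma_b2, sigma_o2 = 0.5, 0.2          # background and observation error variances
# one batch of simultaneous innovations d = y_obs - H x_forecast
d = rng.normal(0.0, np.sqrt(sigma_b2 + sigma_o2), 5000)

# the ML estimate of the total innovation variance for a zero-mean
# Gaussian sample is the sample second moment; subtracting the known
# background part leaves the tunable observation-error parameter
total_var_ml = np.mean(d ** 2)
sigma_o2_hat = total_var_ml - sigma_b2
```

Re-running this on each new batch gives the continuously adjusted, state-dependent parameter estimates the abstract advocates.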

  5. Recursive sequences in first-year calculus

    NASA Astrophysics Data System (ADS)

    Krainer, Thomas

    2016-02-01

    This article provides ready-to-use supplementary material on recursive sequences for a second-semester calculus class. It equips first-year calculus students with a basic methodical procedure based on which they can conduct a rigorous convergence or divergence analysis of many simple recursive sequences on their own without the need to invoke inductive arguments as is typically required in calculus textbooks. The sequences that are accessible to this kind of analysis are predominantly (eventually) monotonic, but also certain recursive sequences that alternate around their limit point as they converge can be considered.
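
A typical sequence of the kind the article targets is a_1 = 1, a_{n+1} = sqrt(2 + a_n): increasing, bounded above by 2, hence convergent, with the limit L satisfying L = sqrt(2 + L), i.e. L = 2. A quick numerical check of that analysis:

```python
import math

# a_{n+1} = sqrt(2 + a_n), a_1 = 1: monotonically increasing and bounded
# above by 2, so it converges; the fixed point equation gives L = 2
a = 1.0
terms = [a]
for _ in range(20):
    a = math.sqrt(2.0 + a)
    terms.append(a)

monotone = all(x < y for x, y in zip(terms, terms[1:]))
limit = terms[-1]
```

The numerics confirm the monotone-bounded argument that the article's procedure lets students carry out rigorously.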

  6. A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.

    ERIC Educational Resources Information Center

    Newman, Isadore; And Others

    1979-01-01

    A Monte Carlo simulation was employed to determine the accuracy with which the shrinkage in R squared can be estimated by five different shrinkage formulas. The study dealt with the use of shrinkage formulas for various sample sizes, different R squared values, and different degrees of multicollinearity. (Author/JKS)
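
One classical formula in this family is Wherry's adjusted R²; a small sketch shows the qualitative behavior such simulations probe (heavy shrinkage for small samples with many predictors, negligible shrinkage for large samples). Which five formulas the study actually compared is not restated here.

```python
def wherry_adjusted_r2(r2, n, p):
    # Wherry's shrinkage formula: estimated population R^2 given the
    # sample R^2, sample size n, and number of predictors p
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# small sample, many predictors: R^2 = 0.50 shrinks hard ...
small = wherry_adjusted_r2(0.50, n=30, p=10)
# ... while a large sample barely shrinks at all
large = wherry_adjusted_r2(0.50, n=3000, p=10)
```
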

  7. Catchment tomography - An approach for spatial parameter estimation in catchment hydrology

    NASA Astrophysics Data System (ADS)

    Walther, Dorina; Kurtz, Wolfgang; Hendricks-Franssen, Harrie-Jan; Kollet, Stefan

    2016-04-01

    Though forecast accuracy of hydrological models has improved in the last decades due to the development of more powerful and distributed models, uncertainties in forcings and model parameters are still challenging issues limiting the forecast reliability. As the number of unknown model parameters is generally large for distributed models, batch calibration methods usually lead to different parameter sets resulting in the same model accuracy. Catchment tomography presents an approach to reduce this non-uniqueness problem in hydrological parameter estimation by applying a moving transmitter-receiver concept on a catchment. Radar based precipitation fields serve as the transmitters and stream water gauge observations, the receivers, are sequentially assimilated into the model. The integrated stream gauge signals are resolved by a joint state-parameter update with the Ensemble Kalman Filter. The uncertain parameters are continuously constrained by sequentially integrating new information. Forward simulations are performed with the variably saturated subsurface and overland flow model ParFlow, which has been coupled to the Parallel Data Assimilation Framework (PDAF). In a first step in developing the method, catchment tomography was applied in a synthetic study of a simplified two dimensional catchment with pure overland flow (no subsurface flow) to estimate the spatially distributed Manning's roughness coefficient. The roughness coefficient was distributed in two and four zones and was updated applying different real radar precipitation time series and different initial parameter distributions. The parameters were successfully estimated with only 64 realizations over a simulation period of 30 days with hourly state and parameter updates. The error in the ensemble mean estimated parameters was reduced from up to 500% to less than 4% for all zones of both scenarios, independent from the initial ensemble mean value, if an appropriate initial ensemble spread was applied.
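
The joint state-parameter update can be sketched with a stochastic ensemble Kalman filter on a toy scalar forward model standing in for ParFlow: each ensemble member carries the uncertain parameter, and the gauge observation updates state and parameter jointly through their sample covariance. Model, values, and ensemble size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

n_ens, r = 64, 0.05                     # ensemble size, obs error variance
theta_true = 0.7                        # e.g. a roughness-like parameter

def forward(theta):
    # toy forward model mapping the parameter to an observable "discharge"
    return 2.0 * theta + 0.1

# augmented ensemble: column 0 = predicted observable, column 1 = parameter
theta_ens = rng.normal(0.3, 0.3, n_ens)       # biased, spread-out prior
ens = np.column_stack([forward(theta_ens), theta_ens])

for _ in range(20):                            # sequential assimilation cycles
    y = forward(theta_true) + rng.normal(0, np.sqrt(r))   # noisy gauge obs
    # stochastic EnKF analysis on the augmented state
    A = ens - ens.mean(axis=0)
    C = A.T @ A / (n_ens - 1)                  # joint sample covariance
    K = C[:, 0] / (C[0, 0] + r)                # gain (obs = component 0)
    perturbed = y + rng.normal(0, np.sqrt(r), n_ens)
    ens += np.outer(perturbed - ens[:, 0], K)
    ens[:, 1] = ens[:, 1]                      # parameter persists between cycles
    ens[:, 0] = forward(ens[:, 1])             # re-run the forward model

theta_hat = ens[:, 1].mean()
```

The sample cross-covariance between parameter and predicted observation is what lets the gauge data pull the parameter ensemble toward the truth, the same mechanism as in the catchment-scale study.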

  8. Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Dai, Xiao-Xia; Feng, Yuan

    2015-12-01

    When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated via directly calculating the statistics of RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracies of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistence with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among the optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund by the China Scholarship Council (CSC), and the Oversea Academic Training Funds, and University of Electronic Science and Technology of China (UESTC).
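
The Bayesian-MCMC machinery can be sketched with a random-walk Metropolis sampler for the two parameters of a lognormal model (a stand-in for the paper's fluctuation models) under flat priors; the data, priors, and step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
data = rng.lognormal(mean=1.0, sigma=0.5, size=2000)   # synthetic "RCS" samples
logx = np.log(data)

def log_post(mu, sigma):
    # log-likelihood of lognormal data with flat priors (sigma > 0)
    if sigma <= 0:
        return -np.inf
    n = logx.size
    return -n * np.log(sigma) - np.sum((logx - mu) ** 2) / (2 * sigma ** 2)

# random-walk Metropolis over (mu, sigma)
mu, sigma = 0.0, 1.0
lp = log_post(mu, sigma)
chain = []
for _ in range(4000):
    mu_p = mu + rng.normal(0, 0.02)
    sigma_p = sigma + rng.normal(0, 0.02)
    lp_p = log_post(mu_p, sigma_p)
    if np.log(rng.uniform()) < lp_p - lp:      # accept/reject step
        mu, sigma, lp = mu_p, sigma_p, lp_p
    chain.append((mu, sigma))

burned = np.array(chain[2000:])                # discard burn-in
mu_hat, sigma_hat = burned.mean(axis=0)
```

Posterior means rather than raw sample statistics give the fitted parameters, which is the distinction the abstract credits for the improved fitting accuracy.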

  9. A hierarchical Bayesian approach for estimation of photosynthetic parameters of C(3) plants.

    PubMed

    Patrick, Lisa D; Ogle, Kiona; Tissue, David T

    2009-12-01

    We describe a hierarchical Bayesian (HB) approach to fitting the Farquhar et al. model of photosynthesis to leaf gas exchange data. We illustrate the utility of this approach for estimating photosynthetic parameters using data from desert shrubs. Unique to the HB method is its ability to simultaneously estimate plant- and species-level parameters, adjust for peaked or non-peaked temperature dependence of parameters, explicitly estimate the 'critical' intracellular [CO(2)] marking the transition between ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco) and ribulose-1,5-bisphosphate (RuBP) limitations, and use both light response and CO(2) response curve data to better inform parameter estimates. The model successfully predicted observed photosynthesis and yielded estimates of photosynthetic parameters and their uncertainty. The model with peaked temperature responses fit the data best, and inclusion of light response data improved estimates for day respiration (R(d)). Species differed in R(d25) (R(d) at 25 degrees C), maximum rate of electron transport (J(max25)), a Michaelis-Menten constant (K(c25)) and a temperature dependence parameter (DeltaS). Such differences could potentially reflect differential physiological adaptations to environmental variation. Plants differed in R(d25), J(max25), mesophyll conductance (g(m25)) and maximum rate of Rubisco carboxylation (V(cmax25)). These results suggest that plant- and species-level variation should be accounted for when applying the Farquhar et al. model in an inferential or predictive framework.

  10. Subsonic flight test evaluation of a propulsion system parameter estimation process for the F100 engine

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Gilyard, Glenn B.

    1992-01-01

    Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.

  11. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    SciTech Connect

    Wagener, T; Hogue, T; Schaake, J; Duan, Q; Gupta, H; Andreassian, V; Hall, A; Leavesley, G

    2006-05-08

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modelers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community and briefly states future directions.

  12. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model.

    PubMed

    Jiang, Shengyu; Wang, Chun; Weiss, David J

    2016-01-01

    Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root mean square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which N = 1000 was necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916
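
The recovery criteria named above (correlation between true and estimated parameters, bias, and root mean square error) are simple to compute; a minimal sketch with made-up parameter values:

```python
import numpy as np

def recovery_metrics(true_params, est_params):
    """Bias, RMSE, and correlation between true and estimated parameters."""
    true_params = np.asarray(true_params, dtype=float)
    est_params = np.asarray(est_params, dtype=float)
    err = est_params - true_params
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    corr = np.corrcoef(true_params, est_params)[0, 1]
    return bias, rmse, corr

# Illustrative check: estimates equal to truth plus a constant 0.1 offset
true_a = np.linspace(0.5, 2.5, 9)         # hypothetical item parameters
est_a = true_a + 0.1
bias, rmse, corr = recovery_metrics(true_a, est_a)
# a pure shift gives bias = rmse = 0.1 while correlation stays 1.0
```

The example illustrates why all three metrics are reported together: a constant estimation bias is invisible to the correlation criterion but shows up directly in bias and RMSE.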

  13. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  14. Evaluation of Structural Equation Mixture Models: Parameter Estimates and Correct Class Assignment

    ERIC Educational Resources Information Center

    Tueller, Stephen; Lubke, Gitta

    2010-01-01

    Structural equation mixture models (SEMMs) are latent class models that permit the estimation of a structural equation model within each class. Fitting SEMMs is illustrated using data from 1 wave of the Notre Dame Longitudinal Study of Aging. Based on the model used in the illustration, SEMM parameter estimation and correct class assignment are…

  15. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  16. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  17. ON ASYMPTOTIC DISTRIBUTION AND ASYMPTOTIC EFFICIENCY OF LEAST SQUARES ESTIMATORS OF SPATIAL VARIOGRAM PARAMETERS. (R827257)

    EPA Science Inventory

    Abstract

    In this article, we consider the least-squares approach for estimating parameters of a spatial variogram and establish consistency and asymptotic normality of these estimators under general conditions. Large-sample distributions are also established under a sp...

  18. Estimation of Kalman filter model parameters from an ensemble of tests

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Haley, D. R.; Levine, W.; Porter, D. W.; Vahlberg, C. J.

    1980-01-01

    A methodology for estimating initial mean and covariance parameters in a Kalman filter model from an ensemble of nonidentical tests is presented. In addition, the problem of estimating time constants and process noise levels is addressed. Practical problems such as developing and validating inertial instrument error models from laboratory test data or developing error models of individual phases of a test are generally considered.

  19. Bayesian and Frequentist Methods for Estimating Joint Uncertainty of Freundlich Adsorption Isotherm Fitting Parameters

    EPA Science Inventory

    In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...

  20. Estimating Stellar Fundamental Parameters Using PCA: Application to Early Type Stars of GES Data

    NASA Astrophysics Data System (ADS)

    Farah, W.; Gebran, M.; Paletou, F.; Blomme, R.

    2015-12-01

    This work addresses a procedure to estimate fundamental stellar parameters such as T_{eff}, log g, [Fe/H], and v sin i using a dimensionality reduction technique called principal component analysis (PCA), applied to a large database of synthetic spectra. This technique shows promising results for inverting stellar parameters of observed targets from the Gaia-ESO Survey (GES).

  1. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
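
The state-augmentation idea above, treating unknown parameters as additional states, can be sketched for the purely Gaussian case with an extended Kalman filter. The scalar system, drive term, and noise levels below are hypothetical, and the paper's Poisson component is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar system x[k+1] = a*x[k] + 0.5 + w[k], y[k] = x[k] + v[k], with the
# gain a unknown. Augment the state z = [x, a] so the filter estimates both;
# the augmented model is bilinear, so we linearize (EKF-style).
a_true = 0.85
x = 1.0
z = np.array([0.0, 0.5])           # initial guesses for [x, a]
P = np.eye(2)                      # initial covariance
Q = np.diag([0.01, 1e-5])          # process noise; tiny random walk on a
R = 0.04                           # measurement noise variance (0.2**2)

for _ in range(400):
    # simulate the true system and its measurement
    x = a_true * x + 0.5 + rng.normal(scale=0.1)
    y = x + rng.normal(scale=0.2)
    # predict: f(z) = [a*x + 0.5, a], with Jacobian F
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + 0.5, z[1]])
    P = F @ P @ F.T + Q
    # update with H = [1, 0] (only x is observed)
    H = np.array([1.0, 0.0])
    S = H @ P @ H + R
    K = P @ H / S
    z = z + K * (y - z[0])
    P = P - np.outer(K, H @ P)

x_hat, a_hat = z
```

After a few hundred measurements the second component of the augmented state settles near the true gain, which is the sense in which the filter "serves as an identifier" for the unknown parameter.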

  2. A novel cost function to estimate parameters of oscillatory biochemical systems

    PubMed Central

    2012-01-01

    Oscillatory pathways are among the most important classes of biochemical systems, with examples ranging from circadian rhythms to cell cycle maintenance. Mathematical modeling of these highly interconnected biochemical networks is needed to meet numerous objectives such as investigating, predicting and controlling the dynamics of these systems. Identifying the kinetic rate parameters is essential for fully modeling these and other biological processes. These kinetic parameters, however, are not usually available from measurements and most of them have to be estimated by parameter fitting techniques. One of the issues with estimating kinetic parameters in oscillatory systems is the irregularities in the least-squares (LS) cost function surface used to estimate these parameters, which are caused by the periodicity of the measurements. These irregularities result in numerous local minima, which limit the performance of even some of the most robust global optimization algorithms. We proposed a parameter estimation framework to address these issues that integrates temporal information with periodic information embedded in the measurements used to estimate these parameters. This periodic information is used to build a proposed cost function with better surface properties leading to fewer local minima and better performance of global optimization algorithms. We verified for three oscillatory biochemical systems that our proposed cost function results in an increased ability to estimate accurate kinetic parameters as compared to the traditional LS cost function. We combine this cost function with an improved noise removal approach that leverages periodic characteristics embedded in the measurements to effectively reduce noise. The results provide strong evidence on the efficacy of this noise removal approach over the previous commonly used wavelet hard-thresholding noise removal methods. This proposed optimization framework results in more accurate kinetic parameters that

  3. Recursive retrospective revaluation of causal judgments.

    PubMed

    Macho, Siegfried; Burkart, Judith

    2002-11-01

    Recursive causal evaluation is an iterative process in which the evaluation of a target cause, T, is based on the outcome of the evaluation of another cause, C, the evaluation of which itself depends on the evaluation of a 3rd cause, D. Retrospective revaluation consists of backward processing of information as indicated by the fact that the evaluation of T is influenced by subsequent information that is not concerned with T directly. Two experiments demonstrate recursive retrospective revaluation with contingency information presented in list format as well as with trial-by-trial acquisition. Existing associative models are unable to predict the results. The model of recursive causal disambiguation that conceptualizes the revaluation as a recursive process of disambiguation predicts the pattern of results correctly.

  4. Estimation of dispersion parameters from photographic density measurements on smoke puffs

    NASA Astrophysics Data System (ADS)

    Yassky, D.

    An extension is proposed of methods that use "optical boundaries" of smoke-plumes in order to estimate atmospheric dispersion parameters. Use is made here of some properties of photographic optics and concentration distributions of light absorbing puffs having no multiple scattering. An array of relative photometric densities, measured on a single photograph of a puff, is shown to be of use in numerical estimation of a puff's dispersive parameters. The proposed method's performance is evaluated by means of computer simulation which includes estimates of the influence of photogrammetric and photometric errors. Future experimental validation of the proposed method may introduce fast and inexpensive ways of obtaining extensive atmospheric dispersion data bases.

  5. Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.

    PubMed

    Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian

    2016-01-20

    This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.

  6. Method for implementation of recursive hierarchical segmentation on parallel computers

    NASA Technical Reports Server (NTRS)

    Tilton, James C. (Inventor)

    2005-01-01

    A method, computer-readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates when performing a convergence check.
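
The recursion-control scheme described above can be sketched as follows. The quadrant split and level values are placeholders; the actual segmentation step and the parallel dispatch machinery are elided:

```python
# Sketch of the recursion-control idea: divide an image into quadrants down
# to a bottom level, and record whether each call would be dispatched in
# parallel or run serially based on an intermediate level.

BOTTOM_LEVEL = 3        # where the recursive division stops
INTERMEDIATE_LEVEL = 2  # where parallel dispatch switches to serial

def split(region):
    """Split (x, y, w, h) into four quadrants."""
    x, y, w, h = region
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, w - hw, hh),
            (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]

def recursive_segment(region, level=0, log=None):
    if log is None:
        log = []
    mode = "parallel" if level < INTERMEDIATE_LEVEL else "serial"
    log.append((level, mode, region))
    if level >= BOTTOM_LEVEL:
        return log                   # bottom level: segment this section
    for sub in split(region):        # otherwise recurse into the quadrants
        recursive_segment(sub, level + 1, log)
    return log

calls = recursive_segment((0, 0, 64, 64))
```

For a 64x64 region this visits 1 + 4 + 16 + 64 = 85 sections, of which only the 5 calls above the intermediate level would be candidates for parallel dispatch.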

  7. An implementation of continuous genetic algorithm in parameter estimation of predator-prey model

    NASA Astrophysics Data System (ADS)

    Windarto

    2016-03-01

    The genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The main components of this algorithm are a population of chromosomes (individuals), parent selection, crossover to produce new offspring, and random mutation. In this paper, a continuous genetic algorithm was implemented to estimate parameters in a predator-prey model of Lotka-Volterra type. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) were held constant throughout the run of the algorithm. It was found that, with a suitably chosen mutation rate, the algorithm can estimate these parameters well.
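
A minimal sketch of such a continuous GA, here fitting only the prey-growth and predation parameters (a, b) of a Lotka-Volterra model with the remaining parameters held known. Population size, rates, ranges, and the Euler integrator are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(a, b, c=0.1, d=1.0, x0=20.0, y0=5.0, dt=0.01, steps=400):
    """Forward-Euler Lotka-Volterra: x' = (a - b*y)*x, y' = (c*x - d)*y."""
    x, y = np.float64(x0), np.float64(y0)
    traj = np.empty((steps, 2))
    with np.errstate(over="ignore", invalid="ignore"):
        for k in range(steps):
            traj[k] = x, y
            x, y = x + dt * (a - b * y) * x, y + dt * (c * x - d) * y
    return traj

target = simulate(1.0, 0.2)    # synthetic data; true (a, b) = (1.0, 0.2)

def fitness(ind):
    """Negative sum-of-squares misfit; -inf for diverged trajectories."""
    misfit = np.sum((simulate(ind[0], ind[1]) - target) ** 2)
    return -misfit if np.isfinite(misfit) else -np.inf

# Continuous GA: real-valued chromosomes, truncation selection,
# blend crossover, and a constant mutation rate.
pop = rng.uniform(0.05, 2.0, size=(40, 2))
for gen in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]   # keep the better half
    children = np.empty((20, 2))
    for i in range(20):
        p1 = parents[rng.integers(20)]
        p2 = parents[rng.integers(20)]
        w = rng.uniform()
        child = w * p1 + (1.0 - w) * p2            # blend crossover
        if rng.uniform() < 0.3:                    # constant mutation rate
            child = child + rng.normal(scale=0.05, size=2)
        children[i] = np.clip(child, 0.05, 2.0)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
a_hat, b_hat = best
```

Because the better half is always retained (elitism), the best misfit is non-increasing across generations, and with noiseless synthetic data the best individual settles near the generating parameters.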

  8. Parameter estimation scheme for fault detection and identification in dynamic systems

    SciTech Connect

    Dinca, L.; Aldemir, T.

    1996-12-31

    While several parameter estimation techniques have been proposed for fault detection and identification in dynamic systems, some difficulties in their implementation arise from (a) use of linear models to describe plant dynamics in a wide range of operating conditions, (b) accommodating noisy input data or random changes in system parameters, and (c) the need for extensive computational effort. This paper describes a parameter estimation technique that can alleviate these problems. The approach is described and illustrated using a third-order system describing temporal xenon oscillations.

  9. Is recursion language-specific? Evidence of recursive mechanisms in the structure of intentional action.

    PubMed

    Vicari, Giuseppe; Adenzato, Mauro

    2014-05-01

    In their 2002 seminal paper Hauser, Chomsky and Fitch hypothesize that recursion is the only human-specific and language-specific mechanism of the faculty of language. While debate focused primarily on the meaning of recursion in the hypothesis and on the human-specific and syntax-specific character of recursion, the present work focuses on the claim that recursion is language-specific. We argue that there are recursive structures in the domain of motor intentionality by way of extending John R. Searle's analysis of intentional action. We then discuss evidence from cognitive science and neuroscience supporting the claim that motor-intentional recursion is language-independent and suggest some explanatory hypotheses: (1) linguistic recursion is embodied in sensory-motor processing; (2) linguistic and motor-intentional recursions are distinct and mutually independent mechanisms. Finally, we propose some reflections about the epistemic status of HCF as presenting an empirically falsifiable hypothesis, and on the possibility of testing recursion in different cognitive domains. PMID:24762973

  11. Sparsity Constrained Mixture Modeling for the Estimation of Kinetic Parameters in Dynamic PET

    PubMed Central

    Lin, Yanguang; Haldar, Justin P.; Li, Quanzheng; Conti, Peter S.; Leahy, Richard M.

    2013-01-01

    The estimation and analysis of kinetic parameters in dynamic PET are frequently confounded by tissue heterogeneity and partial volume effects. We propose a new constrained model of dynamic PET to address these limitations. The proposed formulation incorporates an explicit mixture model in which each image voxel is represented as a mixture of different pure tissue types with distinct temporal dynamics. We use Cramér-Rao lower bounds to demonstrate that the use of prior information is important to stabilize parameter estimation with this model. As a result, we propose a constrained formulation of the estimation problem that we solve using a two-stage algorithm. In the first stage, a sparse signal processing method is applied to estimate the rate parameters for the different tissue compartments from the noisy PET time series. In the second stage, tissue fractions and the linear parameters of different time activity curves are estimated using a combination of spatial-regularity and fractional mixture constraints. A block coordinate descent algorithm is combined with a manifold search to robustly estimate these parameters. The method is evaluated with both simulated and experimental dynamic PET data. PMID:24216681

  12. Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Weaver, Aaron S.

    2003-01-01

    Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *) as measured using constant stress rate ('dynamic fatigue') testing were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using closed form approximate equations derived from propagation of errors.
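
The propagation-of-errors calculation can be illustrated on a simple derived quantity and checked against Monte Carlo simulation, in the spirit of the comparison above. The function f(a, b) = exp(a)/b and the input uncertainties are hypothetical stand-ins, not the actual crack-growth expressions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Derived quantity f(a, b) = exp(a) / b with independent normal inputs;
# the functional form and uncertainties are illustrative stand-ins.
a0, b0 = 1.2, 2.0
sa, sb = 0.05, 0.04          # small input standard deviations

def f(a, b):
    return np.exp(a) / b

# First-order propagation of errors:
# Var(f) ~= (df/da)^2 * sa^2 + (df/db)^2 * sb^2
dfda = np.exp(a0) / b0               # df/da = f
dfdb = -np.exp(a0) / b0 ** 2         # df/db = -f/b
var_prop = (dfda * sa) ** 2 + (dfdb * sb) ** 2

# Monte Carlo reference
a = rng.normal(a0, sa, 200_000)
b = rng.normal(b0, sb, 200_000)
var_mc = f(a, b).var()
```

With small coefficients of variation the first-order variance agrees with the Monte Carlo estimate to within a few percent; as the input scatter grows (or the distribution of f becomes skewed, as for B and A* above), the agreement degrades, which is the paper's point.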

  13. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.

  14. Variational Bayesian framework for estimating parameters of integrated E/MEG and fMRI model

    NASA Astrophysics Data System (ADS)

    Babajani-Feremi, Abbas; Bowyer, Susan; Moran, John; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2009-02-01

    The integrated analysis of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) is instrumental for functional neuroimaging of the brain. A bottom-up integrated E/MEG and fMRI model based on physiology, as well as a method for estimating its parameters, are keys to the integrated analysis. We propose the variational Bayesian expectation maximization (VBEM) method to estimate parameters of our proposed integrated model. The VBEM method iteratively optimizes a lower bound on the marginal likelihood. An iteration of the VBEM consists of two steps: a variational Bayesian expectation step implemented using the extended Kalman smoother (EKS) and the posterior probability of the parameters in the previous step, and a variational Bayesian maximization step to estimate the posterior distributions of the parameters. For a given external stimulus, a variety of multi-area models can be considered in which the number of areas and the configuration and strength of connections between the areas are different. The proposed VBEM method can be used to select an optimal model as well as estimate its parameters. The efficiency of the proposed VBEM method is illustrated using simulation and real datasets. The proposed VBEM method can also be used to estimate parameters of other non-linear dynamical systems. This study proposes an effective method to integrate E/MEG and fMRI and plans to use these techniques in functional neuroimaging.

  15. Parameter estimation and control for a neural mass model based on the unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Liu, Xian; Gao, Qing

    2013-10-01

    Recent progress in Kalman filters to estimate states and parameters in nonlinear systems has provided the possibility of applying such approaches to neural systems. We here apply the nonlinear method of unscented Kalman filters (UKFs) to observe states and estimate parameters in a neural mass model that can simulate distinct rhythms in electroencephalography (EEG), including dynamical evolution during epilepsy seizures. We demonstrate the efficiency of the UKF in estimating states and parameters. We also develop a UKF-based control strategy to modulate the dynamics of the neural mass model. In this strategy the UKF plays the role of observing states, and the control law is constructed via the estimated states. We demonstrate the feasibility of using such a strategy to suppress epileptiform spikes in the neural mass model.

  16. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
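
Repeated Newton-Raphson iteration on a likelihood equation, as described above, can be sketched on a simple one-parameter stand-in model (a Poisson regression with a single coefficient, not the NDMMF itself):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in model (not the NDMMF): y_i ~ Poisson(exp(theta * x_i)).
x = np.linspace(0.0, 2.0, 200)
theta_true = 0.7
y = rng.poisson(np.exp(theta_true * x))

def score(theta):
    """First derivative of the Poisson log-likelihood in theta."""
    return np.sum(x * (y - np.exp(theta * x)))

def hessian(theta):
    """Second derivative; always negative, so the likelihood is concave."""
    return -np.sum(x ** 2 * np.exp(theta * x))

# Repeated Newton-Raphson iterations on the score equation score(theta) = 0
theta = 0.0
for _ in range(25):
    step = score(theta) / hessian(theta)
    theta -= step
    if abs(step) < 1e-10:
        break

theta_hat = theta
```

For a multi-parameter model such as the NDMMF, `score` becomes a gradient vector and `hessian` a matrix, and the update solves a linear system at each iteration; the iteration structure is otherwise the same.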

  17. Parameter Estimation of a Ground Moving Target Using Image Sharpness Optimization.

    PubMed

    Yu, Jing; Li, Yaan

    2016-06-30

    Motion parameter estimation of a ground moving target is an important issue in synthetic aperture radar ground moving target indication (SAR-GMTI), which has significant civilian and military applications. The SAR image of a moving target may be displaced and defocused due to the radial and along-track velocity components, respectively. The sharpness cost function provides a measure of the degree of focus of the image. In this work, a new ground moving target parameter estimation algorithm based on the sharpness optimization criterion is proposed. The relationships between the quadratic phase errors and the target's velocity components are derived. Using two-dimensional searching of the sharpness cost function, we can obtain the velocity components of the target and the focused target image simultaneously. The proposed moving target parameter estimation method and image sharpness metrics are analyzed in detail. Finally, numerical results illustrate the effective and superior velocity estimation performance of the proposed method when compared to existing algorithms.
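
The sharpness-optimization idea, maximizing an image-sharpness metric over a two-dimensional velocity search, can be sketched on a toy refocusing model. The Gaussian-blur stand-in below (blur width growing with velocity mismatch) is illustrative, not the paper's SAR signal model:

```python
import numpy as np

# Toy refocusing model: a point target blurred by a Gaussian whose width
# grows with the mismatch between trial and true velocity components.
# The sharpness metric sum(I**2) peaks when the trial matches the truth.
grid = np.arange(64)

def refocus(v_radial, v_along, v_true=(3.0, -1.0)):
    width = 1.0 + 4.0 * np.hypot(v_radial - v_true[0], v_along - v_true[1])
    img = np.exp(-((grid - 32.0) ** 2) / (2.0 * width ** 2))
    return img / img.sum()           # normalize: defocus only spreads energy

def sharpness(img):
    return np.sum(img ** 2)

# Two-dimensional search of the sharpness cost function
vr_axis = np.linspace(0.0, 6.0, 61)      # radial velocity candidates
va_axis = np.linspace(-3.0, 3.0, 61)     # along-track velocity candidates
scores = np.array([[sharpness(refocus(vr, va)) for va in va_axis]
                   for vr in vr_axis])
i, j = np.unravel_index(np.argmax(scores), scores.shape)
vr_hat, va_hat = vr_axis[i], va_axis[j]
```

Because a focused image concentrates energy into fewer pixels, the sum-of-squares sharpness is maximal exactly at the matched velocity pair, which the grid search recovers along with the focused image itself.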

  18. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.

  20. Non-linear Parameter Estimates from Non-stationary MEG Data

    PubMed Central

    Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth

    2016-01-01

    We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815