Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
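The correction described above can be mimicked with a sandwich-form covariance built from the estimated residual autocovariance. A minimal sketch, assuming a scalar exponential model, AR(1)-colored noise, and a truncated-lag autocovariance estimate (not the authors' exact expression):

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)

def model(theta, t):
    return theta[0] * np.exp(-theta[1] * t)

# Simulate a response with AR(1) (colored) measurement noise.
theta_true = np.array([2.0, 0.8])
e = np.zeros_like(t)
for i in range(1, t.size):
    e[i] = 0.9 * e[i - 1] + 0.05 * rng.standard_normal()
y = model(theta_true, t) + e

fit = least_squares(lambda th: model(th, t) - y, x0=[1.0, 1.0])
S, r = fit.jac, fit.fun          # output sensitivities and residuals
n, p = S.shape

# Residual autocovariance, truncated at a maximum lag for stability.
max_lag = 50
acov = np.zeros(n)
for k in range(max_lag + 1):
    acov[k] = np.dot(r[:n - k], r[k:]) / n
R_hat = toeplitz(acov)           # estimated residual covariance matrix

M_inv = np.linalg.inv(S.T @ S)
cov_white = (r @ r / (n - p)) * M_inv             # usual white-noise formula
cov_colored = M_inv @ S.T @ R_hat @ S @ M_inv     # colored-residual sandwich

print("white-noise std errors:   ", np.sqrt(np.diag(cov_white)))
print("colored-residual std errs:", np.sqrt(np.diag(cov_colored)))
```

With strongly autocorrelated residuals the sandwich standard errors are noticeably larger than the white-noise formula suggests, which is the effect the paper's correction accounts for.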
Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects
2017-02-22
Report AFRL-AFOSR-UK-TR-2017-0023; Marco Martorella; grant FA9550-14-1-0183.
Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2015-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning displacement and load analysis in the knee joint.
Reliability analysis of structural ceramic components using a three-parameter Weibull distribution
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois
1992-01-01
Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
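For comparison, a three-parameter Weibull fit is straightforward with standard tools; a minimal sketch using maximum likelihood via scipy rather than the paper's nonlinear regression estimators, with made-up strength data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic fracture-strength data (MPa): shape, threshold (location), scale.
true_shape, true_loc, true_scale = 7.0, 300.0, 400.0
strengths = stats.weibull_min.rvs(true_shape, loc=true_loc,
                                  scale=true_scale, size=60, random_state=rng)

# Maximum likelihood fit of the three-parameter Weibull distribution.
shape, loc, scale = stats.weibull_min.fit(strengths)
print(f"shape={shape:.2f}  threshold={loc:.1f}  scale={scale:.1f}")

# Reliability (survival probability) at an applied stress of 500 MPa.
print("R(500 MPa) =", stats.weibull_min.sf(500.0, shape, loc=loc, scale=scale))
```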
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.
Munir, Mohammad
2018-06-01
Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
Performance in population models for count data, part II: a new SAEM algorithm
Savic, Radojka; Lavielle, Marc
2009-01-01
Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009). PMID:19680795
Bridging the gaps between non-invasive genetic sampling and population parameter estimation
Francesca Marucco; Luigi Boitani; Daniel H. Pletscher; Michael K. Schwartz
2011-01-01
Reliable estimates of population parameters are necessary for effective management and conservation actions. The use of genetic data for capture-recapture (CR) analyses has become an important tool to estimate population parameters for elusive species. Strong emphasis has been placed on the genetic analysis of non-invasive samples, or on the CR analysis; however,...
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
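The first finding, that restoration with predetermined hyper-parameters is invariant to averaging, follows because the average is a sufficient statistic once the noise level is rescaled. A toy 1D Gaussian MRF sketch, with an assumed chain Laplacian prior and made-up noise level and smoothness weight:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_copies, sigma, lam = 128, 8, 0.5, 4.0

# Smooth "true" signal and independent noisy copies of it.
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
Y = x_true + sigma * rng.standard_normal((n_copies, n))

# Chain-graph Laplacian encoding the MRF smoothness prior.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

def posterior_mean(y_bar, m, sigma, lam):
    # Gaussian MRF posterior mean given m noisy copies with mean y_bar:
    # minimize m*||x - y_bar||^2 / sigma^2 + lam * x^T L x.
    A = (m / sigma**2) * np.eye(n) + lam * L
    return np.linalg.solve(A, (m / sigma**2) * y_bar)

# Using all copies jointly vs. using their average with rescaled noise level:
x_stack = posterior_mean(Y.mean(axis=0), n_copies, sigma, lam)
x_avg = posterior_mean(Y.mean(axis=0), 1, sigma / np.sqrt(n_copies), lam)
print("max difference:", np.abs(x_stack - x_avg).max())   # ~0: invariant
```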
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
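The core idea, augmenting the state vector with the uncertain parameter so the ensemble update corrects both, can be shown on a toy scalar model. A minimal sketch with a hypothetical decay parameter standing in for a convection parameter; it is not the CM2.1 setup:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n_steps, n_ens = 0.1, 100, 50
a_true, obs_sd = 0.5, 0.05

# Truth run of dx/dt = -a*x, with noisy observations of x only.
x, obs = 1.0, []
for _ in range(n_steps):
    x += dt * (-a_true * x)
    obs.append(x + obs_sd * rng.standard_normal())

# Augmented ensemble: columns are the state x and the parameter a.
ens = np.column_stack([
    1.0 + 0.1 * rng.standard_normal(n_ens),   # x
    rng.uniform(0.1, 1.5, n_ens),             # a (persistent parameter)
])

for y in obs:
    # Forecast: propagate x with each member's own parameter value.
    ens[:, 0] += dt * (-ens[:, 1] * ens[:, 0])
    # Analysis: ensemble Kalman update; only x is observed.
    anom = ens - ens.mean(axis=0)
    P = anom.T @ anom / (n_ens - 1)
    H = np.array([[1.0, 0.0]])
    K = P @ H.T / (H @ P @ H.T + obs_sd**2)    # 2x1 Kalman gain
    perturbed = y + obs_sd * rng.standard_normal(n_ens)
    ens += (perturbed - ens[:, 0])[:, None] * K.ravel()

print(f"estimated a = {ens[:, 1].mean():.3f}  (truth {a_true})")
```

Because the parameter has no dynamics of its own, it is corrected purely through its ensemble correlation with the observed state, which is the mechanism the study exploits for the convection parameters.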
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
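A minimal sketch of an orthogonal-projection identifiability ranking of the kind referred to above, assuming a precomputed sensitivity matrix and an arbitrary stopping tolerance; the UKF coupling in the paper is not reproduced:

```python
import numpy as np

def orthogonal_ranking(Z, names, tol=0.1):
    """Rank identifiable parameters with an orthogonal-projection method.

    Z: sensitivity matrix (rows = observations, columns = parameters),
       assumed already scaled by nominal parameter values.
    Selection stops when the best remaining column is nearly contained
    in the span of the columns already selected (residual norm < tol).
    """
    selected, remaining = [], list(range(Z.shape[1]))
    while remaining:
        def resid_norm(j):
            if not selected:
                return np.linalg.norm(Z[:, j])
            X = Z[:, selected]
            beta, *_ = np.linalg.lstsq(X, Z[:, j], rcond=None)
            return np.linalg.norm(Z[:, j] - X @ beta)
        j_best = max(remaining, key=resid_norm)
        if resid_norm(j_best) < tol:
            break
        selected.append(j_best)
        remaining.remove(j_best)
    return [names[j] for j in selected]

rng = np.random.default_rng(4)
s1, s2 = rng.standard_normal(50), rng.standard_normal(50)
# Third column is almost a linear combination of the first two.
Z = np.column_stack([s1, s2, 0.6 * s1 - 0.4 * s2 + 1e-3 * rng.standard_normal(50)])
print(orthogonal_ranking(Z, ["p1", "p2", "p3"]))  # expect only p1 and p2
```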
ERIC Educational Resources Information Center
Marcoulides, Katerina M.
2018-01-01
This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods were examined. Overall results showed that…
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API)
The UA/SA/PE API (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
Veronese, Mattia; Schmidt, Kathleen C; Smith, Carolyn Beebe; Bertoldo, Alessandra
2012-06-01
A spectral analysis approach was used to estimate kinetic parameters of the L-[1-(11)C]leucine positron emission tomography (PET) method and regional rates of cerebral protein synthesis (rCPS) on a voxel-by-voxel basis. Spectral analysis applies to both heterogeneous and homogeneous tissues; it does not require prior assumptions concerning the number of tissue compartments. Parameters estimated with spectral analysis can be strongly affected by noise, but numerical filters improve estimation performance. Spectral analysis with iterative filter (SAIF) was originally developed to improve estimation of leucine kinetic parameters and rCPS in region-of-interest (ROI) data analyses. In the present study, we optimized SAIF for application at the voxel level. In measured L-[1-(11)C]leucine PET data, voxel-level SAIF parameter estimates averaged over all voxels within a ROI (mean voxel-SAIF) generally agreed well with corresponding estimates derived by applying the originally developed SAIF to ROI time-activity curves (ROI-SAIF). Region-of-interest-SAIF and mean voxel-SAIF estimates of rCPS were highly correlated. Simulations showed that mean voxel-SAIF rCPS estimates were less biased and less variable than ROI-SAIF estimates in the whole brain and cortex; biases were similar in white matter. We conclude that estimation of rCPS with SAIF is improved when the method is applied at the voxel level rather than at the ROI level.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
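Statement (3) can be illustrated with the closed-form pulse solution of the advection-dispersion equation with first-order decay, where the sensitivity to the decay rate peaks after the mean front arrival. A sketch with assumed transport parameters:

```python
import numpy as np

def conc(x, t, v=1.0, D=0.1, lam=0.05, m=1.0):
    """1D pulse solution of the advection-dispersion equation with
    first-order decay (unit cross-section, instantaneous unit source)."""
    return (m / np.sqrt(4 * np.pi * D * t)
            * np.exp(-(x - v * t) ** 2 / (4 * D * t))
            * np.exp(-lam * t))

x_obs, lam = 5.0, 0.05
t = np.linspace(0.1, 20, 400)

# Finite-difference sensitivity of concentration to the decay parameter.
dlam = 1e-6
sens = (conc(x_obs, t, lam=lam + dlam) - conc(x_obs, t, lam=lam - dlam)) / (2 * dlam)

print("mean travel time to x_obs:", x_obs / 1.0)
print("time of peak |dc/dlam|:  ", t[np.argmax(np.abs(sens))])
# The sensitivity peaks after the mean front arrival, echoing the finding
# that decay-rate estimation needs observations late in the front's passage.
```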
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
Ito, Hiroshi; Ikoma, Yoko; Seki, Chie; Kimura, Yasuyuki; Kawaguchi, Hiroshi; Takuwa, Hiroyuki; Ichise, Masanori; Suhara, Tetsuya; Kanno, Iwao
2017-05-01
Objectives: In PET studies for neuroreceptors, tracer kinetics are described by the two-tissue compartment model (2-TCM), and binding parameters, including the total distribution volume (V_T), non-displaceable distribution volume (V_ND), and binding potential (BP_ND), can be determined from model parameters estimated by kinetic analysis. The stability of binding parameter estimates depends on the kinetic characteristics of radioligands. To describe these kinetic characteristics, we previously developed a two-phase graphic plot analysis in which V_ND and V_T can be estimated from the x-intercepts of regression lines for the early and delayed phases, respectively. In this study, we applied this graphic plot analysis to visual evaluation of the kinetic characteristics of radioligands for neuroreceptors, and investigated a relationship between the shape of these graphic plots and the stability of binding parameters estimated by the kinetic analysis with 2-TCM in simulated brain tissue time-activity curves (TACs) with various binding parameters. Methods: 90-min TACs were generated with the arterial input function and assumed kinetic parameters according to 2-TCM. Graphic plot analysis was applied to these simulated TACs, and the curvature of the plot for each TAC was evaluated visually. TACs with several noise levels were also generated with various kinetic parameters, and the bias and variation of binding parameters estimated by kinetic analysis were calculated for each TAC. These bias and variation values were compared with the shape of the graphic plots. Results: The graphic plots showed larger curvature for TACs with higher specific binding and slower dissociation of specific binding. The quartile deviations of V_ND and BP_ND determined by kinetic analysis were smaller for radioligands with slow dissociation. Conclusions: The larger curvature of graphic plots for radioligands with slow dissociation might indicate a stable determination of V_ND and BP_ND by kinetic analysis. For investigation of the kinetics of radioligands, such kinetic characteristics should be considered.
Šimůnek, Jirka; Nimmo, John R.
2005-01-01
A modified version of the Hydrus software package that can directly or inversely simulate water flow in a transient centrifugal field is presented. The inverse solver is then applied to estimate soil hydraulic parameters from multirotation transient flow experiments in a centrifuge. Using time-variable water contents measured at a sequence of several rotation speeds, soil hydraulic properties were successfully estimated by numerical inversion of the transient experiments. The inverse method was then evaluated by comparing estimated soil hydraulic properties with those determined independently using an equilibrium analysis. The optimized soil hydraulic properties compared well with those determined using the equilibrium analysis and a steady-state experiment. Multirotation experiments in a centrifuge not only offer significant time savings but also provide significantly more information for the parameter estimation procedure compared to multistep outflow experiments in a gravitational field.
Multivariate meta-analysis with an increasing number of parameters
Boca, Simina M.; Pfeiffer, Ruth M.; Sampson, Joshua N.
2017-01-01
Meta-analysis can average estimates of multiple parameters, such as a treatment’s effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between-study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing random effects MVMA over fixed-effect MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for non-Hodgkin lymphoma. PMID:28195655
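The fixed-effect MVMA estimator referred to above is an inverse-covariance-weighted average of the study-level parameter vectors. A minimal sketch with two hypothetical studies and two correlated outcomes:

```python
import numpy as np

def fixed_effect_mvma(estimates, covariances):
    """Fixed-effect multivariate meta-analysis by inverse-covariance weighting.

    estimates:   list of length-p parameter vectors, one per study.
    covariances: list of p x p within-study covariance matrices.
    Returns the pooled estimate and its covariance.
    """
    W_sum = np.zeros_like(covariances[0])
    Wy_sum = np.zeros(len(estimates[0]))
    for y, S in zip(estimates, covariances):
        W = np.linalg.inv(S)
        W_sum += W
        Wy_sum += W @ y
    cov = np.linalg.inv(W_sum)
    return cov @ Wy_sum, cov

# Two studies, two correlated outcomes each (illustrative numbers).
y1, S1 = np.array([0.30, 0.50]), np.array([[0.04, 0.02], [0.02, 0.09]])
y2, S2 = np.array([0.20, 0.40]), np.array([[0.05, 0.01], [0.01, 0.05]])
pooled, pooled_cov = fixed_effect_mvma([y1, y2], [S1, S2])
print("pooled estimate:  ", pooled)
print("pooled std errors:", np.sqrt(np.diag(pooled_cov)))
```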
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model had to be created and landing tests had to be simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to be a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are introduced to white noise and when crosswind is introduced to the simulation. The linear analysis shows the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to have an effect of less than 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously up to +/-27%. At worst case, the maximum percentage change in average coefficient of friction is less than 10% for all surfaces.
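The static-equilibrium premise reduces, along the longitudinal axis alone, to solving the ground-roll force balance for the friction coefficient. A simplified sketch that ignores moments and lateral forces and uses hypothetical force values, not the six-degree-of-freedom model of the study:

```python
def friction_coefficient(mass, a_x, thrust, drag, lift, g=9.81):
    """Instantaneous rolling-friction coefficient from longitudinal
    force equilibrium during ground roll:
        m * a_x = thrust - drag - mu * N,   with N = m * g - lift.
    Inputs may be scalars or arrays of FDR-sampled time histories."""
    normal = mass * g - lift
    return (thrust - drag - mass * a_x) / normal

# Illustrative single sample during a landing rollout (SI units).
mu = friction_coefficient(mass=60000.0,     # kg
                          a_x=-2.5,         # m/s^2 (deceleration)
                          thrust=10000.0,   # N, residual idle/reverse thrust
                          drag=30000.0,     # N, aerodynamic drag + spoilers
                          lift=100000.0)    # N, decaying after touchdown
print(f"instantaneous mu = {mu:.2f}")
```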
Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath
NASA Astrophysics Data System (ADS)
Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping
2018-04-01
Estimation of the phase and weight parameters of a two-state system in a squeezed bath by calculating quantum Fisher information is investigated. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing proper values of the phases and the squeezing parameter. These results can serve as a reference for the practical application of parameter estimation in a squeezed bath.
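For the noiseless pure-state case (the squeezed-bath dynamics of the paper are not modeled here), the quantum Fisher information of a two-state superposition with respect to its phase can be checked numerically against the analytic value sin^2(theta). A short sketch:

```python
import numpy as np

def qfi_pure(psi, dpsi):
    """Quantum Fisher information of a pure state |psi(phi)>:
       F = 4 * (<dpsi|dpsi> - |<psi|dpsi>|^2)."""
    return 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2)

def state(theta, phi):
    # Two-state (qubit) superposition with weight theta and phase phi.
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

theta, phi, h = np.pi / 3, 0.7, 1e-6
dpsi = (state(theta, phi + h) - state(theta, phi - h)) / (2 * h)
print("numerical QFI:        ", qfi_pure(state(theta, phi), dpsi))
print("analytic sin^2(theta):", np.sin(theta) ** 2)
```

By the quantum Cramér-Rao bound, the variance of any unbiased phase estimate from n repetitions is at least 1/(n * F), so larger QFI means better attainable precision.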
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented in real time onboard an aircraft.
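The recursive Fourier transform at the heart of the method accumulates one term of the finite Fourier integral per sample, so the spectra are available at every time step. A minimal sketch with assumed frequencies and sample rate:

```python
import numpy as np

class RecursiveFourier:
    """Running discrete Fourier transform at a fixed set of frequencies,
    updated one sample at a time (suitable for real-time equation-error
    parameter estimation in the frequency domain)."""

    def __init__(self, freqs_hz, dt):
        self.w = 2 * np.pi * np.asarray(freqs_hz)
        self.dt = dt
        self.t = 0.0
        self.X = np.zeros(self.w.size, dtype=complex)

    def update(self, sample):
        # X(w) <- X(w) + x(t) * exp(-j w t) * dt   (rectangular rule)
        self.X += sample * np.exp(-1j * self.w * self.t) * self.dt
        self.t += self.dt
        return self.X

# Feed a 2 Hz sinusoid through the recursive transform.
dt = 0.01
rft = RecursiveFourier(freqs_hz=[1.0, 2.0, 3.0], dt=dt)
for k in range(500):
    X = rft.update(np.sin(2 * np.pi * 2.0 * k * dt))
print("|X| at 1, 2, 3 Hz:", np.abs(X).round(2))  # energy concentrates at 2 Hz
```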
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k_3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k_3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and no required choice of starting parameter values, 2) parameter estimates comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k_3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k_3 from noisy dynamic PET data.
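The linearize-then-NNLS idea can be shown on the simpler one-tissue compartment model (the paper uses 2- and 3-compartment models): integrating the state equation makes it linear in the rate constants, which non-negative least squares then recovers without starting values. A sketch with made-up kinetics:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.integrate import cumulative_trapezoid

# Simulated one-tissue compartment kinetics: dCt/dt = K1*Cp - k2*Ct.
t = np.linspace(0, 60, 241)                      # minutes
Cp = 5.0 * t * np.exp(-t / 2.0)                  # toy plasma input function
K1_true, k2_true = 0.3, 0.1
Ct = np.zeros_like(t)
for i in range(1, t.size):                       # simple Euler integration
    dt = t[i] - t[i - 1]
    Ct[i] = Ct[i - 1] + dt * (K1_true * Cp[i - 1] - k2_true * Ct[i - 1])
Ct_noisy = Ct + 0.02 * Ct.max() * np.random.default_rng(5).standard_normal(t.size)

# Integrated (linearized) form: Ct(t) = K1*Int(Cp) - k2*Int(Ct).
A = np.column_stack([cumulative_trapezoid(Cp, t, initial=0),
                     -cumulative_trapezoid(Ct_noisy, t, initial=0)])
coef, _ = nnls(A, Ct_noisy)                      # non-negative least squares
print(f"K1 = {coef[0]:.3f} (true {K1_true}),  k2 = {coef[1]:.3f} (true {k2_true})")
```

Negating the second design-matrix column keeps both fitted coefficients non-negative, so the physical constraint K1, k2 >= 0 is enforced by the solver itself.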
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
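A variance-decomposition screening of this kind can be sketched with the SALib package, assuming a cheap surrogate function in place of the TEB/SURFEX runs; the parameter names, bounds and surrogate below are illustrative only:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-in for the urban scheme: a cheap surrogate of three
# input parameters (a real analysis would run TEB/SURFEX instead).
problem = {
    "num_vars": 3,
    "names": ["roof_albedo", "road_thermal_cond", "soil_moisture"],
    "bounds": [[0.05, 0.5], [0.5, 2.5], [0.05, 0.4]],
}

def surrogate_heat_flux(x):
    a, k, w = x
    return 300 * (1 - a) + 40 * np.log(k) - 500 * w + 200 * a * w

X = saltelli.sample(problem, 1024)           # Sobol' quasi-random design
Y = np.apply_along_axis(surrogate_heat_flux, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:20s} first-order {s1:5.2f}   total {st:5.2f}")
```

First-order indices measure each parameter's direct contribution to output variance; the gap between total and first-order indices flags interaction effects.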
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of an FTC system based on estimated fault parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.
HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.
Schroeder, Mark J.; Perreault, Bill; Ewert, Daniel L.; Koenig, Steven C.
2004-07-01
A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ascii or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework toward which Good Laboratory Practice (GLP) compliance can be obtained. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. This study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
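A random walk Metropolis sampler is a few lines of code; the sketch below uses a hypothetical one-genotype linear phenology surrogate and a flat prior, not the nine models or trial data of the study:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy phenology data: days to heading vs. seasonal mean temperature (made up).
tt_obs = np.array([48., 52., 50., 55., 47., 51.])     # observed days
temps = np.array([16., 14., 15., 13., 17., 15.])      # mean temperature, C

def model(theta):
    base, slope = theta
    return base + slope * (15.0 - temps)

def log_post(theta, sigma=1.5):
    if theta[1] < 0:                                   # weak prior: slope >= 0
        return -np.inf
    resid = tt_obs - model(theta)
    return -0.5 * np.sum(resid**2) / sigma**2          # flat prior otherwise

# Random-walk Metropolis sampling of the posterior.
theta, lp, chain = np.array([50.0, 1.0]), None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.3, 0.2])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:           # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                         # drop burn-in
print("posterior means:", chain.mean(axis=0).round(2))
print("posterior 95% CI:", np.percentile(chain, [2.5, 97.5], axis=0).round(2))
```

Retaining the whole chain, rather than a point estimate, is what gives access to the full posterior distributions the study emphasizes.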
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, Ed; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
Sensitivity analysis of add-on price estimate for select silicon wafering technologies
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1982-01-01
The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivity for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
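For the one-compartment IV bolus case the conversion is closed-form, which is the flavor of calculation the BA spreadsheet automates (the two-compartment case needs the Solver iteration described above). A sketch with made-up values:

```python
import numpy as np

def one_compartment_from_nca(dose, auc, t_half):
    """Convert non-compartmental variables to one-compartment parameters
    (IV bolus): clearance, elimination rate constant, and volume."""
    cl = dose / auc                  # clearance
    k = np.log(2) / t_half           # elimination rate constant
    v = cl / k                       # volume of distribution
    return cl, k, v

cl, k, v = one_compartment_from_nca(dose=500.0, auc=50.0, t_half=4.0)
print(f"CL = {cl:.2f} L/h,  k = {k:.3f} 1/h,  V = {v:.1f} L")
```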
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
Dynamic State Estimation and Parameter Calibration of DFIG based on Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Rui; Huang, Zhenyu; Wang, Shaobu
2015-07-30
With the growing interest in the application of wind energy, the doubly fed induction generator (DFIG) plays an essential role in the industry nowadays. To deal with the increasing stochastic variations introduced by intermittent wind resources and responsive loads, dynamic state estimation (DSE) is introduced in power systems with DFIGs. However, this dynamic analysis can fail when the DFIG parameters are not sufficiently accurate. To solve this problem, an ensemble Kalman filter (EnKF) method is proposed for the state estimation and parameter calibration tasks. In this paper, a DFIG is modeled and implemented with the EnKF method. Sensitivity analysis is demonstrated with respect to measurement noise, initial state errors, and parameter errors. The results indicate that this EnKF method has robust performance on the state estimation and parameter calibration of DFIGs.
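A minimal sketch of the joint state-parameter EnKF idea described above, on a scalar toy system rather than a DFIG model; the dynamics, noise levels, and ensemble size are illustrative assumptions. The parameter is appended to the state vector and updated through its ensemble correlation with the observed state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth: x_{k+1} = a*x_k + w, observed y_k = x_k + v; estimate x and a jointly.
a_true, q, r, n_steps, n_ens = 0.9, 0.05, 0.1, 200, 100
x = 1.0
ens = np.vstack([rng.normal(0.0, 1.0, n_ens),    # state members
                 rng.normal(0.5, 0.3, n_ens)])   # parameter members
H = np.array([[1.0, 0.0]])                       # we observe the state only

for _ in range(n_steps):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r))
    # Forecast: propagate each member with its own parameter value
    ens[0] = ens[1] * ens[0] + rng.normal(0, np.sqrt(q), n_ens)
    # Analysis: Kalman update of the augmented (state, parameter) vector
    P = np.cov(ens)                              # 2x2 ensemble covariance
    K = P @ H.T / (H @ P @ H.T + r)              # Kalman gain (2x1)
    innov = y + rng.normal(0, np.sqrt(r), n_ens) - ens[0]   # perturbed obs
    ens += K * innov
    # (no artificial parameter noise is added, so the parameter spread
    #  slowly collapses; practical schemes add inflation or smoothing)

print(f"estimated a = {ens[1].mean():.3f} (true {a_true})")
```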
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
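The gamma prior case admits a closed-form posterior, and the mean-squared-error comparison can be reproduced by a small Monte Carlo experiment along the lines described above; the prior hyperparameters and the true rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bayes_poisson_mean(x, alpha, beta):
    """Posterior mean of the Poisson rate under a Gamma(alpha, rate=beta) prior.

    With x_1..x_n ~ Poisson(lam) and lam ~ Gamma(alpha, beta), the posterior
    is Gamma(alpha + sum(x), beta + n); its mean is the Bayes estimator
    under squared-error loss.
    """
    return (alpha + np.sum(x)) / (beta + len(x))

# Monte Carlo MSE comparison against the MLE (the sample mean)
lam_true, n, reps, alpha, beta = 2.0, 10, 5000, 2.0, 1.0
bayes_err, mle_err = [], []
for _ in range(reps):
    x = rng.poisson(lam_true, n)
    bayes_err.append((bayes_poisson_mean(x, alpha, beta) - lam_true) ** 2)
    mle_err.append((x.mean() - lam_true) ** 2)
print(f"MSE Bayes: {np.mean(bayes_err):.4f}, MSE MLE: {np.mean(mle_err):.4f}")
```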
Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System
Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao
2016-01-01
Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm makes full use of the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike and toe-on event detection, and calculation of temporal gait parameters. Experimental results show that, with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average F1-measure of 0.955 for footstep detection, an average accuracy of 94.52% for heel-strike detection, and 94.25% for toe-on detection. Using these detection results, nine temporal gait parameters are calculated, and these parameters are consistent with their corresponding normal gait temporal parameters and with calculations from labeled data. The results verify the effectiveness of the proposed system and algorithm for temporal gait parameter estimation.
Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie
2016-03-01
Previous studies showed that vascular parameters, both morphological and topological, are affected by changes in imaging resolution. However, neither the sensitivity of vascular parameters across multiple resolutions nor the distinguishability of vascular parameters between different data groups has been discussed. In this paper, we propose a quantitative analysis method for the vascular parameters of multi-resolution vascular networks, analyzing the sensitivity of the parameters at multiple resolutions and estimating their distinguishability between data groups. Combining sensitivity and distinguishability, we design a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction, and segment number were five parameters that could better distinguish vascular networks from different groups and remained consistent with the ground truth. Vascular area, connectivity density, vascular length, and segment number were not only insensitive to resolution but also better distinguished vascular networks from different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.
Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process
NASA Technical Reports Server (NTRS)
Mokashi, A. R.; Kachare, A. H.
1981-01-01
The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.
NASA Astrophysics Data System (ADS)
Adha, Kurniawan; Yusoff, Wan Ismail Wan; Almanna Lubis, Luluan
2017-10-01
Determining pore pressure and overpressure zones is a compulsory part of oil and gas exploration: the data enhance safety and profitability and help prevent drilling hazards. Investigating thermophysical parameters such as temperature and thermal conductivity can improve pore pressure estimation for determining the overpressure mechanism. Since these parameters depend on rock properties, an abnormality in pore pressure may be reflected as changes in the thermophysical parameter columns. The study was conducted in the "MRI 1" well offshore Sarawak, where a new approach was designed to determine the origin of overpressure. To support the velocity analysis method with thermophysical parameters, petrophysical analysis was also carried out. Four thermal facies were identified along the well. Overpressure developed below thermal facies 4, where the pressure reached 38 MPa and the temperature increased significantly. Cross plots of velocity and thermal conductivity show a linear relationship, since both parameters are mainly functions of rock compaction: as the rock becomes more compact, the particles are brought into closer contact, sound waves travel faster, and the thermal conductivity increases. In addition, the increase in temperature and the high heat flow indicate the presence of a fluid expansion mechanism. Shale sonic velocity and density analysis are the common methods for determining the overpressure mechanism and estimating pore pressure; adding thermophysical parameters for delineating the overpressure zone enhances the current method, which otherwise relies on velocity analysis alone. Thermophysical analysis thus improves the understanding of the overpressure mechanism by providing new input parameters, and the integration of thermophysical techniques with velocity analysis will be important for investigating overpressure mechanisms and estimating pore pressure in future oil and gas exploitation.
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to recover the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of gamma-quanta families independent of the degree to which the gamma quanta overlap, and it attains the maximum admissible accuracy in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds by an order of magnitude the accuracies obtained by integral methods.
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero errors in the other system model parameters.
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement of Lockformer's Chapter 11 bankruptcy protection involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction for path effects, site response, and instrument response. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods have been proposed that exploit the redundancy of event-station pairs, using stacking techniques to obtain systematic source parameter estimates for a large number of events at once. This allows large quantities of events to be examined systematically, facilitating analysis of spatial-temporal patterns and scaling relationships; however, it is unclear how much resolution is sacrificed in the process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also introduces biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimation and the associated problems.
The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.
ERIC Educational Resources Information Center
Ethington, Corinna A.
1987-01-01
This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical variables. The analysis of mixed matrices produced estimates that closely approximated the model parameters except where dichotomous variables were…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF is designed: (1) to estimate model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) to mitigate dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and to control the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) to assimilate data recursively into the model and thus detect possible time variation of parameters; and (4) to properly address the various sources of uncertainty stemming from inputs, outputs, and parameters. The SEnKF is tested by assimilating observed carbon dioxide fluxes and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes.
Informed spectral analysis: audio signal parameter estimation using side information
NASA Astrophysics Data System (ADS)
Fourer, Dominique; Marchand, Sylvain
2013-12-01
Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound gives the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, and it may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and used to assist the analysis process. This study applies the approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems such as active listening of music, source separation, and realistic sound transformations.
Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models that describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics.
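The core computational idea, obtaining the gradient of a least-squares objective with respect to ODE parameters at a cost essentially independent of the parameter count, can be sketched with JAX's odeint, whose gradients are computed by a continuous adjoint. The toy reaction network and synthetic data below are illustrative assumptions, not the ErbB model from the paper.

```python
import jax.numpy as jnp
from jax import grad
from jax.experimental.ode import odeint

# Toy reaction network: A -> B -> C with rate parameters theta = (k1, k2)
def rhs(y, t, theta):
    k1, k2 = theta
    a, b, c = y
    return jnp.array([-k1 * a, k1 * a - k2 * b, k2 * b])

ts = jnp.linspace(0.0, 5.0, 20)
y0 = jnp.array([1.0, 0.0, 0.0])
theta_true = jnp.array([1.2, 0.4])
data = odeint(rhs, y0, ts, theta_true)   # synthetic time-discrete observations

def loss(theta):
    pred = odeint(rhs, y0, ts, theta)
    return jnp.sum((pred - data) ** 2)

# grad(loss) is evaluated via the adjoint of the ODE, so its cost does not
# grow with the number of parameters the way forward sensitivities do.
g = grad(loss)(jnp.array([1.0, 0.5]))
print(g)
```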
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
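A minimal sketch of the kind of frequency-domain least-squares fit described above, assuming a second-order transfer function and synthetic frequency-response data; the model order, parameter values, and weighting are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed second-order model: H(s) = K * wn^2 / (s^2 + 2*z*wn*s + wn^2)
def tf(params, w):
    K, wn, z = params
    s = 1j * w
    return K * wn**2 / (s**2 + 2 * z * wn * s + wn**2)

# Synthetic frequency response "measurements" with multiplicative noise
w = np.linspace(1.0, 200.0, 100)   # rad/s
true = [1.0, 60.0, 0.7]
H_meas = tf(true, w) * (1 + 0.02 * np.random.default_rng(2).standard_normal(100))

def cost(params, weights=1.0):
    # Weighted squared magnitude of the complex frequency-response error
    err = tf(params, w) - H_meas
    return np.sum(weights * np.abs(err) ** 2)

# Quasi-Newton minimization of the weighted frequency-response error
fit = minimize(cost, x0=[0.5, 40.0, 0.5], method="BFGS")
print(fit.x)   # should be close to [1.0, 60.0, 0.7]
```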
Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali
Anisotropy analysis has become an important step in the processing and interpretation of seismic data. One of the most important steps in anisotropy analysis is anisotropy parameter estimation, which can be based on well data, core data, or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis; however, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation from seismic data is needed to obtain wide coverage of the anisotropy of a particular layer. In an anisotropic reservoir, analysis of anisotropy parameters also helps us better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock properties and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study from different disciplines is needed to understand the reservoir. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on the relationship between reservoir properties, such as clay content, porosity, and total organic content, and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as a solid background, a solid inclusion, or both. The forward modeling results show that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties, such as acoustic impedance and Vp/Vs, are also presented.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Anderson-Cook, Christine Michaela
2017-03-01
Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+)) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic) have been modeled and analyzed. The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes, giving useful insights into the quality and reliability of the design of sustainable processes.
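The uncertainty analysis step described above, deriving a parameter covariance, standard errors, and confidence intervals from a least-squares fit, follows a standard recipe that can be sketched on a hypothetical linear model (not the GC(+) property models themselves): the covariance is the residual variance times the inverse of the Jacobian cross-product.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import t as t_dist

rng = np.random.default_rng(3)

# Hypothetical model y = b0 + b1*x fitted to noisy data
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)

res = least_squares(lambda b: b[0] + b[1] * x - y, x0=[1.0, 1.0])
dof = x.size - res.x.size
s2 = 2 * res.cost / dof                          # residual variance (cost = 0.5*SSR)
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)    # parameter covariance
se = np.sqrt(np.diag(cov))                       # standard errors
half = t_dist.ppf(0.975, dof) * se               # 95% confidence half-widths
for b, h in zip(res.x, half):
    print(f"{b:.3f} +/- {h:.3f}")
```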
NASA Astrophysics Data System (ADS)
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura
2016-07-01
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process are functionally dependent upon the autocorrelation parameter and the crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time can be calculated.
Conclusion of LOD-score analysis for family data generated under two-locus models.
Dizier, M H; Babron, M C; Clerget-Darpoux, F
1996-06-01
The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in the estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.
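For reference, the two-point LOD score underlying such an analysis compares the likelihood at a candidate recombination fraction theta with the likelihood under free recombination (theta = 0.5); a minimal sketch for the simplest phase-known case, with illustrative counts:

```python
import numpy as np

def lod_score(recombinants, meioses, theta):
    """Two-point LOD score for phase-known informative meioses.

    LOD(theta) = log10[ L(theta) / L(0.5) ], with
    L(theta) = theta^r * (1 - theta)^(n - r).
    """
    r, n = recombinants, meioses
    log_l = r * np.log10(theta) + (n - r) * np.log10(1 - theta)
    log_l0 = n * np.log10(0.5)
    return log_l - log_l0

# Illustrative data: 2 recombinants in 20 informative meioses
for theta in [0.05, 0.1, 0.2, 0.3, 0.4]:
    print(f"theta = {theta:.2f}: LOD = {float(lod_score(2, 20, theta)):.2f}")
```

Misspecified penetrance parameters enter through the likelihood of the trait given genotype, which is why the MG-parameter misspecification discussed above can bias the estimated recombination fraction.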
Hierarchical models and the analysis of bird survey information
Sauer, J.R.; Link, W.A.
2003-01-01
Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
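A minimal sketch of the normal-normal hierarchical model described above, with a Gibbs sampler standing in for the MCMC procedure; the trend estimates, their sampling variances, and the priors are illustrative assumptions, not the Breeding Bird Survey data. The final line computes a derived statistic of the kind mentioned in the abstract: the posterior probability that each species' trend is positive.

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed trend estimates and their sampling variances, one per species
est = np.array([-1.2, 0.4, -0.8, 2.1, -0.3, 1.5, -2.0, 0.9])
v = np.full(est.size, 0.5 ** 2)

n_iter, mu, tau2 = 4000, 0.0, 1.0
theta_draws = np.empty((n_iter, est.size))
for it in range(n_iter):
    # Species-level trends given (mu, tau2): conjugate normal update
    prec = 1 / v + 1 / tau2
    mean = (est / v + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # Hyperparameters given the trends (flat prior on mu, vague prior on tau2)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / est.size))
    tau2 = 1 / rng.gamma((est.size - 1) / 2, 2 / np.sum((theta - mu) ** 2))
    theta_draws[it] = theta

# Posterior probability that each species' population is increasing
print(np.round((theta_draws[1000:] > 0).mean(axis=0), 2))
```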
Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability
ERIC Educational Resources Information Center
von Oertzen, Timo; Boker, Steven M.
2010-01-01
This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as helping in process control, reducing process costs, and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimal parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing it to be an important procedure.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level; the changing rate of the potential market has no significant influence, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis provides a complete picture of the interactions among all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameter values are imprecise and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
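The histogram technique evaluated above amounts to estimating the conditional mean E[target | input parameters] by binning; the irreducible error is then the mean squared deviation of the target from that conditional mean. A minimal one-parameter sketch (the function and noise level are assumptions), where the spurious histogram contribution discussed in the paper stays small because a single input is binned densely:

```python
import numpy as np

rng = np.random.default_rng(5)

# Target q depends on input p plus noise that no model using only p can
# explain; the irreducible error is the variance of q around E[q | p].
n = 100_000
p = rng.uniform(-1, 1, n)
q = np.sin(np.pi * p) + 0.1 * rng.standard_normal(n)

# Histogram-based optimal estimator: conditional mean within each bin
bins = np.linspace(-1, 1, 51)
idx = np.digitize(p, bins) - 1
counts = np.bincount(idx, minlength=50)
cond_mean = np.bincount(idx, weights=q, minlength=50) / counts

irreducible = np.mean((q - cond_mean[idx]) ** 2)
print(f"irreducible error ~ {irreducible:.4f} (true noise variance 0.01)")
```

With several input parameters, the number of bins grows exponentially and each bin empties out, which is the regime where the paper finds histograms unreliable and regression-based estimators (neural networks, MARS) preferable.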
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
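A minimal output-error sketch of the maximum likelihood idea: simulate the model response for trial parameters, form the negative log-likelihood of the measured response, and minimize it over the parameters and the noise variance. The first-order system, input, and noise level below are illustrative assumptions, not the aircraft models from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Simulated "flight data" from a first-order system y_dot = -a*y + b*u
dt, n = 0.05, 400
t = np.arange(n) * dt
u = np.sign(np.sin(0.5 * t))            # doublet-like control input

def simulate(a, b):
    y = np.zeros(n)
    for k in range(n - 1):              # forward Euler integration
        y[k + 1] = y[k] + dt * (-a * y[k] + b * u[k])
    return y

z = simulate(2.0, 1.0) + 0.05 * rng.standard_normal(n)   # measurements

def neg_log_like(theta):
    a, b, log_r = theta
    res = z - simulate(a, b)
    r = np.exp(log_r)                   # measurement noise variance
    return 0.5 * np.sum(res**2) / r + 0.5 * n * np.log(r)

fit = minimize(neg_log_like, x0=[1.0, 0.5, np.log(0.01)], method="BFGS")
print(fit.x[:2], np.exp(fit.x[2]))      # estimates of a, b, noise variance
```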
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that uses directly the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced mode of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
Parameter estimation accuracies of Galactic binaries with eLISA
NASA Astrophysics Data System (ADS)
Błaut, Arkadiusz
2018-09-01
We study parameter estimation accuracy of nearly monochromatic sources of gravitational waves with the future eLISA-like detectors. eLISA will be capable of observing millions of such signals generated by orbiting pairs of compact binaries consisting of white dwarf, neutron star or black hole and to resolve and estimate parameters of several thousands of them providing crucial information regarding their orbital dynamics, formation rates and evolutionary paths. Using the Fisher matrix analysis we compare accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team established to asses the scientific capabilities and the technological issues of the eLISA-like missions.
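A minimal sketch of the Fisher-matrix machinery for a nearly monochromatic signal: build numerical derivatives of the waveform with respect to the parameters, assemble F, and read 1-sigma accuracies from the square roots of the diagonal of its inverse. The waveform, sampling, and noise model below are simplified assumptions (white noise rather than an eLISA noise curve, and no sky-location or orbital-modulation parameters).

```python
import numpy as np

# Monochromatic signal model h(t; A, f, phi) sampled at times t,
# with white noise of standard deviation sigma in each sample
t = np.linspace(0, 1000.0, 5000)
sigma = 1.0

def h(params):
    A, f, phi = params
    return A * np.cos(2 * np.pi * f * t + phi)

def fisher(params, eps=1e-7):
    """F_ij = sum_k (dh/dtheta_i)(dh/dtheta_j) / sigma^2, central differences."""
    n = len(params)
    d = []
    for i in range(n):
        dp = np.array(params, float)
        dm = dp.copy()
        step = eps * max(abs(dp[i]), 1.0)
        dp[i] += step
        dm[i] -= step
        d.append((h(dp) - h(dm)) / (2 * step))
    return np.array([[np.sum(di * dj) for dj in d] for di in d]) / sigma**2

F = fisher([1.0, 0.01, 0.3])
acc = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma accuracies [dA, df, dphi]
print(acc)
```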
Estimating the Geocenter from GNSS Observations
NASA Astrophysics Data System (ADS)
Dach, Rolf; Michael, Meindl; Beutler, Gerhard; Schaer, Stefan; Lutz, Simon; Jäggi, Adrian
2014-05-01
The satellites of the Global Navigation Satellite Systems (GNSS) orbit the Earth according to the laws of celestial mechanics. As a consequence, the satellites are sensitive to the coordinates of the center of mass of the Earth. The coordinates of the (ground) tracking stations refer to the center of figure as the conventional origin of the reference frame. The difference between the center of mass and the center of figure is the instantaneous geocenter. Following this definition, global GNSS solutions are sensitive to the geocenter. Several studies demonstrated strong correlations of the GNSS-derived geocenter coordinates with parameters intended to absorb radiation pressure effects acting on the GNSS satellites, and with GNSS satellite clock parameters. One should thus pose the question to what extent these satellite-related parameters absorb (or hide) the geocenter information. A clean simulation study has been performed to answer this question. The simulation environment allows one, in particular, to introduce user-defined shifts of the geocenter (systematic inconsistencies between the satellites' and stations' reference frames). These geocenter shifts may be recovered by the mentioned parameters, provided they were set up in the analysis. If the geocenter coordinates are not estimated, one may find out which other parameters absorb the user-defined shifts of the geocenter and to what extent. Furthermore, the simulation environment also allows one to extract the correlation matrix from the a posteriori covariance matrix to study the correlations between different parameter types of the GNSS analysis system. Our results show high degrees of correlation between geocenter coordinates, orbit-related parameters, and satellite clock parameters. These correlations are of the same order of magnitude as the correlations between station heights, troposphere parameters, and receiver clock parameters in any regional or global GNSS network analysis. If such correlations are accepted in a GNSS analysis when estimating station coordinates, geocenter coordinates must be considered mathematically estimable in a global GNSS analysis. The geophysical interpretation may of course become difficult, e.g., if insufficient orbit models are used.
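Extracting a correlation matrix from an a posteriori covariance matrix, as mentioned above, is a one-line normalization by the parameter standard deviations; a small sketch with an illustrative 3x3 covariance:

```python
import numpy as np

def correlation_from_covariance(cov):
    """Convert an a posteriori covariance matrix into a correlation matrix."""
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

# Illustrative covariance for three estimated parameters
cov = np.array([[4.0, 1.2, -0.6],
                [1.2, 1.0,  0.3],
                [-0.6, 0.3, 0.25]])
print(np.round(correlation_from_covariance(cov), 2))
```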
On the robustness of a Bayes estimate. [in reliability theory
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlations between different traits, which significantly improves the accuracy of the genetic parameter estimates. However, multi-trait analysis is generally computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of multivariate animal models using modified Cholesky decompositions. This reparametrization-based approach is used within the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of a multivariate animal model. Immediate benefits are: (1) the difficulty of finding good starting values for the analysis, which can be a problem for example in Restricted Maximum Likelihood (REML), is avoided; and (2) Bayesian estimation of (co)variance components using INLA is faster than using Markov chain Monte Carlo (MCMC), especially when the realized relationship matrices are dense. The slight drawback is that priors for the covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML, and we present results obtained from simulated data sets with replicates and from field data in rice.
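One common form of such a modified Cholesky reparametrization (the exact variant used in the paper is not specified here, so treat this as an illustrative assumption) maps an unconstrained parameter vector to a valid covariance matrix by filling the lower triangle of a Cholesky factor and exponentiating its diagonal, so that any real-valued parameters yield a positive-definite matrix:

```python
import numpy as np

def covariance_from_log_cholesky(params, dim):
    """Build a symmetric positive-definite matrix from unconstrained parameters.

    The diagonal of the Cholesky factor is exponentiated, so every real
    parameter vector of length dim*(dim+1)/2 maps to a valid covariance,
    which is the kind of matrix on whose Cholesky elements priors are set.
    """
    L = np.zeros((dim, dim))
    L[np.tril_indices(dim)] = params
    L[np.diag_indices(dim)] = np.exp(np.diag(L))
    return L @ L.T

# 3x3 genetic covariance from 6 unconstrained numbers (illustrative values)
theta = np.array([0.1, 0.3, -0.2, 0.5, 0.0, -0.4])
G = covariance_from_log_cholesky(theta, 3)
print(np.linalg.eigvalsh(G))   # all positive by construction
```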
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling, and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
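For orientation, here is a minimal sketch of classical FAST for first-order (main-effect) indices only (Python; the integer frequencies, sample size, and number of harmonics are illustrative choices, and the paper's interaction-effect estimation is not reproduced).

import numpy as np

def fast_first_order(model, omegas, N=1025, harmonics=4):
    """Classical FAST: sample along a space-filling search curve and read
    main-effect partial variances off the Fourier spectrum of the output."""
    s = 2.0 * np.pi * np.arange(N) / N
    X = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi   # curve in [0,1]^d
    y = np.apply_along_axis(model, 1, X)
    F = np.fft.rfft(y) / N
    power = 2.0 * np.abs(F[1:]) ** 2          # spectrum without the mean term
    S = []
    for w in omegas:
        idx = [w * h - 1 for h in range(1, harmonics + 1) if w * h <= power.size]
        S.append(power[idx].sum() / power.sum())
    return np.array(S)

# Example with incommensurate frequencies for two parameters:
print(fast_first_order(lambda x: x[0] + 4.0 * x[1] ** 2, omegas=[11, 35]))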
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique will be applied to modeled dust data. In this case, vertically integrated dust concentrations were used to derive wind information. These results can be compared to the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
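A bare-bones version of the motion-from-structure-tensor idea reads as follows (Python with NumPy/SciPy; the 5-sample averaging window and the eigenvector-based velocity read-out are generic textbook choices, not the framework developed in the study).

import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor_flow(seq, window=5):
    """Estimate a velocity field from an image sequence seq[t, y, x]
    via the smallest eigenvector of the 3-D structure tensor."""
    It, Iy, Ix = np.gradient(seq.astype(float))
    g = [Ix, Iy, It]
    J = np.empty(seq.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = uniform_filter(g[i] * g[j], size=window)
    w, v = np.linalg.eigh(J)            # eigenvalues in ascending order
    e = v[..., :, 0]                    # eigenvector of smallest eigenvalue
    # Brightness constancy Ix*vx + Iy*vy + It = 0 gives (vx, vy) = (ex, ey)/et.
    with np.errstate(divide="ignore", invalid="ignore"):
        vx = e[..., 0] / e[..., 2]
        vy = e[..., 1] / e[..., 2]
    return vx, vy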
Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia
NASA Astrophysics Data System (ADS)
Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica
2017-01-01
We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic, and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish which model parameters are the most important for the success or failure of leukemia remission under treatment, using a sensitivity analysis of the model parameters. For the parameters that most affect the evolution of CML during Imatinib treatment, we estimate realistic values using experimental data. For these parameters, steady states are calculated and their stability is analyzed and interpreted biologically.
Tsubakita, Takashi; Shimazaki, Kazuyo; Ito, Hiroshi; Kawazoe, Nobuo
2017-10-30
The Utrecht Work Engagement Scale for Students has been used internationally to assess students' academic engagement, but it has not previously been analyzed via item response theory. The purpose of this study was to conduct an item response theory analysis of the Japanese version of the Utrecht Work Engagement Scale for Students, translated by the authors. Using a two-parameter model and Samejima's graded response model, difficulty and discrimination parameters were estimated after confirming the factor structure of the scale. The 14 items of the scale were analyzed with a sample of 3214 university and college students majoring in medical science, nursing, or natural science in Japan. Preliminary parameter estimation with the two-parameter model indicated that three items should be removed because of outlier parameters. Final parameter estimation, conducted on the remaining 11 items, indicated that all difficulty and discrimination parameters were acceptable. The test information curve suggested that the scale assesses high engagement better than average engagement. The estimated parameters provide a basis for future comparative studies. The results also suggested that a 7-point Likert scale is too broad; the scaling should therefore be modified to a structure with fewer response categories.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation, and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters, we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
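The generic profile-likelihood recipe the authors build on can be sketched in a few lines (Python/SciPy; the objective nll, the optimizer choice, and the grid are placeholders, not the paper's PDE-constrained machinery): fix one parameter on a grid and re-optimize all the others at each grid point.

import numpy as np
from scipy.optimize import minimize

def profile_likelihood(nll, theta_hat, index, grid):
    """Profile one parameter: re-optimize the remaining ones at each grid value."""
    prof = []
    for val in grid:
        def nll_fixed(free):
            theta = np.insert(free, index, val)   # re-insert the fixed parameter
            return nll(theta)
        free0 = np.delete(theta_hat, index)
        res = minimize(nll_fixed, free0, method="Nelder-Mead")
        prof.append(res.fun)
    return np.array(prof)   # compare against a chi-square threshold for bounds

# Toy example: quadratic negative log-likelihood with correlated parameters.
nll = lambda th: 0.5 * (th[0] ** 2 + 2.0 * (th[1] - th[0]) ** 2)
print(profile_likelihood(nll, np.array([0.0, 0.0]), 0, np.linspace(-1, 1, 5)))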
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
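In linearized form, the kind of estimator studied here can be sketched as a ridge-type regression that shrinks toward prior parameter values (Python; this particular formula and the synthetic data are illustrative assumptions on my part, and the paper's MSE-based criteria for choosing the ridge parameter are not reproduced).

import numpy as np

def ridge_with_prior(X, y, theta_prior, k):
    """Ridge-type estimator combining least squares with prior information;
    k controls the influence of the prior (k = 0 gives ordinary least squares)."""
    p = X.shape[1]
    A = X.T @ X + k * np.eye(p)
    return np.linalg.solve(A, X.T @ y + k * theta_prior)

# Scanning k traces the trade-off between variance (small k) and bias toward
# the prior (large k), mirroring the ridge-trace idea described above.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -0.5, 2.0]) + 0.1 * rng.standard_normal(50)
for k in (0.0, 1.0, 10.0):
    print(k, ridge_with_prior(X, y, np.array([0.8, -0.4, 1.5]), k))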
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (H0) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of H0 to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well-construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh, and on Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters
NASA Astrophysics Data System (ADS)
Bates, Bryson C.; Townley, Lloyd R.
1988-05-01
In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function is dependent on the observed and computed peak discharges for several storms on the catchment, information on the structure of the observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 (Bates, this volume; Bates and Townley, this volume) will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models.
Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation theory; estimation in dynamic systems is treated as a direct outgrowth of the static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
Estimation and confidence intervals for empirical mixing distributions
Link, W.A.; Sauer, J.R.
1995-01-01
Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
Filter parameter tuning analysis for operational orbit determination support
NASA Technical Reports Server (NTRS)
Dunham, J.; Cox, C.; Niklewski, D.; Mistretta, G.; Hart, R.
1994-01-01
The use of an extended Kalman filter (EKF) for operational orbit determination support is being considered by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). To support that investigation, analysis was performed to determine how an EKF can be tuned for operational support of a set of earth-orbiting spacecraft. The objectives of this analysis were to design and test a general purpose scheme for filter tuning, evaluate the solution accuracies, and develop practical methods to test the consistency of the EKF solutions in an operational environment. The filter was found to be easily tuned to produce estimates that were consistent, agreed with results from batch estimation, and compared well among the common parameters estimated for several spacecraft. The analysis indicates that there is not a sharply defined 'best' tunable parameter set, especially when considering only the position estimates over the data arc. The comparison of the EKF estimates for the user spacecraft showed that the filter is capable of high-accuracy results and can easily meet the current accuracy requirements for the spacecraft included in the investigation. The conclusion is that the EKF is a viable option for FDD operational support.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (K1) and volume of distribution (Vd) were estimated for the cerebellum, thalamus, and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potential (BPI and BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
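A common building block of such rest-period identification is fitting the relaxation voltage with a sum of RC exponentials; the sketch below (Python/SciPy, with synthetic data and illustrative parameter values, not the paper's improved initial-voltage expressions) shows the idea.

import numpy as np
from scipy.optimize import curve_fit

def rest_voltage(t, ocv, a1, tau1, a2, tau2):
    """Relaxation after a current pulse, modeled with two RC branches."""
    return ocv - a1 * np.exp(-t / tau1) - a2 * np.exp(-t / tau2)

# Synthetic rest-period data (stand-in for measured cell voltage).
rng = np.random.default_rng(4)
t = np.linspace(0.0, 600.0, 300)                   # seconds
v = rest_voltage(t, 3.70, 0.05, 12.0, 0.02, 150.0) + 1e-4 * rng.standard_normal(t.size)

p0 = [3.7, 0.04, 10.0, 0.01, 100.0]                # rough initial guesses
popt, _ = curve_fit(rest_voltage, t, v, p0=p0)
ocv, a1, tau1, a2, tau2 = popt                     # RC parameters for the rest period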
Conclusion of LOD-score analysis for family data generated under two-locus models.
Dizier, M. H.; Babron, M. C.; Clerget-Darpoux, F.
1996-01-01
The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to search for linkage with genetic markers by the LOD-score method using the MG parameters. We have already shown that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in the familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. It combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allow for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.
Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil
2014-08-20
In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters, including stride length, stride duration, stride velocity, stride cadence, and stride symmetry, from 3D foot displacements estimated by the combination of a spherical positioning technique and an unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system, with an average error of 2.7% across all the estimated gait parameters. The influence of walking speed on the measurement accuracy of the proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
Koeppe, R A; Holthoff, V A; Frey, K A; Kilbourn, M R; Kuhl, D E
1991-09-01
The in vivo kinetic behavior of [11C]flumazenil ([11C]FMZ), a non-subtype-specific central benzodiazepine antagonist, is characterized using compartmental analysis with the aim of producing an optimized data acquisition protocol and tracer kinetic model configuration for the assessment of [11C]FMZ binding to benzodiazepine receptors (BZRs) in human brain. The approach presented is simple, requiring only a single radioligand injection. Dynamic positron emission tomography data were acquired on 18 normal volunteers using a 60- to 90-min sequence of scans and were analyzed with model configurations that included a three-compartment, four-parameter model; a three-compartment, three-parameter model with a fixed value for free plus nonspecific binding; and a two-compartment, two-parameter model. Statistical analysis indicated that the four-parameter model did not yield significantly better fits than the three-parameter model. Goodness of fit was improved for three- versus two-parameter configurations in regions with low receptor density, but not in regions with moderate to high receptor density. Thus, a two-compartment, two-parameter configuration was found to adequately describe the kinetic behavior of [11C]FMZ in human brain, with stable estimates of the model parameters obtainable from as little as 20-30 min of data. Pixel-by-pixel analysis yields functional images of the transport rate (K1) and the ligand distribution volume (DV), and thus provides independent estimates of ligand delivery and BZR binding.
Analysis of pumping tests: Significance of well diameter, partial penetration, and noise
Heidari, M.; Ghiassi, K.; Mehnert, E.
1999-01-01
The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter, partially penetrating pumping wells and partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level of the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that, when the well diameter was included in the analysis, hydraulic conductivity, specific yield, and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS method to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests differed by a factor of four.
Orbit/attitude estimation with LANDSAT Landmark data
NASA Technical Reports Server (NTRS)
Hall, D. L.; Waligora, S.
1979-01-01
The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied, and the preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and of a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model, and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit, attitude, and camera bias parameters are given.
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation, or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de
Rodríguez-Entrena, Macario; Schuberth, Florian; Gelhard, Carsten
2018-01-01
Structural equation modeling using partial least squares (PLS-SEM) has become a mainstream modeling approach in various disciplines. Nevertheless, the prior literature still lacks practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques, such as parametric and non-parametric approaches to PLS multi-group analysis, only allow the assessment of differences between parameters estimated for different subpopulations, the study at hand introduces a technique that also allows one to assess whether two parameter estimates derived from the same sample are statistically different. To illustrate this advancement of PLS-SEM, we refer to a reduced version of the well-established technology acceptance model.
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
Bayesian estimation of dynamic matching function for U-V analysis in Japan
NASA Astrophysics Data System (ADS)
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancies are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In this representation, dynamic features of the cyclical unemployment rate and the structural-frictional unemployment rate can be accurately captured.
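The state-space treatment of time-varying coefficients can be illustrated with a minimal random-walk-coefficient Kalman filter (Python; the noise variances q and r and the synthetic data are placeholders, and the fixed-interval smoother, which runs a backward pass over the filtered estimates, is omitted for brevity).

import numpy as np

def kalman_tvp(y, X, q=1e-4, r=1.0):
    """Filter y[t] = X[t] @ beta[t] + v, with beta[t] = beta[t-1] + w (random walk)."""
    n, p = X.shape
    beta, P = np.zeros(p), np.eye(p)
    out = np.zeros((n, p))
    for t in range(n):
        P = P + q * np.eye(p)                 # time update (random walk)
        x = X[t]
        S = x @ P @ x + r                     # innovation variance
        K = P @ x / S                         # Kalman gain
        beta = beta + K * (y[t] - x @ beta)   # measurement update
        P = P - np.outer(K, x @ P)
        out[t] = beta
    return out

# Example: a slowly drifting coefficient recovered from simulated data.
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
true = np.column_stack([np.linspace(0.5, 1.0, 200), np.full(200, 0.3)])
y = (X * true).sum(axis=1) + 0.1 * rng.standard_normal(200)
betas = kalman_tvp(y, X, q=1e-3, r=0.01)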
An audit of the statistics and the comparison with the parameter in the population
NASA Astrophysics Data System (ADS)
Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad
2015-10-01
The sample size needed for sample statistics to closely estimate the parameters of a particular population is a long-standing issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters of the population. Meanwhile, the guideline of a p-value less than 0.05 is widely used as inferential evidence. Therefore, this study audited results that were analyzed from various subsamples and statistical analyses, and compared the results with the parameters in three different populations. Eight types of statistical analysis, with eight subsamples for each statistical analysis, were examined. The results show that the statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
NASA Astrophysics Data System (ADS)
Brannan, K. M.; Somor, A.
2016-12-01
A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for the predictive uncertainty analysis: we set aside flow data that occurred on days when bacteria samples were collected, and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on the 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
Validation of Bayesian analysis of compartmental kinetic models in medical imaging.
Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M
2016-10-01
Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of the kinetic parameters is derived, with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical nonlinear least squares approach. Posteriors obtained using the proposed methods are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard nonlinear least-squares methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods for computer simulations of FDG kinetics. They show that, in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities.
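A toy version of such an MCMC estimation, with a deliberately simplified one-compartment uptake curve and flat priors (the functional form, noise level, and step size here are illustrative assumptions, not the paper's hierarchical model), might look as follows.

import numpy as np

rng = np.random.default_rng(6)

def tac(t, K1, k2):
    """Simplified tissue activity curve for a constant input (toy model)."""
    return K1 / k2 * (1.0 - np.exp(-k2 * t))

t = np.linspace(0.5, 60.0, 24)                        # scan mid-times (min)
y = tac(t, 0.1, 0.15) + 0.002 * rng.standard_normal(t.size)

def log_post(theta, sigma=0.002):
    K1, k2 = theta
    if K1 <= 0.0 or k2 <= 0.0:                        # flat priors on (0, inf)
        return -np.inf
    return -0.5 * np.sum(((y - tac(t, K1, k2)) / sigma) ** 2)

def metropolis(x0, steps=20000, scale=0.005):
    """Random-walk Metropolis sampler over the kinetic parameters."""
    x = np.asarray(x0, float)
    lp = log_post(x)
    chain = np.empty((steps, x.size))
    for i in range(steps):
        prop = x + scale * rng.standard_normal(x.size)
        lpp = log_post(prop)
        if np.log(rng.uniform()) < lpp - lp:          # accept/reject step
            x, lp = prop, lpp
        chain[i] = x
    return chain

chain = metropolis([0.05, 0.1])
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))   # posterior summaries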
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
NASA Astrophysics Data System (ADS)
Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.
2017-06-01
In this work, the digital photoelasticity technique is used to estimate crack tip fracture parameters for different crack configurations. Conventionally, only the isochromatic data surrounding the crack tip are used for SIF estimation, but with the advent of digital photoelasticity, the pixel-wise availability of both isoclinic and isochromatic data can be exploited for SIF estimation in a novel way. A linear least squares approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation. The stress intensity factor (SIF) is extracted from the estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are obtained using the ten-step phase shifting technique. To obtain the unwrapped data, the adaptive quality guided phase unwrapping algorithm (AQGPU) is used. The mixed-mode fracture parameters, especially the SIF, are estimated for specimen configurations such as single edge notch (SEN), center crack, and a straight crack ahead of an inclusion using the proposed algorithm. The experimental SIF values estimated using the proposed method are compared with analytical/finite element analysis (FEA) results and are found to be in good agreement.
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) explicitly describe the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal values; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method for identifying the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. The model could then be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model, and helps identify the principal sources of variation under varying environmental conditions. This innovative method still needs to be more widely validated, but it already offers interesting avenues for improving the calibration of FSPMs.
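The select-then-refit logic can be mimicked on a toy problem (Python/SciPy; the quadratic model, the synthetic data, and the assumed sensitivity ranking are all stand-ins for the FSPM and its sensitivity indexes): fit the k most influential parameters while holding the rest at nominal values, and compare fits by AIC.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y_obs = 2.0 * x + 0.5 * x**2 + 0.1 + 0.02 * rng.standard_normal(x.size)

nominal = np.array([1.5, 0.0, 0.0])   # nominal values for params (a, b, c)
ranking = [0, 2, 1]                   # hypothetical sensitivity ranking

def residuals(free, idx):
    theta = nominal.copy()
    theta[idx] = free                 # only the selected parameters vary
    a, b, c = theta
    return a * x + b * x**2 + c - y_obs

def aic(rss, n, k):
    """AIC for least-squares fits with Gaussian errors."""
    return n * np.log(rss / n) + 2 * k

for k in range(1, 4):
    idx = ranking[:k]
    fit = least_squares(residuals, nominal[idx], args=(idx,))
    rss = np.sum(fit.fun**2)
    print(k, idx, round(aic(rss, x.size, k), 1))   # lowest AIC wins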
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach known as Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
SCoPE: an efficient method of Cosmological Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
Yoshida, Keiichiro; Nishidate, Izumi; Ishizuka, Tomohiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu
2015-05-01
In order to estimate multispectral images of the absorption and scattering properties in the cerebral cortex of in vivo rat brain, we investigated spectral reflectance images estimated by the Wiener estimation method using a digital RGB camera. A Monte Carlo simulation-based multiple regression analysis for the corresponding spectral absorbance images at nine wavelengths (500, 520, 540, 560, 570, 580, 600, 730, and 760 nm) was then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin were estimated as the absorption parameters, whereas the coefficient a and the exponent b of the reduced scattering coefficient spectrum, approximated by a power law function, were estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients were reconstructed from these parameters, and the spectral images of the absorption and reduced scattering coefficients were then estimated. To confirm the feasibility of this method, we performed in vivo experiments on exposed rat brain. The estimated images of the absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of the reduced scattering coefficients showed a broad scattering spectrum, exhibiting a larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. The changes in the estimated absorption and scattering parameters during normoxia, hyperoxia, and anoxia indicate the potential applicability of the method for evaluating the pathophysiological conditions of in vivo brain due to the loss of tissue viability.
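The Wiener estimation step itself is compact; a sketch under simplifying assumptions (a noise-free linear camera model, training reflectance spectra R_train, and known sensitivities S, none of which come from the paper) is shown below.

import numpy as np

def wiener_matrix(R_train, S):
    """Wiener estimation matrix mapping RGB values to reflectance spectra.
    R_train: (n_samples, n_wavelengths) training reflectances.
    S: (3, n_wavelengths) camera spectral sensitivities (assumed known)."""
    V = R_train @ S.T                           # simulated RGB responses
    K_rv = R_train.T @ V / len(R_train)         # cross-correlation E[r v^T]
    K_vv = V.T @ V / len(R_train)               # autocorrelation E[v v^T]
    return K_rv @ np.linalg.inv(K_vv)           # W such that r_hat = W @ v

# Per-pixel use: r_hat = W @ rgb gives a reflectance spectrum for each pixel,
# which then feeds the regression step that yields absorption and scattering maps.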
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
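A minimal sketch of the variance-decomposition idea: with independent studies, the total Fisher information is the sum of the study contributions, and each study's percentage weight for parameter j can be read off the diagonal of V F_i V. The per-study information matrices below are synthetic placeholders for real model output:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_studies = 3, 5

# Synthetic per-study Fisher information matrices (placeholders)
F = [M @ M.T + np.eye(p) for M in rng.normal(size=(n_studies, p, p))]

V = np.linalg.inv(sum(F))             # total variance matrix of the parameter estimates

# Study i's contribution to Var(theta_j) is [V F_i V]_jj; weights sum to 100% per parameter
weights = np.array([np.diag(V @ Fi @ V) for Fi in F])
pct = 100 * weights / np.diag(V)

print(pct.sum(axis=0))                # each column sums to ~100
```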
The Relationship Between School Holidays and Transmission of Influenza in England and Wales
Jackson, Charlotte; Vynnycky, Emilia; Mangtani, Punam
2016-01-01
School closure is often considered as an influenza control measure, but its effects on transmission are poorly understood. We used 2 approaches to estimate how school holidays affect the contact parameter (the per capita rate of contact sufficient for infection transmission) for influenza using primary care data from England and Wales (1967–2000). Firstly, we fitted an age-structured susceptible-infectious-recovered model to each year's data to estimate the proportional change in the contact parameter during school holidays as compared with termtime. Secondly, we calculated the percentage difference in the contact parameter between holidays and termtime from weekly values of the contact parameter, estimated directly from simple mass-action models. Estimates were combined using random-effects meta-analysis, where appropriate. From fitting to the data, the difference in the contact parameter among children aged 5–14 years during holidays as compared with termtime ranged from a 36% reduction to a 17% increase; estimates were too heterogeneous for meta-analysis. Based on the simple mass-action model, the contact parameter was 17% (95% confidence interval: 10, 25) lower during holidays than during termtime. Results were robust to the assumed proportions of infections that were reported and individuals who were susceptible when the influenza season started. We conclude that school closure may reduce transmission during influenza outbreaks. PMID:27744384
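The random-effects pooling step can be sketched with the standard DerSimonian-Laird estimator. The yearly percentage differences and their variances below are synthetic placeholders, not the study's data:

```python
import numpy as np

# Synthetic yearly estimates of the % change in the contact parameter and their variances
y = np.array([-20., -15., -17., -10., -25., -12.])
v = np.array([9., 16., 4., 25., 12., 8.])

w = 1.0 / v                                       # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe)**2)                     # Cochran's Q heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                           # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)     # 95% confidence interval
```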
NASA Technical Reports Server (NTRS)
Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G
2015-01-01
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.
Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z
2017-04-01
We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses a maximum likelihood estimation method to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R, and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated with an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation.
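The Poisson maximum likelihood approach for the linear-quadratic model can be sketched as follows. This is an illustration of the general technique, not the RAD-ADAPT code; the dose levels, plating numbers, and counts are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

# Dose (Gy), cells plated, and colony counts (synthetic example data)
dose   = np.array([0., 2., 4., 6., 8.])
plated = np.array([100., 200., 500., 2000., 10000.])
counts = np.array([55, 60, 70, 90, 80])

def neg_log_lik(params):
    """Poisson negative log-likelihood for the linear-quadratic model."""
    pe, alpha, beta = params
    surv = np.exp(-(alpha * dose + beta * dose**2))   # LQ surviving fraction
    mu = plated * pe * surv                           # expected colony counts
    return -np.sum(counts * np.log(mu) - mu)          # Poisson log-lik up to a constant

fit = minimize(neg_log_lik, x0=[0.5, 0.3, 0.03],
               bounds=[(1e-6, 1), (0, None), (0, None)])
pe, alpha, beta = fit.x                               # plating efficiency, alpha, beta
```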
NASA Astrophysics Data System (ADS)
Futko, S. I.; Ermolaeva, E. M.; Dobrego, K. V.; Bondarenko, V. P.; Dolgii, L. N.
2012-07-01
We have developed a sensitivity analysis permitting effective estimation of the change in the impulse responses of a microthruster and in the ignition characteristics of the solid-fuel charge caused by variation of the basic macrokinetic parameters of the mixed fuel and the design parameters of the microthruster's combustion chamber. On the basis of the proposed sensitivity analysis, we have estimated the spread of the propulsive force and impulse, as well as of the induction period and self-ignition temperature, as functions of the macrokinetic combustion parameters (pre-exponential factor, activation energy, density, and heat content) of the solid-fuel charge of the microthruster. The obtained results can be used for rapid and effective estimation of the spread of goal functions, to provide stable physicochemical characteristics and impulse responses of solid-fuel mixtures in making and using microthrusters.
NASA Astrophysics Data System (ADS)
Harpold, R. E.; Urban, T. J.; Schutz, B. E.
2008-12-01
Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
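A repeat-track fit of this kind can be sketched as ordinary least squares on elevation versus time with linear-trend and seasonal terms. The design below is illustrative only; cross-track slope terms, grid handling, and crossover constraints discussed above are omitted, and the data are synthetic:

```python
import numpy as np

t = np.linspace(2003.0, 2007.0, 40)                    # observation epochs (years)
h = 0.5 - 0.12 * (t - 2003) + 0.3 * np.sin(2 * np.pi * t) \
    + 0.05 * np.random.default_rng(2).normal(size=t.size)  # synthetic elevations (m)

# Model: h = h0 + r*(t - t0) + A*cos(2*pi*t) + B*sin(2*pi*t)
X = np.column_stack([np.ones_like(t), t - 2003.0,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, h, rcond=None)
h0, rate, A, B = coef          # rate is the estimated elevation-change rate (m/yr)
```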
Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series
Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe
2017-01-01
Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
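For intuition, k is the size parameter of a negative binomial offspring distribution with mean R0. A minimal sketch of estimating k and R0 by maximum likelihood from hypothetical, directly observed offspring counts, far simpler than the particle MCMC framework used in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

rng = np.random.default_rng(3)
R0_true, k_true = 2.0, 0.3
# Negative binomial parameterised by size k and mean R0: success probability p = k/(k + R0)
offspring = nbinom.rvs(k_true, k_true / (k_true + R0_true), size=500, random_state=rng)

def neg_log_lik(params):
    k, R0 = params
    return -np.sum(nbinom.logpmf(offspring, k, k / (k + R0)))

fit = minimize(neg_log_lik, x0=[1.0, 1.0], bounds=[(1e-3, None), (1e-3, None)])
k_hat, R0_hat = fit.x            # small k_hat indicates strong transmission heterogeneity
```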
NASA Astrophysics Data System (ADS)
Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra
2015-12-01
In the present work an effort has been made to estimate plasma parameters (electron density, electron temperature, ground-state atom density, ground-state ion density, and metastable-state density) simultaneously from the observed visible spectra of a Penning plasma discharge (PPD) source using least-squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. Under this condition the estimated plasma temperature from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species to the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.
Application of parameter estimation to aircraft stability and control: The output-error approach
NASA Technical Reports Server (NTRS)
Maine, Richard E.; Iliff, Kenneth W.
1986-01-01
The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program as an example. The roles of, and issues relating to, mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
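A minimal sketch of the output-error idea: choose the parameter vector that minimizes the mismatch between measured and model-predicted outputs. The first-order model, input, and data here are illustrative stand-ins, not the aircraft equations of motion:

```python
import numpy as np
from scipy.optimize import least_squares

dt, n = 0.05, 200
t = np.arange(n) * dt
u = np.sign(np.sin(0.5 * t))                      # doublet-like control input (illustrative)

def simulate(theta, u):
    """First-order model x' = a*x + b*u, observed as y = x."""
    a, b = theta
    x = np.zeros_like(u)
    for k in range(len(u) - 1):                   # simple Euler integration
        x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
    return x

theta_true = (-0.8, 1.5)
z = simulate(theta_true, u) + 0.02 * np.random.default_rng(4).normal(size=n)

# Output error: minimize residuals between measurement z and model output y(theta)
fit = least_squares(lambda th: z - simulate(th, u), x0=[-0.1, 0.5])
a_hat, b_hat = fit.x
```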
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all modes with ℓ ≤ 4. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes
NASA Astrophysics Data System (ADS)
Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.
2018-03-01
Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. In order to ensure appropriate modeling of PEMFCs, accurate parameter estimation is required. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex nature. This paper comprehensively reviews PEMFC model parameter estimation methods, with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, for estimating the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models of different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, are then utilized to identify the parameters of two well-known semi-empirical models from the literature, those of Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
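Of the three online algorithms, recursive least squares is the simplest to sketch: each new sample updates the estimate of a linear-in-parameters model without re-fitting the full history. The regressor and data below are generic placeholders, not a specific fuel cell model:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam.

    theta : current parameter estimate
    P     : current inverse-information (covariance) matrix
    phi   : regressor vector for the new sample
    y     : new measurement, modeled as y = phi @ theta + noise
    """
    K = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + K * (y - phi @ theta)        # correct with the prediction error
    P = (P - np.outer(K, phi @ P)) / lam         # update covariance
    return theta, P

rng = np.random.default_rng(5)
true_theta = np.array([0.9, -0.2])
theta, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(500):
    phi = rng.normal(size=2)                     # e.g. current density and its log, in a FC model
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
```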
Inverse modeling with RZWQM2 to predict water quality
Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.
2011-01-01
This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
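The truncated-SVD idea can be sketched independently of PEST: the regression is carried out on the leading singular directions of the Jacobian, which suppresses insensitive and highly correlated parameter combinations. This is a generic illustration under synthetic data, not the PEST implementation:

```python
import numpy as np

rng = np.random.default_rng(6)
J = rng.normal(size=(50, 6))
J[:, 5] = J[:, 4] + 1e-6 * rng.normal(size=50)   # two nearly identical (correlated) parameters

theta_true = rng.normal(size=6)
obs = J @ theta_true + 0.01 * rng.normal(size=50)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = np.sum(s / s[0] > 1e-4)                      # keep well-resolved singular values only
# Solve in the reduced space spanned by the leading right singular vectors
theta_hat = Vt[:k].T @ ((U[:, :k].T @ obs) / s[:k])
```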
Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.
2015-01-01
While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multivariable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides a powerful yet easy-to-use tool, removing impediments to putting theory into practice. PMID:26046009
NASA Astrophysics Data System (ADS)
Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.
Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at the Bure underground research laboratory in Meuse/Haute-Marne (France) to characterize diffusion and retention of radionuclides in Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and the borehole wall, and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having similar parameters and geometric characteristics as real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer dependent and vary with time. These sensitivities are useful for identifying which parameters can be estimated with the least uncertainty and for finding the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of synthetic experiments reveals that data noise makes the estimation of clay parameters difficult. Parameters of the clay and the EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce the synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for the filter, the gap, and the EdZ. Estimates of the effective diffusion coefficient (De) and the porosity of the clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and the porosities of the clay and EdZ is only possible when the standard deviation of the random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. The normalized sensitivities and the identifiability analysis of synthetic experiments provide additional insight into inverse estimation for in situ diffusion experiments and will be of great benefit for the interpretation of real DIR in situ diffusion experiments.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter, which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification, is proven to be mean square convergent and mean square consistent. The minimum variance parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
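For the same model class, a classical Prony-style linear-prediction fit offers a compact illustration of how complex-exponential parameters can be recovered; this is not the nonharmonic Fourier expansion algorithm of the paper, and the two-component signal below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 2                                     # samples, number of exponentials
t = np.arange(n)
# Synthetic signal: two damped complex exponentials (EEG-rhythm-like, illustrative)
z_true = np.array([0.99 * np.exp(2j * np.pi * 0.1), 0.95 * np.exp(2j * np.pi * 0.25)])
x = sum(np.power(zk, t) for zk in z_true) + 0.01 * rng.normal(size=n)

# Linear prediction: x[k] ~ -a1*x[k-1] - ... - ap*x[k-p]
A = np.column_stack([x[p - 1 - i:n - 1 - i] for i in range(p)])
a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)

z_hat = np.roots(np.concatenate(([1.0], a)))      # poles encode damping and frequency
freqs = np.angle(z_hat) / (2 * np.pi)             # recovered normalized frequencies
```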
Bayesian Estimation in the One-Parameter Latent Trait Model.
1980-03-01
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
NASA Astrophysics Data System (ADS)
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known. More challenges are expected in estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error, and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to time-picking errors caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
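For reference, the quartic non-hyperbolic moveout equation referred to above is commonly written, following the Alkhalifah-Tsvankin parameterisation (the exact notation may differ from the authors'), as:

```latex
% Non-hyperbolic reflection moveout for a VTI medium (Alkhalifah-Tsvankin form)
t^2(x) = t_0^2 + \frac{x^2}{V_{\mathrm{nmo}}^2}
       - \frac{2\eta\, x^4}{V_{\mathrm{nmo}}^2\left[t_0^2 V_{\mathrm{nmo}}^2 + (1 + 2\eta)\, x^2\right]},
\qquad
\eta = \frac{\epsilon - \delta}{1 + 2\delta}
```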
Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash
2017-03-01
A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to yield reliable parameter estimates for a modified version of the ASM. The modified ASM includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased relative to the prior information.
Automated inference procedure for the determination of cell growth parameters
NASA Astrophysics Data System (ADS)
Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.
2016-01-01
The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
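As a point of comparison with the Bayesian treatment in the paper, a minimal least-squares fit of the logistic growth model (the kind of fit the authors caution can give misleading estimates) looks like this; the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, N0):
    """Logistic growth: carrying capacity K, growth rate r, initial count N0."""
    return K / (1 + (K / N0 - 1) * np.exp(-r * t))

t = np.linspace(0, 72, 13)                               # hours
rng = np.random.default_rng(8)
counts = logistic(t, 1e6, 0.15, 5e3) * rng.lognormal(0, 0.05, t.size)

popt, pcov = curve_fit(logistic, t, counts, p0=[1e6, 0.1, 1e4])
K_hat, r_hat, N0_hat = popt                               # point estimates
perr = np.sqrt(np.diag(pcov))                             # naive standard errors
```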
A Comparison Between Two OLS-Based Approaches to Estimating Urban Multifractal Parameters
NASA Astrophysics Data System (ADS)
Huang, Lin-Shan; Chen, Yan-Guang
Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among various pending issues, the most significant is how to obtain proper multifractal dimension spectra. If an algorithm is improperly used, the parameter spectra will be abnormal. This paper is devoted to investigating two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using empirical study and comparative analysis, we demonstrate how to utilize the appropriate linear regression to calculate multifractal parameters. The OLS regression analysis has two different approaches: in one, the intercept is fixed at zero; in the other, the intercept is unconstrained. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain range of moment orders, while the common regression method often leads to abnormal multifractal parameter values. A conclusion can be reached that fixing the intercept at zero is the more advisable regression method for multifractal parameter estimation, and that the shapes of the spectral curves and the value ranges of the fractal parameters can be employed to diagnose urban problems. This research is helpful for scientists seeking to understand multifractal models and to apply a more reasonable technique to multifractal parameter calculations.
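The two regressions differ only in whether an intercept column is included. A minimal numpy comparison on a synthetic log-log scaling relation (in the multifractal setting the variables would be log partition functions versus log scale):

```python
import numpy as np

rng = np.random.default_rng(9)
log_eps = np.log(np.array([1/2, 1/4, 1/8, 1/16, 1/32]))      # log box sizes
D_true = 1.7
y = D_true * log_eps + 0.02 * rng.normal(size=log_eps.size)  # log partition sums

# Zero-intercept regression: slope = (x.y)/(x.x)
slope_zero = (log_eps @ y) / (log_eps @ log_eps)

# Free-intercept regression via a design matrix with a constant column
X = np.column_stack([log_eps, np.ones_like(log_eps)])
slope_free, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
```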
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model.
DuVal, Ashley; Gezan, Salvador A.; Mustiga, Guiliana; Stack, Conrad; Marelli, Jean-Philippe; Chaparro, José; Livingstone, Donald; Royaert, Stefan; Motamayor, Juan C.
2017-01-01
Breeding programs of cacao (Theobroma cacao L.) trees share the many challenges of breeding long-living perennial crops, and genetic progress is further constrained by both the limited understanding of the inheritance of complex traits and the prevalence of technical issues, such as mislabeled individuals (off-types). To better understand the genetic architecture of cacao, in this study, 13 years of phenotypic data collected from four progeny trials in Bahia, Brazil were analyzed jointly in a multisite analysis. Three separate analyses (multisite, single site with and without off-types) were performed to estimate genetic parameters from statistical models fitted on nine important agronomic traits (yield, seed index, pod index, % healthy pods, % pods infected with witches broom, % of pods other loss, vegetative brooms, diameter, and tree height). Genetic parameters were estimated along with variance components and heritabilities from the multisite analysis, and a trial was fingerprinted with low-density SNP markers to determine the impact of off-types on estimations. Heritabilities ranged from 0.37 to 0.64 for yield and its components and from 0.03 to 0.16 for disease resistance traits. A weighted index was used to make selections for clonal evaluation, and breeding values estimated for the parental selection and estimation of genetic gain. The impact of off-types to breeding progress in cacao was assessed for the first time. Even when present at <5% of the total population, off-types altered selections by 48%, and impacted heritability estimations for all nine of the traits analyzed, including a 41% difference in estimated heritability for yield. These results show that in a mixed model analysis, even a low level of pedigree error can significantly alter estimations of genetic parameters and selections in a breeding program. PMID:29250097
A note on variance estimation in random effects meta-regression.
Sidik, Kurex; Jonkman, Jeffrey N
2005-01-01
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
Wagner, Brian J.; Harvey, Judson W.
1997-01-01
Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI ≫ 1.0), solute exchange rates are fast relative to stream-water velocity and all solute is exchanged with the storage zone over the experimental reach. As DaI increases, tracer dispersion caused by hyporheic exchange eventually reaches an equilibrium condition and storage-zone exchange parameters become essentially nonidentifiable.
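For reference, the experimental Damkohler number used in this analysis is commonly written in the stream-tracer literature as below; the notation is assumed here, with α the stream-storage exchange coefficient, A and A_s the cross-sectional areas of the channel and storage zone, L the reach length, and u the stream-water velocity:

```latex
% Experimental Damkohler number for stream-tracer transport
DaI = \frac{\alpha \left(1 + A/A_s\right) L}{u}
```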
Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars
NASA Astrophysics Data System (ADS)
Martínez Ledesma, M.; Diaz, M. A.
2017-12-01
The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to study the ionosphere. This radar system determines plasma parameters by sending powerful electromagnetic pulses into the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity between the F1 and lower F2 layers, where very similar signals are obtained for different mixtures of molecular ions (NO+ and O2+) and atomic oxygen ions (O+), so that it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere in the fitting of ambiguous data. More recent works make use of parameters estimated from the plasma line band of the radar to reduce the number of parameters to determine. In this work we quantify the error of the ion composition estimation when using plasma line electron density measurements. The sensitivity of the ion composition estimation has also been calculated as a function of the accuracy of the ionospheric model, showing that correct estimation depends strongly on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNRs) have been performed to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation under different signal fluctuations. It can also be used as an empirical method to compare the efficiency of different algorithms and methods for resolving the ion composition ambiguity.
Applying spectral data analysis techniques to aquifer monitoring data in Belvoir Ranch, Wyoming
NASA Astrophysics Data System (ADS)
Gao, F.; He, S.; Zhang, Y.
2017-12-01
This study uses spectral data analysis techniques to estimate hydraulic parameters from water level fluctuations due to tidal and barometric effects. All water level data used in this study were collected in Belvoir Ranch, Wyoming. The tidal effect can be observed not only in coastal areas but also in inland confined aquifers: the forces caused by the changing positions of the sun and moon act not only on the ocean but also on the solid earth. The tidal effect applies an oscillatory pumping or injection sequence to the aquifer and can be observed with sufficiently dense water level monitoring. The Belvoir Ranch data are collected once per hour and are thus dense enough to capture the tidal effect. First, the de-trended data are transformed from the time domain to the frequency domain with the Fourier transform, and the storage coefficient is estimated using the Bredehoeft-Jacob model. Next, the gain function, which expresses the amplification and attenuation of the output signal, is analyzed to derive the barometric efficiency, and the effective porosity is found from the storage coefficient and barometric efficiency with Jacob's model. Finally, aquifer transmissivity and hydraulic conductivity are estimated using Paul Hsieh's method. The estimated hydraulic parameters are compared with those from traditional pumping-test estimation. This study shows that hydraulic parameters can be estimated by analyzing water level data in the frequency domain alone. The approach has the advantages of low cost and environmental friendliness, and thus should be considered for future hydraulic parameter estimation.
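A minimal sketch of the first step: moving de-trended hourly water levels into the frequency domain and reading off the amplitude at a tidal constituent. The series is synthetic, and the M2 frequency is used purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(10)
dt_hours = 1.0
n = 24 * 180                                   # 180 days of hourly data
t = np.arange(n) * dt_hours

f_m2 = 1.0 / 12.42                             # M2 tidal frequency (cycles/hour)
level = 0.02 * np.sin(2 * np.pi * f_m2 * t) + 0.005 * rng.normal(size=n)

detrended = level - np.polyval(np.polyfit(t, level, 1), t)   # remove linear trend
spec = np.fft.rfft(detrended)
freqs = np.fft.rfftfreq(n, d=dt_hours)

i_m2 = np.argmin(np.abs(freqs - f_m2))         # bin nearest the M2 constituent
amplitude_m2 = 2 * np.abs(spec[i_m2]) / n      # approximate tidal amplitude (m)
```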
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems, it only suggests that an ideal estimation technique should estimate all parameters simultaneously whether it is online or offline. PMID:25002829
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-01-01
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which are widespread in the PPS field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach to the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes a Hopf bifurcation with respect to the parameters s (the intrinsic growth rate of predators) and m (the prey refuge). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true, noise-free data. It is found that the system dynamics with the true set of parameter values are similar to those with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
NASA Astrophysics Data System (ADS)
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by a linear combination of a small number of elementary signals, each of which depends nonlinearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
Sensitivity analysis of the add-on price estimate for the silicon web growth process
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1981-01-01
The web growth process, a silicon-sheet technology option developed for the flat plate solar array (FSA) project, was examined. Base case data for the technical and cost parameters for the technical and commercial readiness phases of the FSA project are projected. The process add-on price is analyzed using the base case data for cost parameters such as equipment, space, direct labor, materials, and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness, and cell efficiency are also discussed.
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
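A minimal sketch of the Gaussian Fisher forecast being compared against: for a power-spectrum observable with independent Gaussian bin errors, the Fisher matrix is the error-weighted outer product of the parameter derivatives, and the forecast parameter covariance is its inverse. The derivatives and errors below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(11)
n_bins, n_params = 30, 2

# Placeholder derivatives dP_l/dtheta_a and Gaussian bin errors sigma_l
dP = rng.normal(size=(n_bins, n_params))
sigma = np.abs(rng.normal(1.0, 0.1, size=n_bins))

# Fisher matrix with a diagonal covariance: F_ab = sum_l dP_la dP_lb / sigma_l^2
F = (dP / sigma[:, None]**2).T @ dP

cov = np.linalg.inv(F)                       # forecast parameter covariance
marginal_errors = np.sqrt(np.diag(cov))      # 1-sigma marginalized errors
```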
NASA Astrophysics Data System (ADS)
Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry
2017-04-01
Root traits are increasingly important in the breeding of new crop varieties. For example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides more aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, based on sensitivity analysis and inverse parameter estimation. This methodology was developed on a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This information provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were applied virtually and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interactions with the other parameters show highly nonlinear effects on the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using the DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes for identifying specific traits or parameters of the root growth model.
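As a minimal sketch of the Morris OAT screening step (elementary effects collected over random one-at-a-time perturbations), assuming an invented three-parameter toy model in place of the root architecture model:

```python
import numpy as np

def morris_oat(f, bounds, r=20, delta=0.1, seed=0):
    """Crude Morris screening: mean(|EE|) and std(EE) for each parameter."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    ee = np.zeros((r, k))
    scale = lambda x: bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0])
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # base point in the unit cube
        fx = f(scale(x))
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                      # perturb one parameter at a time
            ee[t, i] = (f(scale(xp)) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Hypothetical 3-parameter stand-in for the root architecture model output
bounds = np.array([[0.0, 2.0], [0.0, 1.0], [0.0, 5.0]])
mu_star, sigma = morris_oat(lambda p: p[0] * np.exp(p[1]) + p[2] ** 2, bounds)
print(mu_star, sigma)   # large std relative to mean signals nonlinearity/interaction
```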
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
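The paper derives its own minimum divergence estimator; purely as a generic illustration of why robust location and covariance estimation matters for trial-structured data with gross outliers, a standard Minimum Covariance Determinant baseline (not the authors' method) behaves as follows:

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))    # clean "feature" samples (invented)
X[:15] += 20.0                   # a few gross outlier trials (e.g. artifacts)

emp = EmpiricalCovariance().fit(X)      # classical estimate, biased by outliers
mcd = MinCovDet(random_state=0).fit(X)  # robust estimate, largely ignores them
print(emp.location_.round(2))
print(mcd.location_.round(2))           # close to the true zero mean
```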
Econometric analysis of fire suppression production functions for large wildland fires
Thomas P. Holmes; David E. Calkin
2013-01-01
In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
ERIC Educational Resources Information Center
Zhang, Jinming
2004-01-01
It is common to assume during statistical analysis of a multiscale assessment that the assessment has simple structure or that it is composed of several unidimensional subtests. Under this assumption, both the unidimensional and multidimensional approaches can be used to estimate item parameters. This paper theoretically demonstrates that these…
Hydrogen from coal cost estimation guidebook
NASA Technical Reports Server (NTRS)
Billings, R. E.
1981-01-01
In an effort to establish baseline information whereby specific projects can be evaluated, a current set of parameters which are typical of coal gasification applications was developed. Using these parameters a computer model allows researchers to interrelate cost components in a sensitivity analysis. The results make possible an approximate estimation of hydrogen energy economics from coal, under a variety of circumstances.
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
Moderation analysis using a two-level regression model.
Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott
2014-10-01
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
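For orientation, the MMR baseline discussed above is just ordinary least squares with a product term; the sketch below (invented data-generating numbers) fits it with NumPy. The paper's contribution is to replace this LS fit with NML estimation of the two-level model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)    # predictor (e.g., years of education)
z = rng.normal(size=n)    # moderator (e.g., a group attribute)
# Data in which the slope of y on x varies with the moderator z
y = 1.0 + (0.5 + 0.3 * z) * x + rng.normal(size=n)

# MMR: regress y on x, z and the product term x*z by least squares
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))      # the x*z coefficient estimates the moderation effect
```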
Effects of wing modification on an aircraft's aerodynamic parameters as determined from flight data
NASA Technical Reports Server (NTRS)
Hess, R. A.
1986-01-01
A study of the effects of four wing-leading-edge modifications on a general aviation aircraft's stability and control parameters is presented. Flight data from the basic aircraft configuration and configurations with wing modifications are analyzed to determine each wing geometry's stability and control parameters. The parameter estimates and aerodynamic model forms are obtained using the stepwise regression and maximum likelihood techniques. The resulting parameter estimates and aerodynamic models are verified using vortex-lattice theory and by analysis of each model's ability to predict aircraft behavior. Comparisons of the stability and control derivative estimates from the basic wing and the four leading-edge modifications are accomplished so that the effects of each modification on aircraft stability and control derivatives can be determined.
Optimization of the lithium/thionyl chloride battery
NASA Technical Reports Server (NTRS)
White, Ralph E.
1989-01-01
A one-dimensional mathematical model for the lithium/thionyl chloride primary cell is used in conjunction with a parameter estimation technique in order to estimate the electro-kinetic parameters of this electrochemical system. The electro-kinetic parameters include the anodic transfer coefficient and exchange current density of the lithium oxidation, α_a,1 and i_o,1,ref; the cathodic transfer coefficient and the effective exchange current density of the thionyl chloride reduction, α_c,2 and a°i_o,2,ref; and a morphology parameter, ξ. The parameter estimation is performed on simulated data first in order to gain confidence in the method. Data reported in the literature for a high rate discharge of an experimental lithium/thionyl chloride cell are used for an analysis.
Graphical evaluation of the ridge-type robust regression estimators in mixture experiments.
Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas
2014-01-01
In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of biasing parameter, we use fraction of design space plots for evaluating the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on Hald cement data set.
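As a rough illustration of the ridge-type robust idea (not the specific estimators evaluated in the paper), scikit-learn's Huber regressor combines an outlier-resistant loss with an L2 biasing parameter, which helps on nearly collinear, outlier-contaminated data:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(2)
n = 60
x1 = rng.uniform(0, 1, n)
x2 = 0.95 * x1 + 0.05 * rng.uniform(0, 1, n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 2 * x1 + 2 * x2 + rng.normal(scale=0.1, size=n)
y[:3] += 5.0                                   # a few gross outliers

ols = LinearRegression().fit(X, y)             # pulled around by the outliers
hub = HuberRegressor(alpha=1e-2).fit(X, y)     # robust loss + ridge penalty
print(ols.coef_.round(2), hub.coef_.round(2))
```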
NASA Astrophysics Data System (ADS)
Unnikrishnan, Madhusudanan; Rajan, Akash; Basanthvihar Raghunathan, Binulal; Kochupillai, Jayaraj
2017-08-01
Experimental modal analysis is the primary tool for obtaining the fundamental dynamic characteristics, such as natural frequency, mode shape and modal damping ratio, that determine the behaviour of any structure under dynamic loading conditions. This paper discusses a carefully designed experimental method for calculating the dynamic characteristics of a pre-stretched horizontal flexible tube made of polyurethane material. The factors that affect the modal parameter estimation, such as the application time of shaker excitation, the pause time between successive excitation cycles, and the averaging and windowing of the measured signal, as well as the precautions to be taken during the experiment, are explained in detail. The modal parameter estimation is done using MEscopeVES™ software. A finite element based pre-stressed modal analysis of the flexible tube is also done using ANSYS ver. 14.0 software. The experimental and analytical results agreed well. The proposed experimental methodology may be extended to the modal analysis of many flexible structures like inflatables, tires and membranes.
NASA Astrophysics Data System (ADS)
Sadegh, Mojtaba; Ragno, Elisa; AghaKouchak, Amir
2017-06-01
We present a newly developed Multivariate Copula Analysis Toolbox (MvCAT) which includes a wide range of copula families with different levels of complexity. MvCAT employs a Bayesian framework with a residual-based Gaussian likelihood function for inferring copula parameters and estimating the underlying uncertainties. The contribution of this paper is threefold: (a) providing a Bayesian framework to approximate the predictive uncertainties of fitted copulas, (b) introducing a hybrid-evolution Markov Chain Monte Carlo (MCMC) approach designed for numerical estimation of the posterior distribution of copula parameters, and (c) enabling the community to explore a wide range of copulas and evaluate them relative to the fitting uncertainties. We show that the commonly used local optimization methods for copula parameter estimation often get trapped in local minima. The proposed method, however, addresses this limitation and improves describing the dependence structure. MvCAT also enables evaluation of uncertainties relative to the length of record, which is fundamental to a wide range of applications such as multivariate frequency analysis.
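MvCAT itself is a MATLAB toolbox; as a minimal Python analogue of Bayesian copula parameter inference, the sketch below runs a plain random-walk Metropolis sampler (not MvCAT's hybrid-evolution MCMC) on a Clayton copula, with invented pseudo-observations and a flat prior:

```python
import numpy as np

def clayton_loglik(theta, u, v):
    """Log-likelihood of the Clayton copula (theta > 0) on pseudo-observations."""
    if theta <= 0:
        return -np.inf
    s = u ** (-theta) + v ** (-theta) - 1.0
    return np.sum(np.log(1 + theta) - (theta + 1) * np.log(u * v)
                  - (2 * theta + 1) / theta * np.log(s))

rng = np.random.default_rng(3)
u, v = rng.uniform(0.01, 0.99, size=(2, 200))   # invented rank-transformed data

theta, chain = 1.0, []
for _ in range(5000):                           # random-walk Metropolis
    prop = theta + rng.normal(scale=0.2)
    if np.log(rng.uniform()) < clayton_loglik(prop, u, v) - clayton_loglik(theta, u, v):
        theta = prop
    chain.append(theta)
print(np.percentile(chain[1000:], [2.5, 50, 97.5]))   # posterior summary
```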
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
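As a simplified stand-in for the model selection step (much simpler than the Markov modulated Poisson likelihood used in the paper), a BIC comparison of two inter-arrival-time models reads:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
dt = rng.exponential(scale=1.0, size=2000)   # simulated photon inter-arrival times

def bic(logL, k, n):
    # BIC = k*ln(n) - 2*ln(L); the lower value indicates the preferred model
    return k * np.log(n) - 2 * logL

# Model 1: single-rate (exponential) inter-arrival distribution, 1 parameter
logL1 = np.sum(stats.expon.logpdf(dt, scale=dt.mean()))
# Model 2: gamma distribution, 2 parameters (shape and scale, location fixed)
a, loc, scale = stats.gamma.fit(dt, floc=0)
logL2 = np.sum(stats.gamma.logpdf(dt, a, loc=0, scale=scale))

n = dt.size
print(bic(logL1, 1, n), bic(logL2, 2, n))    # the exponential model should win
```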
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by greater model complexity and larger numbers of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high dimensional parameter spaces. Heuristic optimization algorithms that rely on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
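For reference, the classical GLUE loop that this study accelerates looks roughly like the sketch below; the one-parameter recession model, the Nash-Sutcliffe informal likelihood and the 0.8 behavioural threshold are all invented for illustration.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood in GLUE."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 200)
model = lambda k: 5.0 * np.exp(-k * t)          # hypothetical recession model
obs = model(0.35) + rng.normal(scale=0.1, size=t.size)

# Random sampling of the prior; keep "behavioural" parameter sets
ks = rng.uniform(0.05, 1.0, 5000)
scores = np.array([nse(model(k), obs) for k in ks])
behavioural = ks[scores > 0.8]
print(behavioural.min(), behavioural.max())     # GLUE bounds on the parameter
```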
NASA Astrophysics Data System (ADS)
Rasa, Ehsan; Foglia, Laura; Mackay, Douglas M.; Scow, Kate M.
2013-11-01
Conservative tracer experiments can provide information useful for characterizing various subsurface transport properties. This study examines the effectiveness of three different types of transport observations for sensitivity analysis and parameter estimation of a three-dimensional site-specific groundwater flow and transport model: conservative tracer breakthrough curves (BTCs), first temporal moments of BTCs (m1), and tracer cumulative mass discharge (Md) through control planes combined with hydraulic head observations (h). High-resolution data obtained from a 410-day controlled field experiment at Vandenberg Air Force Base, California (USA), have been used. In this experiment, bromide was injected to create two adjacent plumes monitored at six different transects (perpendicular to groundwater flow) with a total of 162 monitoring wells. A total of 133 different observations of transient hydraulic head, 1,158 of BTC concentration, 23 of first moment, and 36 of mass discharge were used for sensitivity analysis and parameter estimation of nine flow and transport parameters. The importance of each group of transport observations in estimating these parameters was evaluated using sensitivity analysis, and five out of nine parameters were calibrated against these data. Results showed the advantages of using temporal moments of conservative tracer BTCs and mass discharge as observations for inverse modeling.
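The first temporal moment observation type is easy to make concrete: for a sampled BTC, m1 is the concentration-weighted mean arrival time. A minimal sketch with an invented curve:

```python
import numpy as np

t = np.linspace(0, 410, 500)                 # days, matching the experiment length
C = np.exp(-0.5 * ((t - 120) / 30) ** 2)     # invented bromide BTC at one well

m0 = np.trapz(C, t)                          # zeroth moment: area under the BTC
m1 = np.trapz(t * C, t) / m0                 # normalized first temporal moment
print(m1)                                    # mean tracer arrival time (days)
```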
Cellular signaling identifiability analysis: a case study.
Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo
2010-05-21
Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling, the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as yet, an under-appreciated issue in biological modeling and, more particularly, cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to some acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
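A minimal sketch of the Monte Carlo idea, assuming the intact-rock form of the criterion, σ1 = σ3 + σc·sqrt(m·σ3/σc + 1), with invented "true" parameters and noise level: repeated fits of small noisy triaxial data sets show the spread of the (σc, m) estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(s3, sc, m):
    """Intact-rock Hoek-Brown criterion: s1 = s3 + sc*sqrt(m*s3/sc + 1)."""
    return s3 + sc * np.sqrt(m * s3 / sc + 1.0)

rng = np.random.default_rng(6)
sc_true, m_true = 100.0, 15.0            # assumed "true" parameters (MPa, -)
s3 = np.linspace(0, 40, 6)               # a small triaxial data set: 6 samples

est = []
for _ in range(1000):                    # Monte Carlo over repeated small samples
    s1 = hoek_brown(s3, sc_true, m_true) + rng.normal(scale=10.0, size=s3.size)
    p, _ = curve_fit(hoek_brown, s3, s1, p0=[80.0, 10.0],
                     bounds=([1.0, 1.0], [500.0, 50.0]))
    est.append(p)
print(np.percentile(est, [5, 95], axis=0))   # wide intervals from small samples
```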
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require an accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard of the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative method is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from the MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks.
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-07-06
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org.
Erguler, Kamil; Stumpf, Michael P H
2011-05-01
The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures
Moore, Brian R.; Höhna, Sebastian; May, Michael R.; Rannala, Bruce; Huelsenbeck, John P.
2016-01-01
Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM.
Determination of power system component parameters using nonlinear dead beat estimation method
NASA Astrophysics Data System (ADS)
Kolluru, Lakshmi
Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever-increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as they deal with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information on all measurements. The accurate availability of dynamic measurements provides engineers the opportunity to expand and explore various possibilities in power system dynamic analysis/control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered to be very effective as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of the dead beat estimator together with nonlinear finite difference methods to create an algorithm which is user friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are introduced in the virtual test systems and the dynamic data obtained in each case are analyzed and recorded. Ideally, actual measurements would be provided to the algorithm; as such measurements are not readily available, the data obtained from the simulations are fed into the determination algorithm as inputs. The obtained results are then compared to the original (or assumed) values of the parameters. The results obtained suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data are available.
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
NASA Technical Reports Server (NTRS)
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
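A much-reduced illustration of the simulation comparison (AR-only, without the moving-average and neural-network parts): least squares recovers the coefficients of a simulated AR(2) process from its output record.

```python
import numpy as np

rng = np.random.default_rng(7)
n, a1, a2 = 2000, 0.6, -0.2
y = np.zeros(n)
for k in range(2, n):                   # simulate an AR(2) process
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + rng.normal()

# Least-squares estimate of the AR coefficients from the simulated record
X = np.column_stack([y[1:-1], y[:-2]])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(coef)                             # approximately (0.6, -0.2)
```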
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
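For context, the classical (non-Bayesian) NCA point estimate that the posterior is compared against is the trapezoidal AUC to the last sample plus the terminal extrapolation Clast/λz; the concentration-time values below are invented.

```python
import numpy as np

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24.0])              # h
c = np.array([12.0, 10.1, 8.2, 5.9, 3.1, 0.9, 0.3, 0.03])    # ng/mL

auc_last = np.trapz(c, t)                        # linear trapezoidal AUC(0-tlast)
lz = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]   # terminal slope (log-linear fit)
auc_inf = auc_last + c[-1] / lz                  # extrapolate the tail to infinity
print(auc_last, auc_inf)
```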
White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J
2002-01-01
A mathematical model for the transmission of two interacting classes of mastitis causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, where half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as the infectious agent and consequently the effects of control are intrinsic in the model structure. A structural identifiability analysis of the model is presented demonstrating that the scope of the novel method used allows application to high order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work has only estimated a subset of these either simultaneously or individually. Therefore not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.
A program to form a multidisciplinary data base and analysis for dynamic systems
NASA Technical Reports Server (NTRS)
Taylor, L. W.; Suit, W. T.; Mayo, M. H.
1984-01-01
Diverse sets of experimental data and analysis programs have been assembled for the purpose of facilitating research in systems identification, parameter estimation and state estimation techniques. The data base analysis programs are organized to make it easy to compare alternative approaches. Additional data and alternative forms of analysis will be included as they become available.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used to model the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data: the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
Analysis options for estimating status and trends in long-term monitoring
Bart, Jonathan; Beyer, Hawthorne L.
2012-01-01
This chapter describes methods for estimating long-term trends in ecological parameters. Other chapters in this volume discuss more advanced methods for analyzing monitoring data, but these methods may be relatively inaccessible to some readers. Therefore, this chapter provides an introduction to trend analysis for managers and biologists while also discussing general issues relevant to trend assessment in any long-term monitoring program. For simplicity, we focus on temporal trends in population size across years. We refer to the survey results for each year as the “annual means” (e.g. mean per transect, per plot, per time period). The methods apply with little or no modification, however, to formal estimates of population size, other temporal units (e.g. a month), to spatial or other dimensions such as elevation or a north–south gradient, and to other quantities such as chemical or geological parameters. The chapter primarily discusses methods for estimating population-wide parameters rather than studying variation in trend within the population, which can be examined using methods presented in other chapters (e.g. Chapters 7, 12, 20). We begin by reviewing key concepts related to trend analysis. We then describe how to evaluate potential bias in trend estimates. An overview of the statistical models used to quantify trends is then presented. We conclude by showing ways to estimate trends using simple methods that can be implemented with spreadsheets.
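The simplest estimator in this spirit, a least-squares linear trend fitted to the annual means (numbers invented), takes a few lines:

```python
import numpy as np
from scipy import stats

years = np.arange(2000, 2013)
means = np.array([41, 44, 40, 38, 39, 36, 37, 33, 35, 31, 32, 30, 29.0])

# Linear trend in the annual means, with its standard error and p-value
res = stats.linregress(years, means)
print(f"trend = {res.slope:.2f} +/- {res.stderr:.2f} per year, p = {res.pvalue:.4f}")
```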
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as for nonlinear scaling effects on model parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them, using scaling functions, to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both the functional forms and the geophysical predictors. TFs used to estimate the parameters of hydrologic models have typically relied on previous studies or were derived in an ad-hoc, heuristic manner, potentially not utilizing the maximum information content contained in the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover the relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which, and to what extent, geophysical attributes are related to model parameters. We perform a multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis using various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
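A schematic sketch of the MPR computation under stated assumptions: a hypothetical linear transfer function maps a fine-scale attribute to a parameter field, and a harmonic-mean scaling function aggregates it to the model grid. Calibration adjusts the two global coefficients, never the field itself.

```python
import numpy as np

rng = np.random.default_rng(8)
sand = rng.uniform(0.1, 0.9, size=(100, 100))   # fine-scale sand-fraction map

# Hypothetical transfer function: parameter = a + b * attribute
a, b = 50.0, 120.0                # global TF coefficients (what MPR calibrates)
param_fine = a + b * sand

# Scaling function: harmonic mean over 10x10 blocks -> model-resolution grid
blocks = param_fine.reshape(10, 10, 10, 10).swapaxes(1, 2).reshape(10, 10, 100)
param_model = blocks.shape[-1] / (1.0 / blocks).sum(axis=-1)
print(param_model.shape)          # (10, 10) model-grid parameter field
```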
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BABA,T.; ISHIGURO,K.; ISHIHARA,Y.
1999-08-30
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a FEatures, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference and many alternative cases representing various groups of FEPs were defined and individual numerical simulations performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters as well as a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
Dynamic modeling of lactic acid fermentation metabolism with Lactococcus lactis.
Oh, Euhlim; Lu, Mingshou; Park, Changhun; Park, Changhun; Oh, Han Bin; Lee, Sang Yup; Lee, Jinwon
2011-02-01
A dynamic model of lactic acid fermentation using Lactococcus lactis was constructed, and a metabolic flux analysis (MFA) and metabolic control analysis (MCA) were performed to develop a deeper metabolic understanding of lactic acid bacteria (LAB). The parameter estimation was conducted with the COPASI software to construct a more accurate metabolic model. The experimental data used in the parameter estimation were obtained from an LC-MS/MS analysis and a time-course simulation study. The MFA results gave a reasonable explanation of the experimental data. Through the parameter estimation, the metabolic system of lactic acid bacteria can be understood more thoroughly through comparisons with the original parameters. The coefficients derived from the MCA indicated that the reaction rate of L-lactate dehydrogenase was activated by fructose 1,6-bisphosphate and pyruvate, and pyruvate appeared to be a stronger activator of L-lactate dehydrogenase than fructose 1,6-bisphosphate. Additionally, pyruvate acted as an inhibitor of pyruvate kinase and the phosphotransferase system. Glucose 6-phosphate and phosphoenolpyruvate showed activation effects on pyruvate kinase. The hexose transporter was the strongest effector on the flux through L-lactate dehydrogenase. The concentration control coefficients (CCC) showed similar results to the flux control coefficients (FCC).
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from limited unsteady pumping test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
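The underlying relation is the Theis solution s = Q/(4πT)·W(u) with u = r²S/(4Tt), where W is the exponential integral. The sketch below simply least-squares fits T and S to invented drawdown data; the paper's method instead uses a fitted approximation of W(u) that reduces the problem to a unitary linear regression.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q, r = 0.02, 50.0                        # pumping rate (m^3/s) and distance (m)

def theis(t, T, S):
    """Theis drawdown: s = Q/(4*pi*T) * W(u), u = r^2*S/(4*T*t)."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

rng = np.random.default_rng(9)
t = np.logspace(1, 5, 30)                # observation times (s)
s_obs = theis(t, 5e-3, 2e-4) + rng.normal(scale=0.002, size=t.size)

(T_est, S_est), _ = curve_fit(theis, t, s_obs, p0=[1e-3, 1e-4],
                              bounds=([1e-5, 1e-6], [1.0, 1e-2]))
print(T_est, S_est)                      # transmissivity and storativity
```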
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure and guidance for model improvement.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
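For orientation, the at-site maximum likelihood baseline that the hierarchical Bayesian model improves on can be sketched as follows (synthetic annual maxima; note SciPy's shape convention c = -ξ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Synthetic annual-maximum series for one short-record gauge
ams = stats.genextreme.rvs(c=-0.1, loc=100, scale=30, size=35, random_state=rng)

# At-site GEV fit by maximum likelihood
c, loc, scale = stats.genextreme.fit(ams)

# 100-year flood quantile: the 0.99 non-exceedance quantile of the fitted GEV
print(stats.genextreme.ppf(0.99, c, loc=loc, scale=scale))
```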
Parameter identification for structural dynamics based on interval analysis algorithm
NASA Astrophysics Data System (ADS)
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated by using the central difference method and an ARMA system. With the help of the fixed memory least squares method and the matrix inverse lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters in a tight and accurate region. To overcome the lack of a sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, this algorithm can obtain not only the center estimates of the parameters, but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving algorithm is presented using a recursive formula. At last, to verify the accuracy of the proposed method, two numerical examples are applied and evaluated by three identification criteria respectively.
Preference heterogeneity in a count data model of demand for off-highway vehicle recreation
Thomas P Holmes; Jeffrey E Englin
2010-01-01
This paper examines heterogeneity in the preferences for OHV recreation by applying the random parameters Poisson model to a data set of off-highway vehicle (OHV) users at four National Forest sites in North Carolina. The analysis develops estimates of individual consumer surplus and finds that estimates are systematically affected by the random parameter specification...
Andrew D. Richardson; Mathew Williams; David Y. Hollinger; David J.P. Moore; D. Bryan Dail; Eric A. Davidson; Neal A. Scott; Robert S. Evans; Holly. Hughes
2010-01-01
We conducted an inverse modeling analysis, using a variety of data streams (tower-based eddy covariance measurements of net ecosystem exchange, NEE, of CO2, chamber-based measurements of soil respiration, and ancillary ecological measurements of leaf area index, litterfall, and woody biomass increment) to estimate parameters and initial carbon (C...
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II and in maintaining all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
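A hedged sketch of Levenberg-Marquardt least-squares fitting in Python (not the authors' MATLAB procedure, which alternates Gauss-Newton and Levenberg-Marquardt steps), on an invented two-exponential decay model:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)
t = np.linspace(0, 180, 25)              # min; FSIGTT-like sampling grid

def model(p, t):                         # hypothetical two-exponential decay
    return p[0] * np.exp(-p[1] * t) + p[2] * np.exp(-p[3] * t)

p_true = np.array([300.0, 0.15, 80.0, 0.01])
y = model(p_true, t) * (1 + rng.normal(scale=0.03, size=t.size))

# Levenberg-Marquardt on relative residuals; Gauss-Newton is the zero-damping
# limit of the same normal equations
res = least_squares(lambda p: (model(p, t) - y) / y,
                    x0=[100.0, 0.1, 50.0, 0.05], method="lm")
print(res.x)                             # recovers p_true approximately
```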
Estimating short-period dynamics using an extended Kalman filter
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Andrisani, Dominick
1990-01-01
An extended Kalman filter (EKF) is used to estimate the parameters of a low-order model from aircraft transient response data. The low-order model is a state space model derived from the short-period approximation of the longitudinal aircraft dynamics. The model corresponds to the pitch rate to stick force transfer function currently used in flying qualities analysis. Because of the model chosen, handling qualities information is also obtained. The parameters are estimated from flight data as well as from a six-degree-of-freedom, nonlinear simulation of the aircraft. These two estimates are then compared and the discrepancies noted. The low-order model is able to satisfactorily match both flight data and simulation data from a high-order computer simulation. The parameters obtained from the EKF analysis of flight data are compared to those obtained using frequency response analysis of the flight data. Time delays and damping ratios are compared and are in agreement. This technique demonstrates the potential to determine, in near real time, the extent of differences between computer models and the actual aircraft. Precise knowledge of these differences can help to determine the flying qualities of a test aircraft and lead to more efficient envelope expansion.
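A minimal EKF parameter-estimation sketch in the same spirit, far simpler than the short-period model: a first-order scalar system with its unknown coefficient appended to the state vector (all numbers invented).

```python
import numpy as np

rng = np.random.default_rng(12)
a_true, n = 0.95, 300
y = np.zeros(n)
for k in range(1, n):                    # first-order stand-in for the dynamics
    y[k] = a_true * y[k - 1] + rng.normal(scale=0.1)
z = y + rng.normal(scale=0.05, size=n)   # noisy measurements

# EKF with the unknown parameter appended to the state: x = [y, a]
x = np.array([0.0, 0.5])                 # poor initial guess a = 0.5
P = np.eye(2)
Q = np.diag([0.01, 1e-6])
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for k in range(1, n):
    F = np.array([[x[1], x[0]], [0.0, 1.0]])  # Jacobian of f(x) = [a*y, a]
    x = np.array([x[1] * x[0], x[1]])         # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                       # update
    K = P @ H.T / S
    x = x + (K * (z[k] - x[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
print(x[1])                              # estimate converges towards a_true
```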
PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.
Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael
2015-07-28
Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the Mexico City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics of the sensitivity analysis were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value or precision, negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men); blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex were detected in the averages of all MetS components in young people without alterations. Young Mexicans are highly prone to acquiring MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of all five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as the sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
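The five statistics listed reduce to confusion-matrix arithmetic; a minimal sketch (the labels below are synthetic, not the study's data):

```python
import numpy as np

def screening_stats(component_positive, mets_positive):
    """Sensitivity, specificity, PPV, NPV and accuracy of a single
    component taken as a predictor of MetS itself."""
    c = np.asarray(component_positive, dtype=bool)
    m = np.asarray(mets_positive, dtype=bool)
    tp = np.sum(c & m); tn = np.sum(~c & ~m)
    fp = np.sum(c & ~m); fn = np.sum(~c & m)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# toy example with the reported ~13.4% prevalence; labels are simulated
rng = np.random.default_rng(1)
mets = rng.random(1000) < 0.134
high_waist = np.where(mets, rng.random(1000) < 0.95,   # sensitive component...
                      rng.random(1000) < 0.30)         # ...with imperfect specificity
print(screening_stats(high_waist, mets))
```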
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
NASA Astrophysics Data System (ADS)
Yong, Kilyuk; Jo, Sujang; Bang, Hyochoong
This paper presents a modified Rodrigues parameter (MRP)-based nonlinear observer design to estimate bias, scale factor and misalignment of gyroscope measurements. A Lyapunov stability analysis is carried out for the nonlinear observer. Simulation is performed and results are presented illustrating the performance of the proposed nonlinear observer under the condition of persistent excitation maneuver. In addition, a comparison between the nonlinear observer and alignment Kalman filter (AKF) is made to highlight favorable features of the nonlinear observer.
Inference regarding multiple structural changes in linear models with endogenous regressors
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
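The described procedure (linearize the exponential model in a Taylor series about nominal parameter values, solve for a correction, and repeat until a predetermined criterion is met) can be sketched for a single-term decay model y = a*exp(-b*t); the data and starting values below are illustrative:

```python
import numpy as np

def gauss_newton_exp(t, y, a0, b0, n_iter=20):
    """Fit y = a*exp(-b*t) by iterating the linearized (Taylor-series)
    least-squares correction."""
    p = np.array([a0, b0], dtype=float)               # nominal estimates
    for _ in range(n_iter):
        a, b = p
        f = a * np.exp(-b * t)
        J = np.column_stack([np.exp(-b * t),          # df/da
                             -a * t * np.exp(-b * t)])  # df/db
        # correction from the normal equations (the "correction matrix")
        dp, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        p += dp
        if np.linalg.norm(dp) < 1e-10:                # predetermined criterion
            break
    return p

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 50)
y = 3.0 * np.exp(-1.2 * t) + rng.normal(scale=0.02, size=t.size)
print(gauss_newton_exp(t, y, a0=1.0, b0=0.5))         # approaches [3.0, 1.2]
```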
DOE Office of Scientific and Technical Information (OSTI.GOV)
JaeHwa Koh; DuckJoo Yoon; Chang H. Oh
2010-07-01
An electrolyzer model for the analysis of a hydrogen-production system using a solid oxide electrolysis cell (SOEC) has been developed, and the effects of the principal parameters have been estimated by sensitivity studies based on the developed model. The main parameters considered are current density, area specific resistance, temperature, pressure, and the molar fractions and flow rates in the inlet and outlet. Finally, a simple model for a high-temperature hydrogen-production system using the solid oxide electrolysis cell integrated with very high temperature reactors is evaluated.
Knopman, Debra S.; Voss, Clifford I.
1987-01-01
The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that the apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter estimates minimise the bias caused by the non-stationary characteristics of the MT data.
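As a rough illustration of the Hilbert step of the HHT (a full analysis would first decompose the record into intrinsic mode functions via empirical mode decomposition, e.g. with the PyEMD package), SciPy's analytic-signal machinery yields the instantaneous amplitude and frequency from which such time-frequency spectra are built; the synthetic chirp below stands in for a single IMF:

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
imf = np.cos(2 * np.pi * (3 + 0.2 * t) * t)  # frequency drifts upward from 3 Hz

analytic = hilbert(imf)                      # analytic signal
inst_amp = np.abs(analytic)                  # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.gradient(inst_phase, 1 / fs) / (2 * np.pi)  # instantaneous frequency

mid = len(t) // 2
print(f"t = {t[mid]:.1f} s: amplitude {inst_amp[mid]:.2f}, "
      f"frequency {inst_freq[mid]:.2f} Hz")   # about 5 Hz at t = 5 s
```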
NASA Astrophysics Data System (ADS)
Duan, Chaowei; Zhan, Yafeng
2016-03-01
The output characteristics of a linear monostable system driven with a periodic signal and an additive white Gaussian noise are studied in this paper. Theoretical analysis shows that the output signal-to-noise ratio (SNR) decreases monotonically with increasing noise intensity, but the output SNR gain is stable. Inspired by this high SNR-gain phenomenon, this paper applies the linear monostable system in a parameter estimation algorithm for phase shift keying (PSK) signals and improves the estimation performance.
Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing
2009-06-01
In this study, weighted non-linear least-squares analysis and an accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to express the differences in storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. By minimizing the objective function with a real-coding-based accelerating genetic algorithm, the kinetic parameters for substrate consumption and storage product formation are estimated to be: a maximum heterotrophic growth rate of 0.121/h, a yield coefficient of 0.44 mg CODX/mg CODS (COD, chemical oxygen demand), and a substrate half-saturation constant of 16.9 mg/L. Also, the fraction of substrate electrons diverted to storage product formation is estimated to be 0.43 mg CODSTO/mg CODS. The validity of our approach is confirmed by the results of independent tests and by kinetic parameter values reported in the literature, suggesting that this approach could be useful for evaluating the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach can estimate the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
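A sketch of the same idea with generic tools: a weighted sum-of-squared-errors objective minimized by an evolutionary optimizer. SciPy's differential evolution stands in for the paper's real-coded accelerating genetic algorithm, and the toy storage-formation model and data are illustrative assumptions, not the activated-sludge kinetics:

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 10, 40)
s0 = 100.0                                       # initial substrate, arbitrary units

def storage_model(p, t):
    """Toy stand-in for the storage product formation equation."""
    mu_max, ks, y_sto = p
    s = s0 * np.exp(-mu_max * t)                 # crude substrate decay
    return y_sto * (s0 - s) * s / (ks + s + 1e-9)

rng = np.random.default_rng(3)
p_true = (0.12, 17.0, 0.43)
x_obs = storage_model(p_true, t) + rng.normal(scale=0.5, size=t.size)
w = 1.0 / np.maximum(np.abs(x_obs), 1.0)         # weights for the residuals

def objective(p):
    # sum of squared weighted errors, as in the abstract
    return np.sum((w * (x_obs - storage_model(p, t))) ** 2)

res = differential_evolution(objective,
                             bounds=[(0.01, 1.0), (1.0, 50.0), (0.0, 1.0)],
                             seed=0)
print(res.x)                                     # close to p_true
```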
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is achieved by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation and shows that the proposed method allows one to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
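Of the three information metrics, relative entropy is the most compact to illustrate: under Gaussian approximations fitted to the prior and posterior parameter ensembles it has a closed form. A sketch with synthetic ensembles (not the paper's flow problem):

```python
import numpy as np

def gaussian_relative_entropy(post, prior):
    """KL divergence D(post || prior) between Gaussian fits to two
    parameter ensembles (rows = ensemble members, cols = parameters)."""
    m1, m0 = post.mean(0), prior.mean(0)
    c1 = np.cov(post, rowvar=False)
    c0 = np.cov(prior, rowvar=False)
    k = len(m1)
    c0_inv = np.linalg.inv(c0)
    d = m1 - m0
    return 0.5 * (np.trace(c0_inv @ c1) + d @ c0_inv @ d - k
                  + np.log(np.linalg.det(c0) / np.linalg.det(c1)))

rng = np.random.default_rng(4)
prior = rng.normal(0.0, 1.0, size=(200, 2))   # EnKF prior ensemble (synthetic)
post = rng.normal(0.3, 0.5, size=(200, 2))    # shifted, tighter posterior
print(gaussian_relative_entropy(post, prior))  # larger = more informative design
```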
Sedaqatvand, Ramin; Nasr Esfahany, Mohsen; Behzad, Tayebeh; Mohseni, Madjid; Mardanpour, Mohammad Mahdi
2013-10-01
In this study, for the first time, the conduction-based model is extended and then combined with a genetic algorithm to estimate the design parameters of a MFC treating dairy wastewater. The optimized parameters are then validated. The estimated half-saturation potential of -0.13 V (vs. SHE) is in good agreement, while the biofilm conductivity of 8.76×10^-4 mS cm^-1 is three orders of magnitude lower than that previously reported for pure-culture biofilm. Simulations show that the ohmic and concentration overpotentials contribute almost equally to dropping the cell voltage, with the concentration film and the biofilm conductivity comprising the main resistances, respectively. Thus, polarization analysis and determination of the controlling steps become possible through the developed extension. This study introduces a reliable method to estimate the design parameters of a particular MFC and to characterize it. Copyright © 2013 Elsevier Ltd. All rights reserved.
Estimation of Dynamical Parameters in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark O.
2004-01-01
In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multiple trace gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
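The simplest instance of the structure tensor technique is pure advection of a one-dimensional signal: brightness constancy gives fx*v + ft = 0, so the eigenvector belonging to the smallest eigenvalue of the averaged gradient tensor yields the velocity. A synthetic sketch (not the atmospheric data of the study):

```python
import numpy as np

# f(x, t) = Gaussian blob translating at v_true pixels per frame
nx, nt, v_true = 256, 64, 1.5
x = np.arange(nx)
f = np.array([np.exp(-0.005 * (x - 60 - v_true * k) ** 2) for k in range(nt)])

ft, fx = np.gradient(f)                    # derivatives along t (rows) and x (cols)
J = np.array([[np.sum(fx * fx), np.sum(fx * ft)],
              [np.sum(ft * fx), np.sum(ft * ft)]])   # averaged structure tensor

w, V = np.linalg.eigh(J)                   # eigenvalues in ascending order
e = V[:, 0]                                # eigenvector of the smallest eigenvalue
# e is parallel to (v, 1), since fx*v + ft = 0 along the motion
print(f"estimated velocity: {e[0] / e[1]:.3f} px/frame (true {v_true})")
```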
Tchebichef moment based restoration of Gaussian blurred images.
Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C
2016-11-10
With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
For this work, an analysis of parameter estimation for the retention factor in GC model was performed, considering two different criteria: sum of square error, and maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of an initialization scheme (specialized) for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in significant reduction to the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The obtained reduction happened between one and two orders of magnitude, compared with the simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
Lo, Kam W
2016-05-01
The acoustic signal emitted by a turbo-prop aircraft consists of a strong narrowband tone superimposed on a broadband random component. A ground-based three-element planar acoustic array can be used to estimate the full set of flight parameters of a turbo-prop aircraft in transit by measuring the time delay (TD) between the signal received at the reference sensor and the signal received at each of the other two sensors of the array over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the reference sensor to improve the precision of the flight parameter estimates. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the aircraft velocity and altitude can be greatly reduced when IF measurements are used together with TD measurements. Two flight parameter estimation algorithms that utilize both IF and TD measurements are formulated and their performances are evaluated using both simulated and real data.
NASA Technical Reports Server (NTRS)
Shantaram, S. Pai; Gyekenyesi, John P.
1989-01-01
The calculation of the shape and scale parameters of the two-parameter Weibull distribution is described using least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. The report also shows how to calculate the Batdorf flaw-density constants using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
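For complete (uncensored) samples, the maximum likelihood part of such a procedure is available off the shelf; a sketch using SciPy on synthetic strength data (the censoring, outlier tests and confidence bands implemented in SCARE are beyond this fragment):

```python
import numpy as np
from scipy import stats

# Two-parameter Weibull MLE for a complete fracture-strength sample.
rng = np.random.default_rng(5)
strengths = stats.weibull_min.rvs(c=10.0, scale=400.0, size=30,
                                  random_state=rng)     # synthetic MPa data

# Fix the location at zero to get the two-parameter form.
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
print(f"shape (Weibull modulus) m = {shape:.2f}, scale = {scale:.1f} MPa")

# Kolmogorov-Smirnov goodness-of-fit statistic for the fitted distribution
ks = stats.kstest(strengths, "weibull_min", args=(shape, 0, scale))
print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```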
The Market Responses to the Government Regulation of Chlorinated Solvents: A Policy Analysis
1988-10-01
in the process of statistical estimation of model parameters. The results of the estimation process applied to chlorinated solvent markets show the... policy context for this research. Section III provides analysis necessary to understand the chemicals involved, their production processes and costs, and
NASA Astrophysics Data System (ADS)
George, N. J.; Akpan, A. E.; Akpan, F. S.
2017-12-01
An integrated study combining information from an extensive surface resistivity survey in three Local Government Areas of Akwa Ibom State, Nigeria with hydrogeological data from water boreholes was carried out to estimate, economically, porosity and the coefficient of permeability/hydraulic conductivity in parts of the clastic Tertiary-Quaternary sediments of the Niger Delta region. Generally, these parameters are estimated from laboratory analysis of core samples and from pumping test data generated from boreholes. However, such analysis is not only costly and time consuming, but also limited in areal coverage. The technique employed here uses surface resistivity data, core samples and pumping test data to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be more elevated near the riverbanks. Empirical models based on the Archie, Waxman-Smits and Kozeny-Carman-Bear relations were employed to characterise the formation parameters, with good fits. The effect of surface conduction caused by clay, usually ignored in Archie's model, was estimated to be 2.58 × 10^-5 Siemens; this conductance can be used as a corrective factor to the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models and maps, geared towards realistic conclusions and the interrelationship between porosity and the other aquifer parameters, were generated. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10^-5 m/s everywhere, indicating no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities varied. The deduced parameter relations can be used to estimate geohydraulic parameters in other locations with little or no borehole data.
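The Archie step of such a workflow, converting bulk formation resistivity to porosity, is compact enough to sketch; the tortuosity factor and cementation exponent below are generic textbook values, not the calibrated ones from this study:

```python
import numpy as np

def archie_porosity(rt, rw, a=1.0, m=2.0):
    """Porosity from bulk resistivity rt and pore-water resistivity rw via
    Archie's law F = rt/rw = a * phi**(-m).  The tortuosity factor a and
    cementation exponent m are illustrative defaults; surface conduction
    by clay (handled via Waxman-Smits in the study) is neglected here."""
    return (a * rw / rt) ** (1.0 / m)

rt = np.array([120.0, 260.0, 540.0])    # ohm-m, hypothetical sounding-derived values
rw = 20.0                               # ohm-m, hypothetical pore-water resistivity
print(archie_porosity(rt, rw))          # porosity fraction per station
```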
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ^2-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
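A minimal fitting session with the package looks roughly like this (the CSV file name and condition column are hypothetical, and the exact API may vary between HDDM versions):

```python
import hddm

# Data are expected in HDDM's long format, with 'rt' (seconds) and
# 'response' (0/1) columns; 'mydata.csv' and the 'stim' column are
# hypothetical placeholders.
data = hddm.load_csv('mydata.csv')

model = hddm.HDDM(data, depends_on={'v': 'stim'})  # drift rate varies by condition
model.sample(2000, burn=200)                       # hierarchical MCMC sampling
model.print_stats()                                # posterior summaries per parameter
```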
Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A
2012-02-01
The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling and on response from genetic selection when the true environmental covariance deviates from zero were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while genetic variance and heritability of the continuous trait together with were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values of the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations, when data were subject to culling. © 2011 Blackwell Verlag GmbH.
LAGEOS geodetic analysis-SL7.1
NASA Technical Reports Server (NTRS)
Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Klosko, S. M.; Robbins, J. W.; Torrence, M. H.; Williamson, R. G.; Pavlis, E. C.; Douglas, N. B.; Fricke, S. K.
1991-01-01
Laser ranging measurements to the LAGEOS satellite from 1976 through 1989 are related via geodetic and orbital theories to a variety of geodetic and geodynamic parameters. The SL7.1 analyses of this data set are explained, including the estimation process for geodetic parameters such as Earth's gravitational constant (GM), those describing the Earth's elasticity properties (Love numbers), and the temporally varying geodetic parameters such as Earth's orientation (polar motion and Delta UT1) and tracking site horizontal tectonic motions. Descriptions of the reference systems, tectonic models, and adopted geodetic constants are provided; these are the framework within which the SL7.1 solution takes place. Estimates of temporal variations in non-conservative force parameters are included in these SL7.1 analyses as well as parameters describing the orbital states at monthly epochs. This information is useful in further refining models used to describe close-Earth satellite behavior. Estimates of intersite motions and individual tracking site motions computed through the network adjustment scheme are given. Tabulations of tracking site eccentricities, data summaries, estimated monthly orbital and force model parameters, polar motion, Earth rotation, and tracking station coordinate results are also provided.
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated by the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length until the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow has been validated on 29 trees of different species sampling a stretch of road 750 m long in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.
2014-01-01
As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low maturity technology research and development, where the Technology Readiness Level is less than TRL 6. NASA's Technology Roadmap comprises 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This report continues the technology estimating efforts completed during 2013-2014 and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to the Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Scheduling Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2003-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing when detailed structural and aerodynamic data are not available. Effects of changes in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely the Regier number and the Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is given for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are also provided.
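The weighted least squares estimator in question has the familiar closed form b = (X'WX)^(-1) X'Wy; a minimal numpy sketch with heteroscedastic synthetic data:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least-squares estimate and its covariance, for weights
    w_i = 1 / Var(e_i)."""
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    return beta, np.linalg.inv(XtWX)   # covariance is exact if w_i = 1/sigma_i^2

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 40)
X = np.column_stack([np.ones_like(x), x])
sigma = 0.2 + 0.1 * x                   # error variance grows with x
y = 1.0 + 2.0 * x + rng.normal(scale=sigma)

beta, cov = wls(X, y, 1.0 / sigma**2)
print(beta)                             # close to [1.0, 2.0]
print(np.sqrt(np.diag(cov)))            # standard errors of the estimates
```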
Introduction and application of the multiscale coefficient of variation analysis.
Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh
2017-10-01
Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
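The MSCV definition translates almost directly into code; the non-overlapping windowing and the absolute-distance convention below are assumptions of this sketch, not necessarily the authors' exact MATLAB implementation:

```python
import numpy as np

def mscv(x, window_sizes):
    """Multiscale coefficient of variation: for each window size, the
    mean absolute distance between windowed CV estimates and the CV of
    the whole series."""
    x = np.asarray(x, dtype=float)
    cv_global = x.std() / x.mean()
    out = {}
    for w in window_sizes:
        n = len(x) // w
        segs = x[: n * w].reshape(n, w)            # non-overlapping windows
        cv_local = segs.std(axis=1) / segs.mean(axis=1)
        out[w] = np.mean(np.abs(cv_local - cv_global))
    return out

rng = np.random.default_rng(7)
series = rng.lognormal(mean=0.0, sigma=0.3, size=512)   # toy positive-valued series
print(mscv(series, window_sizes=[8, 16, 32, 64]))
```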
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
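The core of single-batch maximum-likelihood covariance tuning fits in a few lines: given one batch of innovations with variance s_f + r, where the forecast-error variance s_f is taken as known, pick the observation-error variance r that maximizes the Gaussian likelihood. All numbers below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Innovations v = y - H x_f are modeled as zero-mean Gaussian with
# variance s_f + r; estimate r from one batch by maximum likelihood.
rng = np.random.default_rng(8)
s_f, r_true, n = 0.5, 0.8, 500
v = rng.normal(scale=np.sqrt(s_f + r_true), size=n)   # synthetic innovations

def neg_log_lik(r):
    s = s_f + r
    return 0.5 * n * np.log(2 * np.pi * s) + 0.5 * np.sum(v**2) / s

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print(f"r_hat = {res.x:.3f} (true {r_true})")
```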
Astrophysical properties of star clusters in the Magellanic Clouds homogeneously estimated by ASteCA
NASA Astrophysics Data System (ADS)
Perren, G. I.; Piatti, A. E.; Vázquez, R. A.
2017-06-01
Aims: We seek to produce a homogeneous catalog of astrophysical parameters of 239 resolved star clusters, located in the Small and Large Magellanic Clouds, observed in the Washington photometric system. Methods: The cluster sample was processed with the recently introduced Automated Stellar Cluster Analysis (ASteCA) package, which ensures both an automatized and a fully reproducible treatment, together with a statistically based analysis of their fundamental parameters and associated uncertainties. The fundamental parameters determined for each cluster with this tool, via a color-magnitude diagram (CMD) analysis, are metallicity, age, reddening, distance modulus, and total mass. Results: We generated a homogeneous catalog of structural and fundamental parameters for the studied cluster sample and performed a detailed internal error analysis along with a thorough comparison with values taken from 26 published articles. We studied the distribution of cluster fundamental parameters in both Clouds and obtained their age-metallicity relationships. Conclusions: The ASteCA package can be applied to an unsupervised determination of fundamental cluster parameters, which is a task of increasing relevance as more data becomes available through upcoming surveys. A table with the estimated fundamental parameters for the 239 clusters analyzed is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/602/A89
Astrophysical parameters of open star clusters using 2MASS JHKs data
NASA Astrophysics Data System (ADS)
Durgapal, Alok; Bisht, Devendra; Yadav, Ramakant Singh
2018-04-01
In the present analysis we have estimated the fundamental parameters of two poorly studied open star clusters, namely Teutsch 61 and Czernik 3, using 2MASS JHKs data. We have used the color-magnitude and colour-colour diagrams to determine their fundamental parameters.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu
2014-03-01
We investigate a method to estimate spectral images of the reduced scattering coefficients and absorption coefficients of in vivo exposed brain tissue in the range from visible to near-infrared wavelengths (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression μs' = aλ^-b are estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters, and finally, the spectral images of the absorption and reduced scattering coefficients are estimated. The estimated images of absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of reduced scattering coefficients showed a broad scattering spectrum, exhibiting larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. In vivo experiments with the exposed brain of rats during cortical spreading depression (CSD) confirmed the ability of the method to evaluate both hemodynamics and changes in tissue morphology due to electrical depolarization.
Michael J. Firko; Jane Leslie Hayes
1990-01-01
Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
A Functional Varying-Coefficient Single-Index Model for Functional Response Data
Li, Jialiang; Huang, Chao; Zhu, Hongtu
2016-01-01
Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540
A Functional Varying-Coefficient Single-Index Model for Functional Response Data.
Li, Jialiang; Huang, Chao; Zhu, Hongtu
2017-01-01
Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer's Disease Neuroimaging Initiative (ADNI) study.
NASA Astrophysics Data System (ADS)
Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano
2015-04-01
The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and on our ability to represent the dynamics of an incoming eruption. Forecasts help governments reduce the risks associated with volcanic eruptions, and for this reason different kinds of analysis are needed to understand the effect that each input parameter has on model outputs. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We vary the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient, perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted some hours. The sensitivity analysis and uncertainty estimation results help us identify the measurements that volcanologists should perform during a volcanic crisis to reduce model uncertainty.
Estimation of groundwater recharge parameters by time series analysis
Naff, Richard L.; Gutjahr, Allan L.
1983-01-01
A model is proposed that relates water level fluctuations in a Dupuit aquifer to effective precipitation at the top of the unsaturated zone. Effective precipitation, defined herein as that portion of precipitation which becomes recharge, is related to precipitation measured in a nearby gage by a two-parameter function. A second-order stationarity assumption is used to connect the spectra of effective precipitation and water level fluctuations. Measured precipitation is assumed to be Gaussian in order to develop a transfer function that relates the spectra of measured and effective precipitation. A nonlinear least squares technique is proposed for estimating the parameters of the effective-precipitation function. Although sensitivity analyses indicate difficulties that may be encountered in the estimation procedure, the methods developed did yield convergent estimates for two case studies.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
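BIC-based selection of this kind reduces to penalising each candidate model's maximised log-likelihood by k ln n. A toy illustration follows; the log-likelihood values below are hypothetical, and the parameter counts assume k intensities plus k(k-1) transition rates for a k-state Markov modulated Poisson process:

```python
import numpy as np

# BIC = -2 ln L + k ln n, where k is the parameter count and n the
# number of observed photons.
def bic(log_likelihood, n_params, n_obs):
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

n_photons = 2000
# states -> (maximised log-likelihood [hypothetical], parameter count)
models = {2: (-10412.3, 2 + 2), 3: (-10391.8, 3 + 6), 4: (-10389.9, 4 + 12)}
scores = {k: bic(ll, p, n_photons) for k, (ll, p) in models.items()}
best = min(scores, key=scores.get)
print(scores, "-> selected:", best, "states")
```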
Direct estimation of tidally induced Earth rotation variations observed by VLBI
NASA Astrophysics Data System (ADS)
Englich, S.; Heinkelmann, R.; Böhm, J.; Schuh, H.
2009-09-01
The subject of our study is the investigation of periodical variations induced by solid Earth tides and ocean tides in Earth rotation parameters (ERP: polar motion, UT1) observed by VLBI. There are two strategies to determine the amplitudes and phases of Earth rotation variations from observations of space geodetic techniques. The common way is to derive time series of Earth rotation parameters first and to estimate amplitudes and phases in a second step. Results obtained by this means were shown in previous studies for zonal tidal variations (Englich et al., 2008a) and variations caused by ocean tides (Englich et al., 2008b). The alternative method is to estimate the tidal parameters directly within the VLBI data analysis procedure together with other parameters such as station coordinates, tropospheric delays, clocks etc. The purpose of this work was the application of this direct method to a combined VLBI data analysis using the software packages OCCAM (Version 6.1, Gauss-Markov model) and DOGSCS (Gerstl et al., 2001). The theoretical basis and the preparatory steps for the implementation of this approach are presented here.
qPIPSA: Relating enzymatic kinetic parameters and interaction fields
Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C
2007-01-01
Background: The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results: We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion: Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
Parameter identification for nonlinear aerodynamic systems
NASA Technical Reports Server (NTRS)
Pearson, Allan E.
1990-01-01
Parameter identification for nonlinear aerodynamic systems is examined. It is presumed that the underlying model can be arranged into an input/output (I/O) differential operator equation of a generic form. The estimation algorithm is especially efficient since the equation error can be integrated exactly given any I/O pair to obtain an algebraic function of the parameters. The algorithm for parameter identification was extended to the order determination problem for linear differential systems. The degeneracy in a least squares estimate caused by feedback was addressed. A method of frequency analysis for determining the transfer function G(j omega) from transient I/O data was formulated using complex valued Fourier based modulating functions, in contrast with the trigonometric modulating functions used for the parameter estimation problem. A simulation result of applying the algorithm is given under noise-free conditions for a system with a low pass transfer function.
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
Beef quality parameters estimation using ultrasound and color images
2015-01-01
Background: Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic quality parameters estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal: An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions: The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
Analysis of multinomial models with unknown index using data augmentation
Royle, J. Andrew; Dorazio, R.M.; Link, W.A.
2007-01-01
Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
Westgate, Philip M
2013-07-20
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
Practical identifiability analysis of a minimal cardiovascular system model.
Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas
2017-01-17
Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased as much as possible. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model, from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction and use of larger values for the reference range of venous pressure made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems that involve latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude-scale model yield data in the form of scores, which must be transformed into scale data before analysis. Path coefficients, which are parameter estimates, are calculated from scale data obtained using the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better: the transformation method that produces scaled data yielding path coefficients (parameter estimates) with smaller variance is said to be more efficient. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency (ER) = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. For simulation data with high correlation between items (0.7-0.9), on the other hand, the MSI method is 1.3 times more efficient than the SRS method.
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R2 value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and the BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using the optimized BFImax and filter parameter resulted in an R2 value of 0.66 and a Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
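The Eckhardt filter referenced above is a two-parameter recursive digital filter. A minimal sketch follows; the initialisation and the cap on baseflow are common conventions and not necessarily WHAT's exact implementation:

```python
import numpy as np

def eckhardt_baseflow(q, bfi_max=0.80, a=0.98):
    """Eckhardt (2005) recursive digital filter.
    q: streamflow series; bfi_max: maximum baseflow index;
    a: recession (filter) parameter."""
    b = np.zeros_like(q, dtype=float)
    b[0] = q[0] * bfi_max          # simple initialisation assumption
    for t in range(1, len(q)):
        b[t] = ((1 - bfi_max) * a * b[t - 1]
                + (1 - a) * bfi_max * q[t]) / (1 - a * bfi_max)
        b[t] = min(b[t], q[t])     # baseflow cannot exceed streamflow
    return b

q = np.array([5.0, 20.0, 55.0, 30.0, 15.0, 9.0, 7.0, 6.0])
base = eckhardt_baseflow(q, bfi_max=0.491, a=0.987)  # optimised LEC values
print("direct runoff:", q - base)
```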
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated into user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables; common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. LHS also required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool with a variety of sound probabilistic methods, and the new LHS module is a valuable enhancement of the program.
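As a minimal illustration of why LHS can beat plain MC at equal sample size, the sketch below uses scipy's qmc module; the toy response function is an assumption, not one of the SAE test cases:

```python
import numpy as np
from scipy.stats import qmc, norm

# Compare Monte Carlo and Latin hypercube estimates of the mean of a
# nonlinear response g(x) of two standard-normal design variables.
def g(x):
    return x[:, 0] ** 2 + 3.0 * np.sin(x[:, 1])

rng = np.random.default_rng(1)
n = 256

x_mc = rng.standard_normal((n, 2))          # plain Monte Carlo
sampler = qmc.LatinHypercube(d=2, seed=1)
x_lhs = norm.ppf(sampler.random(n))         # LHS mapped to normal space

print("MC mean: ", g(x_mc).mean())
print("LHS mean:", g(x_lhs).mean())
```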
Reliable evaluation of the quantal determinants of synaptic efficacy using Bayesian analysis
Beato, M.
2013-01-01
Communication between neurones in the central nervous system depends on synaptic transmission. The efficacy of synapses is determined by pre- and postsynaptic factors that can be characterized using quantal parameters such as the probability of neurotransmitter release, number of release sites, and quantal size. Existing methods of estimating the quantal parameters based on multiple probability fluctuation analysis (MPFA) are limited by their requirement for long recordings to acquire substantial data sets. We therefore devised an algorithm, termed Bayesian Quantal Analysis (BQA), that can yield accurate estimates of the quantal parameters from data sets of as small a size as 60 observations for each of only 2 conditions of release probability. Computer simulations are used to compare its performance in accuracy with that of MPFA, while varying the number of observations and the simulated range in release probability. We challenge BQA with realistic complexities characteristic of complex synapses, such as increases in the intra- or intersite variances, and heterogeneity in release probabilities. Finally, we validate the method using experimental data obtained from electrophysiological recordings to show that the effect of an antagonist on postsynaptic receptors is correctly characterized by BQA by a specific reduction in the estimates of quantal size. Since BQA routinely yields reliable estimates of the quantal parameters from small data sets, it is ideally suited to identify the locus of synaptic plasticity for experiments in which repeated manipulations of the recording environment are unfeasible. PMID:23076101
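A heavily simplified, grid-based sketch of the idea behind Bayesian quantal inference follows. This is not the authors' BQA algorithm: a flat prior, a single release-probability condition, and known Gaussian noise are assumed, with response amplitudes modelled as k·q plus noise, where k is a Binomial(N, p) count of released quanta:

```python
import numpy as np
from scipy.stats import binom, norm

rng = np.random.default_rng(2)
N_true, p_true, q_true, sigma = 5, 0.4, 10.0, 2.0
amps = binom.rvs(N_true, p_true, size=80, random_state=rng) * q_true \
       + rng.normal(0, sigma, 80)

def log_lik(amps, N, p, q):
    k = np.arange(N + 1)
    w = binom.pmf(k, N, p)                          # mixture weights
    dens = norm.pdf(amps[:, None], k * q, sigma) @ w
    return np.log(dens).sum()

# Flat prior over a coarse grid -> posterior mode = likelihood maximum.
grid = [(N, p, q) for N in range(2, 9)
        for p in np.linspace(0.1, 0.9, 17)
        for q in np.linspace(5.0, 15.0, 21)]
post = np.array([log_lik(amps, *g) for g in grid])
print("MAP (N, p, q):", grid[int(np.argmax(post))])
```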
NASA Astrophysics Data System (ADS)
Moret-Fernández, D.; Latorre, B.
2017-01-01
The water retention curve (θ(h)), which defines the relationship between the volumetric water content (θ) and the matric potential (h), is of paramount importance to characterize the hydraulic behaviour of soils. Because current methods to estimate θ(h) are, in general, tedious and time consuming, alternative procedures to determine θ(h) are needed. Using an upward infiltration curve, the main objective of this work is to present a method to determine the parameters of the van Genuchten (1980) water retention curve (α and n) from the sorptivity (S) and the β parameter defined in the 1D infiltration equation proposed by Haverkamp et al. (1994). The first specific objective is to present an equation, based on the Haverkamp et al. (1994) analysis, which allows describing an upward infiltration process. Second, assuming a known saturated hydraulic conductivity, Ks, calculated on a finite soil column by Darcy's law, a numerical procedure to calculate S and β by the inverse analysis of an exfiltration curve is presented. Finally, the α and n values are numerically calculated from Ks, S and β. To accomplish the first specific objective, cumulative upward infiltration curves simulated with HYDRUS-1D for sand, loam, silt and clay soils were compared to those calculated with the proposed equation, after applying the corresponding β and S calculated from the theoretical Ks, α and n. The same curves were used to: (i) study the influence of the exfiltration time on S and β estimations, (ii) evaluate the limits of the inverse analysis, and (iii) validate the feasibility of the method to estimate α and n. Next, the θ(h) parameters estimated with the numerical method on experimental soils were compared to those obtained with pressure cells. The results showed that the upward infiltration curve could be correctly described by the modified Haverkamp et al. (1994) equation. While S was only affected by early-time exfiltration data, the β parameter had a significant influence on the long-time exfiltration curve, whose accuracy increased with time. The 1D infiltration model was only suitable for β < 1.7 (sand, loam and silt). After omitting the clay soil, an excellent relationship (R2 = 0.99, p < 0.005) was observed between the theoretical α and n values of the synthetic soils and those estimated from the inverse analysis. Consistent results, with a significant relationship (p < 0.001) between the n values estimated with the pressure cell and the upward infiltration analysis, were also obtained on the experimental soils.
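For reference, the van Genuchten curve at the centre of this procedure is θ(h) = θr + (θs − θr)[1 + (α|h|)^n]^(−m) with m = 1 − 1/n. The sketch below fits α and n directly to hypothetical retention data with scipy, rather than via the authors' inverse analysis of an upward infiltration curve:

```python
import numpy as np
from scipy.optimize import curve_fit

# van Genuchten (1980) retention curve with the Mualem constraint m = 1 - 1/n.
def theta(h, alpha, n, theta_r=0.05, theta_s=0.45):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Hypothetical pressure-cell observations (h in cm, theta in cm3/cm3).
h_obs = np.array([10., 30., 100., 300., 1000., 3000., 10000.])
t_obs = theta(h_obs, alpha=0.02, n=1.6) \
        + np.random.default_rng(3).normal(0, 0.005, 7)

# With p0 of length 2, only alpha and n are fitted; theta_r, theta_s are fixed.
(alpha_hat, n_hat), _ = curve_fit(theta, h_obs, t_obs, p0=[0.01, 1.3],
                                  bounds=([1e-4, 1.01], [1.0, 10.0]))
print(f"alpha = {alpha_hat:.4f} 1/cm, n = {n_hat:.3f}")
```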
Noise elimination algorithm for modal analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, X. X., E-mail: baoxingxian@upc.edu.cn; Li, C. L.; Xiong, C. B.
2015-07-27
Modal analysis is an ongoing interdisciplinary physical issue. Modal parameter estimation is applied to determine the dynamic characteristics of structures under vibration excitation. Modal analysis is more challenging when the measured vibration response signals are contaminated with noise. This study develops a mathematical algorithm of structured low rank approximation combined with the complex exponential method to estimate the modal parameters. Physical experiments using a steel cantilever beam with ten accelerometers mounted, excited by an impulse load, demonstrate that this method can significantly eliminate noise from measured signals and accurately identify the modal frequencies and damping ratios. This study provides a fundamental mechanism of noise elimination using structured low rank approximation in physical fields.
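Structured low-rank approximation for signal denoising is often implemented as Cadzow iteration: embed the signal in a Hankel matrix, truncate its SVD to the model order, re-impose the Hankel structure by anti-diagonal averaging, and repeat. A self-contained numpy sketch under those assumptions (one decaying mode, so rank 2), not necessarily the authors' exact algorithm:

```python
import numpy as np
from scipy.linalg import hankel, svd

def cadzow_denoise(y, rank, n_iter=20):
    """Alternate rank truncation of the signal's Hankel matrix with
    anti-diagonal averaging to restore Hankel structure."""
    n = len(y)
    L = n // 2
    x = y.copy()
    for _ in range(n_iter):
        H = hankel(x[:L], x[L - 1:])
        U, s, Vt = svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank truncation
        # anti-diagonals of H are diagonals of the row-flipped matrix
        x = np.array([np.mean(H[::-1].diagonal(k))
                      for k in range(-L + 1, H.shape[1])])
    return x

t = np.linspace(0, 1, 400)
clean = np.exp(-2 * t) * np.cos(2 * np.pi * 12 * t)    # one decaying mode
noisy = clean + 0.2 * np.random.default_rng(4).standard_normal(t.size)
denoised = cadzow_denoise(noisy, rank=2)               # 2 = one complex pair
print("error std before/after:",
      np.std(noisy - clean), np.std(denoised - clean))
```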
Structural reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.
1991-01-01
For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
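As a quick sketch of the final step, estimating three Weibull parameters from a failure population, scipy's weibull_min can fit shape, threshold (loc) and scale by maximum likelihood. Note the paper itself uses a different (nonlinear regression) estimator, and the data below are synthetic:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical failure-strength data; loc plays the role of the threshold
# (third) parameter of the three-parameter Weibull distribution.
rng = np.random.default_rng(5)
strengths = weibull_min.rvs(c=8.0, loc=150.0, scale=400.0,
                            size=50, random_state=rng)

shape, loc, scale = weibull_min.fit(strengths)   # MLE for all three parameters
print(f"shape m = {shape:.2f}, threshold = {loc:.1f} MPa, "
      f"scale = {scale:.1f} MPa")
```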
The J3 SCR model applied to resonant converter simulation
NASA Technical Reports Server (NTRS)
Avant, R. L.; Lee, F. C. Y.
1985-01-01
The J3 SCR model is a continuous topology computer model for the SCR. Its circuit analog and parameter estimation procedure are uniformly applicable to popular computer-aided design and analysis programs such as SPICE2 and SCEPTRE. The circuit analog is based on the intrinsic three pn junction structure of the SCR. The parameter estimation procedure requires only manufacturer's specification sheet quantities as a data base.
Signal recognition and parameter estimation of BPSK-LFM combined modulation
NASA Astrophysics Data System (ADS)
Long, Chao; Zhang, Lin; Liu, Yu
2015-07-01
Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature abstraction focuses on primary parameters such as instantaneous frequency, modulation, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on a decision theoretic approach is studied. The simulation results show a good recognition effect and high estimation precision, and the system is easy to realize.
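One standard building block for such intra-pulse analysis is estimating the instantaneous frequency from the phase of the analytic signal. The sketch below applies it to a synthetic BPSK-LFM pulse; the waveform parameters are illustrative, and this is not the paper's decision-theoretic recogniser:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1e6                                       # sample rate, Hz
t = np.arange(0, 1e-3, 1 / fs)
# BPSK code overlaid on a linear FM chirp (50 kHz start, 100 MHz/s rate)
code = np.repeat(np.array([1, -1, 1, 1, -1]), t.size // 5 + 1)[:t.size]
signal = code * np.cos(2 * np.pi * (50e3 * t + 0.5 * 100e6 * t**2))

# Instantaneous frequency = d(phase)/dt of the analytic signal / 2*pi;
# phase jumps at BPSK chip boundaries show up as IF spikes.
phase = np.unwrap(np.angle(hilbert(signal)))
inst_freq = np.diff(phase) / (2 * np.pi) * fs
print("IF range (Hz):", inst_freq.min(), inst_freq.max())
```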
Sudhanshu Panda; Devendra Amatya; Young Kim; Ge Sun
2016-01-01
Evapotranspiration (ET) is one of the most important hydrologic parameters for vegetation growth, carbon sequestration, and other associated biodiversity study and analysis. Plant stomatal conductance, leaf area index, canopy temperature, soil moisture, and wind speed values generally correlate well with ET. It is difficult to estimate these hydrologic parameters of...
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which is the foundation for quantifying the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signals. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. Investigation is done on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
Hyperspectral signature analysis of skin parameters
NASA Astrophysics Data System (ADS)
Vyas, Saurabh; Banerjee, Amit; Garza, Luis; Kang, Sewon; Burlina, Philippe
2013-02-01
The temporal analysis of changes in biological skin parameters, including melanosome concentration, collagen concentration and blood oxygenation, may serve as a valuable tool in diagnosing the progression of malignant skin cancers and in understanding the pathophysiology of cancerous tumors. Quantitative knowledge of these parameters can also be useful in applications such as wound assessment, and point-of-care diagnostics, amongst others. We propose an approach to estimate in vivo skin parameters using a forward computational model based on Kubelka-Munk theory and the Fresnel Equations. We use this model to map the skin parameters to their corresponding hyperspectral signature. We then use machine learning based regression to develop an inverse map from hyperspectral signatures to skin parameters. In particular, we employ support vector machine based regression to estimate the in vivo skin parameters given their corresponding hyperspectral signature. We build on our work from SPIE 2012, and validate our methodology on an in vivo dataset. This dataset consists of 241 signatures collected from in vivo hyperspectral imaging of patients of both genders and Caucasian, Asian and African American ethnicities. In addition, we also extend our methodology past the visible region and through the short-wave infrared region of the electromagnetic spectrum. We find promising results when comparing the estimated skin parameters to the ground truth, demonstrating good agreement with well-established physiological precepts. This methodology can have potential use in non-invasive skin anomaly detection and for developing minimally invasive pre-screening tools.
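A minimal sketch of the regression step (support vector regression from signatures to parameters) using scikit-learn; the forward model and data here are synthetic stand-ins for the Kubelka-Munk simulations and in vivo signatures described above:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(14)
n_samples, n_bands = 241, 60
# Three hypothetical parameters: melanosome, collagen, blood oxygenation.
params = rng.random((n_samples, 3))
W = rng.standard_normal((3, n_bands))
signatures = np.tanh(params @ W) \
             + 0.01 * rng.standard_normal((n_samples, n_bands))

# Inverse map: hyperspectral signature -> skin parameters.
model = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(C=10.0)))
model.fit(signatures[:200], params[:200])
pred = model.predict(signatures[200:])
print("mean abs error per parameter:", np.abs(pred - params[200:]).mean(axis=0))
```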
Massive data compression for parameter-dependent covariance matrices
NASA Astrophysics Data System (ADS)
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
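The core of MOPED is a set of weight vectors, one per parameter, that compress the full data vector while preserving its Fisher information. A one-parameter numpy sketch, with an arbitrary assumed covariance and mean derivative:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 500
C = np.diag(0.1 + rng.random(N))          # noise covariance (assumed known)
mu_deriv = np.sin(np.linspace(0, 3, N))   # d(mean)/d(theta), hypothetical model

# MOPED weight vector: b = C^{-1} mu_deriv, normalised so that b^T C b = 1.
Cinv_mu = np.linalg.solve(C, mu_deriv)
b = Cinv_mu / np.sqrt(mu_deriv @ Cinv_mu)

data = rng.multivariate_normal(np.zeros(N), C)
y = b @ data                               # single compressed datum
print("compressed value:", y, "Fisher info:", mu_deriv @ Cinv_mu)
```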
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz; Babak, Stanislav; Krolak, Andrzej
We present data analysis methods used in the detection and estimation of parameters of gravitational-wave signals from the white dwarf binaries in the mock LISA data challenge. Our main focus is on the analysis of challenge 3.1, where the gravitational-wave signals from more than 6×10^7 Galactic binaries were added to the simulated Gaussian instrumental noise. The majority of the signals at low frequencies are not resolved individually. The confusion between the signals is strongly reduced at frequencies above 5 mHz. Our basic data analysis procedure is the maximum likelihood detection method. We filter the data through the template bank at the first step of the search, then we refine parameters using the Nelder-Mead algorithm, we remove the strongest signal found, and we repeat the procedure. We reliably detect, and accurately estimate the parameters of, more than ten thousand signals from white dwarf binaries.
Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.
2016-01-01
There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.
Lo, Kam W
2017-03-01
When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA
Fosdick, Bailey K.; Hoff, Peter D.
2014-01-01
Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
Bae, Youngoh; Yoo, Byeong Wook; Lee, Jung Chan; Kim, Hee Chan
2017-05-01
Detection and diagnosis based on feature extraction and classification using electroencephalography (EEG) signals are being studied vigorously. Network analysis of time series EEG data is one of many techniques that could help study brain function. In this study, we analyze EEG to diagnose alcoholism. We propose a novel methodology to estimate differences in brain status between normal subjects and alcoholics by computing parameters of the effective network obtained using Granger causality. Among many parameters, ten were chosen as final candidates. By combining these ten graph-based parameters, our results demonstrate predictable differences between alcoholics and normal subjects. The best-performing support vector machine classifier achieved 90% accuracy, with a sensitivity of 95.3% and a specificity of 82.4%, in differentiating between the two groups.
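Pairwise Granger tests, from which such effective-network edges can be built, are available in statsmodels. A small two-channel sketch on synthetic signals (not EEG); real pipelines would run this over all electrode pairs to form the network:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):                       # y driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + 0.3 * rng.standard_normal()

# Column order is (effect, cause): tests whether x Granger-causes y.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
pvals = {lag: r[0]["ssr_ftest"][1] for lag, r in res.items()}
print("p-values by lag:", pvals)
```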
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity, manipulated at the within and between levels of a two-level confirmatory factor analysis, by Monte Carlo simulation. Furthermore, the influence on the convergence rate of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors, or Bayesian estimation) is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms. Obtaining them is a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of estimating stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as modeling, parameter estimation, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear model with six degrees of freedom and a linear aerodynamic model. Time domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. The Matlab scripts for parameter estimation obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A substantial part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone, due to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical analysis based on the vortex method. In addition, a simulation model incorporating the estimated parameters is used for comparison with the measured states. Finally, a good correspondence between the results is demonstrated despite a limited number of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value, the maximum magnitude at which the Gutenberg-Richter law applies, mmax, and the mean recurrence frequency, λ, of earthquakes. If the "Parametric-Historic procedure" is used instead of the Cornell-McGuire procedure, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has a functional form analogous to the frequency-magnitude law, described by the parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; however, the method was recently extended to incorporate more complex forms of GMPEs. With regard to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one, and there is much controversy surrounding this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude, mmin, above which the catalogue is complete becomes important. Thus, the parameter mmin must also be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred; methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used, and for that matter which ones are not, is in order.
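For the b-value specifically, the standard maximum likelihood estimator (Aki 1965, with Utsu's correction for magnitudes binned to width Δm) is b = log10(e) / (m̄ − (mmin − Δm/2)). A short sketch on a synthetic catalogue:

```python
import numpy as np

def b_value_aki_utsu(mags, m_min, dm=0.1):
    """Aki (1965) maximum likelihood b-value with Utsu's binning
    correction; uses only events with magnitude >= m_min."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue: exponential magnitudes above
# the completeness magnitude, rounded to 0.1 bins.
rng = np.random.default_rng(8)
b_true, m_min = 1.0, 3.0
mags = np.round(m_min + rng.exponential(np.log10(np.e) / b_true, 5000), 1)
print("b-value estimate:", b_value_aki_utsu(mags, m_min))
```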
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
NASA Astrophysics Data System (ADS)
Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati
2018-05-01
The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), the seismic moment (M0 = 1.12×10^17 Nm, Mw 5.3), the focal mechanism (φ = 280°, δ = 14°, λ = 84°), the source radius (a = 1.3 km), and the static stress drop (Δσs ≈ 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^-2 source model) by attenuation parameters Q(f) = 500f^0.9, κ = 0.04 s, and fmax = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ Mw ≤ 6.9). The estimated stress drop of the six events ranges from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
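The spectral model behind these estimates can be sketched directly: an ω^-2 (Brune) source spectrum shaped by path attenuation with Q(f) = 500f^0.9 and a site kappa of 0.04 s. The constants and geometry below follow common stochastic-method conventions and are assumptions, not values taken from the paper:

```python
import numpy as np

def accel_spectrum(f, M0, stress_drop, R, beta=3.5e3, rho=2800.0,
                   Q0=500.0, eta=0.9, kappa=0.04):
    """Far-field S-wave acceleration spectrum: omega-squared source,
    frequency-dependent Q path attenuation, and kappa site term.
    SI units: M0 in N*m, stress_drop in Pa, R in m, beta in m/s."""
    fc = 0.49 * beta * (stress_drop / M0) ** (1.0 / 3.0)   # Brune corner freq
    C = (2 * 0.55 * 0.707) / (4 * np.pi * rho * beta**3)   # free surface,
    # average radiation pattern and energy partition constants (Boore-style)
    source = (2 * np.pi * f) ** 2 * M0 / (1 + (f / fc) ** 2)
    path = np.exp(-np.pi * f * R / (Q0 * f**eta * beta)) / R
    site = np.exp(-np.pi * kappa * f)
    return C * source * path * site

f = np.logspace(-1, 1.5, 100)
A = accel_spectrum(f, M0=1.12e17, stress_drop=70e6, R=20e3)
print("peak of spectral acceleration proxy:", A.max())
```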
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis package (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and which has the advantages of simplicity, speed of calculation, and convergence over conventional MCMC. NEVA also offers confidence intervals and uncertainty bounds for estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization, and visualization, explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive for users across different fields. Both the stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
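As a point of reference for the stationary case that NEVA generalises, a GEV fit and a 100-year return level can be obtained in a few lines with scipy (synthetic annual maxima; NEVA itself uses Bayesian DE-MC sampling and allows the parameters to vary with time):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(9)
annual_max = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0,
                            size=60, random_state=rng)

# MLE fit of the GEV, then the T-year return level from its quantile.
shape, loc, scale = genextreme.fit(annual_max)
rl_100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"100-year return level: {rl_100:.1f}")
```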
Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm
NASA Astrophysics Data System (ADS)
Ottersten, B. E.; Viberg, M.; Kailath, T.
1989-11-01
This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
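A compact numpy sketch of the TLS-ESPRIT estimator analysed above, for a uniform linear array with half-wavelength spacing (array size, source angles and noise level are arbitrary choices): the invariance operator is obtained from the SVD of the stacked signal-subspace blocks as Psi = -V12 V22^(-1), and the phases of its eigenvalues give the arrival angles.

```python
import numpy as np

rng = np.random.default_rng(10)
m, n_src, snaps = 8, 2, 2000
doas = np.deg2rad([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))  # steering
S = (rng.standard_normal((n_src, snaps))
     + 1j * rng.standard_normal((n_src, snaps)))
X = A @ S + 0.1 * (rng.standard_normal((m, snaps))
                   + 1j * rng.standard_normal((m, snaps)))

R = X @ X.conj().T / snaps
eigval, eigvec = np.linalg.eigh(R)
Es = eigvec[:, -n_src:]                     # signal subspace
E1, E2 = Es[:-1], Es[1:]                    # shift-invariant subarrays

_, _, Vh = np.linalg.svd(np.hstack([E1, E2]))
V = Vh.conj().T
V12, V22 = V[:n_src, n_src:], V[n_src:, n_src:]
Psi = -V12 @ np.linalg.inv(V22)             # TLS solution
phases = np.angle(np.linalg.eigvals(Psi))
print("DOA estimates (deg):", np.sort(np.rad2deg(np.arcsin(phases / np.pi))))
```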
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
A global sensitivity analysis, using the variance-based method of Sobol, was performed to estimate which parameters controlled the performance of the reduced-order coagulation model; the reported sensitivity value was the maximum uncertainty in that value estimated by the Sobol method.
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jin; Yu, Yaming; Van Dyk, David A.
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
Near real-time estimation of the seismic source parameters in a compressed domain
NASA Astrophysics Data System (ADS)
Rodriguez, Ismael A. Vera
Seismic events can be characterized by their origin time, location and moment tensor. Fast estimation of these source parameters is important in areas of geophysics like earthquake seismology and the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of origin time, location and moment tensor of seismic events. The proposed method has the benefits of being automatic, continuous and, depending on the scale of application, of providing results in real time or near real time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented with a resolvability analysis of the moment tensor. The analysis targets common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, along with an algorithm for improved automatic time-picking using non-linear inversion constraints.
Effect of microwave radiation on Jayadhar cotton fibers: WAXS studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niranjana, A. R., E-mail: arnphysics@gmail.com; Mahesh, S. S.; Divakara, S.
The thermal effect of microwave energy on Jayadhar cotton fibers has been investigated. Microstructural parameters have been estimated using wide-angle X-ray scattering (WAXS) data and a line profile analysis program developed by us. Physical properties like tensile strength are correlated with the X-ray results. We observe that microwave radiation does significantly affect many parameters, and we suggest a multivariate analysis of these parameters to arrive at a significant result.
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
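The bias mechanism described above is easy to reproduce in a toy setting. The snippet below is a deliberately simplified linear rainfall-runoff regression (not the USGS model from the paper) showing the classical attenuation of a least-squares slope when the input is observed with error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 10_000, 0.6                                  # true runoff coefficient
rain = rng.gamma(shape=2.0, scale=10.0, size=n)        # true rainfall (mm)
runoff = beta * rain + rng.normal(scale=2.0, size=n)   # true linear relation
rain_obs = rain + rng.normal(scale=8.0, size=n)        # erroneous input data

b_true = np.polyfit(rain, runoff, 1)[0]       # slope with true rainfall
b_err = np.polyfit(rain_obs, runoff, 1)[0]    # slope with erroneous rainfall
atten = np.var(rain) / (np.var(rain) + 8.0 ** 2)   # theoretical attenuation
print(b_true, b_err, beta * atten)   # b_err is biased toward zero
```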
Bayesian approach to the analysis of neutron Brillouin scattering data on liquid metals
NASA Astrophysics Data System (ADS)
De Francesco, A.; Guarini, E.; Bafile, U.; Formisano, F.; Scaccia, L.
2016-08-01
When the dynamics of liquids and disordered systems at the mesoscopic level is investigated by means of inelastic scattering (e.g., neutron or x-ray), spectra are often characterized by a poor definition of the excitation lines and spectroscopic features in general, and one important issue is to establish how many of these lines need to be included in the modeling function and to estimate their parameters. Furthermore, when strongly damped excitations are present, commonly used and widespread fitting algorithms are particularly affected by the choice of initial values of the parameters. An inadequate choice may lead to an inefficient exploration of the parameter space, resulting in the algorithm getting stuck in a local minimum. In this paper, we present a Bayesian approach to the analysis of neutron Brillouin scattering data in which the number of excitation lines is treated as unknown and estimated along with the other model parameters. We propose a joint estimation procedure based on a reversible-jump Markov chain Monte Carlo algorithm, which efficiently explores the parameter space, producing a probabilistic measure to quantify the uncertainty on the number of excitation lines as well as reliable parameter estimates. The method proposed could turn out to be of great importance in extracting physical information from experimental data, especially when the detection of spectral features is complicated not only because of the properties of the sample, but also because of the limited instrumental resolution and count statistics. The approach is tested on a generated data set and then applied to real experimental spectra of neutron Brillouin scattering from a liquid metal, previously analyzed in a more traditional way.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human living in financial, environmental and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters through the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, which then yields better predictions of maximum river flow in Sabah.
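A minimal version of the sampler described here can be sketched with scipy's GEV implementation. The random-walk Metropolis-Hastings fragment below assumes flat priors and placeholder step sizes, so it illustrates the general approach rather than the study's actual configuration.

```python
import numpy as np
from scipy.stats import genextreme

def log_post(theta, data):
    mu, log_sigma, xi = theta
    # Flat priors (an assumption of this sketch); scipy's shape is c = -xi
    lp = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma))
    return lp.sum() if np.all(np.isfinite(lp)) else -np.inf

def metropolis(data, n_iter=20_000, step=(0.5, 0.1, 0.05), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([data.mean(), np.log(data.std()), 0.1])  # crude start
    lp, chain = log_post(theta, data), []
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=step)      # random-walk proposal
        lp_prop = log_post(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance rule
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Synthetic annual maxima with assumed GEV parameters
data = genextreme.rvs(c=-0.2, loc=100, scale=20, size=50, random_state=1)
chain = metropolis(data)
print(chain[10_000:].mean(axis=0))   # posterior means (mu, log sigma, xi)
```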
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate the overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with regard to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data from completed NASA missions.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A
2009-01-05
Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform density function that is gender-specific. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the more current models used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. Comparisons of the new model to the accepted models were accomplished by determining the error between the models' trunk inertial estimates and that from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy for the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as the elderly or individuals of a distinct morphology (e.g., obese). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the estimators of the model's parameters. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
Earth Rotation Parameters from DSN VLBI: 1994
NASA Technical Reports Server (NTRS)
Steppe, J. A.; Oliveau, S. H.; Sovers, O. J.
1994-01-01
In this report, Earth Rotation Parameter (ERP) estimates have been obtained from an analysis of Deep Space Network (DSN) VLBI data that directly aligns its celestial and terrestrial reference frames with those of the International Earth Rotation Service (IERS).
Chapter 8: Demographic characteristics and population modeling
Scott H. Stoleson; Mary J. Whitfield; Mark K. Sogge
2000-01-01
An understanding of the basic demography of a species is necessary to estimate and evaluate population trends. The relative impact of different demographic parameters on growth rates can be assessed through a sensitivity analysis, in which different parameters are altered singly to assess the effect on population growth. Identification of critical parameters can allow...
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
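GLUE itself is simple enough to sketch. The fragment below is a minimal illustration, assuming uniform parameter sampling, a Nash-Sutcliffe efficiency as the informal likelihood, and a toy exponential "flux" model; the CSLM/PSUADE setup in the study is of course far more elaborate.

```python
import numpy as np

def glue(model, obs, bounds, n=20_000, threshold=0.5, seed=0):
    """Minimal GLUE sketch: uniform sampling, Nash-Sutcliffe efficiency
    as informal likelihood, behavioral sets above a threshold, and
    likelihood-weighted 5-95% prediction bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    thetas = rng.uniform(lo, hi, size=(n, lo.size))
    sims = np.array([model(th) for th in thetas])
    nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
    keep = nse > threshold                   # behavioral parameter sets
    w = nse[keep] / nse[keep].sum()          # likelihood weights
    lo_b, hi_b = np.empty(obs.size), np.empty(obs.size)
    for j in range(obs.size):
        idx = np.argsort(sims[keep][:, j])
        cdf = np.cumsum(w[idx])              # weighted CDF of simulations
        lo_b[j], hi_b[j] = np.interp([0.05, 0.95], cdf, sims[keep][idx, j])
    return thetas[keep], (lo_b, hi_b)

# Toy "flux" model with two hypothetical parameters
t = np.linspace(0.0, 10.0, 40)
obs = 5.0 * np.exp(-0.3 * t) + np.random.default_rng(1).normal(0.0, 0.2, t.size)
behavioral, (lower, upper) = glue(lambda th: th[0] * np.exp(-th[1] * t),
                                  obs, [(1.0, 10.0), (0.05, 1.0)])
```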
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Gyekenyesi, John P.
1988-01-01
The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
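Both estimation routes described in the report can be illustrated compactly. The sketch below, with assumed (hypothetical) strength data, obtains maximum likelihood estimates with scipy and least-squares estimates from a Weibull probability plot; it is a generic illustration, not the SCARE code.

```python
import numpy as np
from scipy.stats import weibull_min

# Simulated fracture strengths (MPa); shape and scale are assumed values
rng = np.random.default_rng(0)
strengths = weibull_min.rvs(c=8.0, scale=400.0, size=30, random_state=rng)

# Maximum likelihood estimates of the two-parameter Weibull (location = 0)
m_hat, _, s_hat = weibull_min.fit(strengths, floc=0)
print(f"ML shape (Weibull modulus) = {m_hat:.2f}, scale = {s_hat:.1f} MPa")

# Least-squares (probability-plot) estimates for comparison:
# ln(-ln(1-F)) = m*ln(x) - m*ln(s), so slope = m, intercept = -m*ln(s)
x = np.sort(strengths)
F = (np.arange(1, x.size + 1) - 0.5) / x.size     # plotting positions
slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1 - F)), 1)
print(f"LS shape = {slope:.2f}, scale = {np.exp(-intercept / slope):.1f} MPa")
```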
Time-dependent reliability analysis of ceramic engine components
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
WTAQ - A computer program for aquifer-test analysis of confined and unconfined aquifers
Barlow, P.M.; Moench, A.F.
2004-01-01
Computer program WTAQ was developed to implement a Laplace-transform analytical solution for axial-symmetric flow to a partially penetrating, finite-diameter well in a homogeneous and anisotropic unconfined (water-table) aquifer. The solution accounts for wellbore storage and skin effects at the pumped well, delayed response at an observation well, and delayed or instantaneous drainage from the unsaturated zone. For the particular case of zero drainage from the unsaturated zone, the solution simplifies to that of axial-symmetric flow in a confined aquifer. WTAQ calculates theoretical time-drawdown curves for the pumped well and observation wells and piezometers. The theoretical curves are used with measured time-drawdown data to estimate hydraulic parameters of confined or unconfined aquifers by graphical type-curve methods or by automatic parameter-estimation methods. Parameters that can be estimated are horizontal and vertical hydraulic conductivity, specific storage, and specific yield. A sample application illustrates use of WTAQ for estimating hydraulic parameters of a hypothetical, unconfined aquifer by type-curve methods. Copyright ASCE 2004.
McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal
2017-11-01
In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimate expected value of perfect information but not partial expected value of perfect information (owing to computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations on drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost effective at the submitted price. Drugs were excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
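A regression-based partial EVPI calculation of the general kind referred to here can be sketched briefly. The fragment below, with hypothetical net-benefit simulations and a crude running-mean smoother standing in for the nonparametric regression, illustrates the idea of estimating the conditional expected net benefit given one parameter and comparing the expected maximum with the maximum expectation; it is not the authors' implementation.

```python
import numpy as np

def evppi_regression(nb, theta, frac=0.2):
    """Partial EVPI for one parameter via nonparametric regression.
    nb: (n_sims, n_strategies) net benefits from probabilistic analysis;
    theta: (n_sims,) sampled values of the parameter of interest."""
    order = np.argsort(theta)
    fitted = np.empty_like(nb)
    k = max(int(frac * theta.size), 2)
    for j in range(nb.shape[1]):
        # crude running-mean smoother as the regression of NB on theta
        y = nb[order, j]
        fitted[order, j] = np.convolve(y, np.ones(k) / k, mode="same")
    # E_theta[max_d E(NB|theta)] minus max_d E(NB)
    return fitted.max(axis=1).mean() - nb.mean(axis=0).max()

rng = np.random.default_rng(0)
theta = rng.normal(size=5000)                        # influential parameter
nb = np.column_stack([1000 * theta + rng.normal(0, 500, 5000),
                      np.zeros(5000)])               # two strategies
print(evppi_regression(nb, theta))   # per-patient value, not yet scaled to a population
```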
Progress in Turbulence Detection via GNSS Occultation Data
NASA Technical Reports Server (NTRS)
Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.
2012-01-01
The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.
[Modern principles of the geriatric analysis in medicine].
Volobuev, A N; Zaharova, N O; Romanchuk, N P; Romanov, D V; Romanchuk, P I; Adyshirin-Zade, K A
2016-01-01
The proposed methodological principles of geriatric analysis in medicine make it possible to plan the economic parameters of social protection of the population and the necessary amount of medical care financing, and to define the structure of qualified medical personnel training. It is shown that a person's health and cognitive longevity depend on an adequate systemic geriatric analysis and on the monitoring of biological parameters over time, which allows the efficiency of combined individual treatment to be estimated. The geriatric analysis, and in particular its genetic-mathematical component, is aimed at a reliable and objective estimation of life expectancy in the country and in the region by accounting for the influence of mutagenic factors both on a person's genome during life and on the population as a whole.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
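Local sensitivity analysis of the kind applied here reduces to differentiating the model output with respect to each parameter. The sketch below computes normalized sensitivity coefficients by central finite differences for a toy occupancy expression with hypothetical binding constants K1, K2 and cooperativity w; it is a generic illustration, not the thermodynamic models from the paper.

```python
import numpy as np

def local_sensitivity(f, theta, rel_step=1e-3):
    """Normalized local sensitivities S_i = (theta_i / f) * df/dtheta_i
    by central finite differences (generic sketch)."""
    theta = np.asarray(theta, dtype=float)
    f0 = f(theta)
    S = np.empty_like(theta)
    for i in range(theta.size):
        h = rel_step * max(abs(theta[i]), 1e-12)
        up, dn = theta.copy(), theta.copy()
        up[i] += h
        dn[i] -= h
        S[i] = (f(up) - f(dn)) / (2 * h) * theta[i] / f0
    return S

# Toy thermodynamic-style occupancy with hypothetical parameters K1, K2, w
def occupancy(p):
    K1, K2, w = p
    return (K1 + K2 + 2 * w * K1 * K2) / (1 + K1 + K2 + w * K1 * K2)

print(local_sensitivity(occupancy, [0.5, 0.2, 5.0]))
```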
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on their fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
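The information-criterion weighting behind posterior model probabilities of this kind follows a standard formula, sketched below; the AICc values are hypothetical and the function is our illustration of the general formula, not MMA's code.

```python
import numpy as np

def model_probabilities(criteria):
    """Posterior model probabilities from information-criterion values
    (AIC, AICc, BIC, or KIC): p_i proportional to exp(-0.5 * delta_i),
    where delta_i is the difference from the best (smallest) criterion."""
    c = np.asarray(criteria, dtype=float)
    delta = c - c.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three alternative calibrated models with hypothetical AICc values
print(model_probabilities([412.3, 414.1, 420.8]))
```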
PyCoTools: A Python Toolbox for COPASI.
Welsh, Ciaran M; Fullard, Nicola; Proctor, Carole J; Martinez-Guimera, Alvaro; Isfort, Robert J; Bascom, Charles C; Tasseff, Ryan; Przyborski, Stefan A; Shanley, Daryl P
2018-05-22
COPASI is an open source software package for constructing, simulating and analysing dynamic models of biochemical networks. COPASI is primarily intended to be used with a graphical user interface but often it is desirable to be able to access COPASI features programmatically, with a high level interface. PyCoTools is a Python package aimed at providing a high level interface to COPASI tasks with an emphasis on model calibration. PyCoTools enables the construction of COPASI models and the execution of a subset of COPASI tasks including time courses, parameter scans and parameter estimations. Additional 'composite' tasks which use COPASI tasks as building blocks are available for increasing parameter estimation throughput, performing identifiability analysis and performing model selection. PyCoTools supports exploratory data analysis on parameter estimation data to assist with troubleshooting model calibrations. We demonstrate PyCoTools by posing a model selection problem designed to showcase PyCoTools within a realistic scenario. The aim of the model selection problem is to test the feasibility of three alternative hypotheses in explaining experimental data derived from neonatal dermal fibroblasts in response to TGF-β over time. PyCoTools is used to critically analyse the parameter estimations and propose strategies for model improvement. PyCoTools can be downloaded from the Python Package Index (PyPI) using the command 'pip install pycotools' or directly from GitHub (https://github.com/CiaranWelsh/pycotools). Documentation at http://pycotools.readthedocs.io. Supplementary data are available at Bioinformatics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.
2015-01-01
A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
PCAN: Probabilistic correlation analysis of two non-normal data sets.
Zoh, Roger S; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S; Lampe, Johanna W; Carroll, Raymond J
2016-12-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. © 2016, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
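The power law recession model in question is commonly written -dQ/dt = aQ^b. The sketch below fits it to a single synthetic event by linear regression in log space, one of the several fitting choices whose consequences the paper examines (nonlinear least squares on Q(t) directly is another).

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q^b to one recession event by regression in
    log space: log(-dQ/dt) = log(a) + b * log(Q)."""
    dqdt = np.diff(q) / dt
    qmid = 0.5 * (q[1:] + q[:-1])     # midpoint discharge for each step
    mask = dqdt < 0                    # keep strictly receding steps
    b, log_a = np.polyfit(np.log(qmid[mask]), np.log(-dqdt[mask]), 1)
    return np.exp(log_a), b

# Synthetic event: exact solution of -dQ/dt = a*Q^b with a=0.1, b=1.5
t = np.arange(0.0, 30.0)
a_true, b_true, q0 = 0.1, 1.5, 20.0
q = (q0 ** (1 - b_true) + a_true * (b_true - 1) * t) ** (1 / (1 - b_true))
print(fit_recession(q))   # should recover roughly (0.1, 1.5)
```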
The use of generalized estimating equations in the analysis of motor vehicle crash data.
Hutchings, Caroline B; Knight, Stacey; Reading, James C
2003-01-01
The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.
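For readers who want to reproduce this kind of analysis, a GEE with one level of nesting can be fit in a few lines with statsmodels. The data below are hypothetical occupant-within-crash records, not the 1992 Utah dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical crash data: occupants nested within crashes
rng = np.random.default_rng(0)
rows = []
for c in range(500):
    crash_effect = rng.normal(scale=0.8)     # shared within-crash risk
    for _ in range(rng.integers(1, 5)):      # 1-4 occupants per crash
        belted = rng.integers(0, 2)
        logit = -0.5 - 1.0 * belted + crash_effect
        injured = rng.uniform() < 1 / (1 + np.exp(-logit))
        rows.append((c, belted, int(injured)))
df = pd.DataFrame(rows, columns=["crash_id", "belted", "injured"])

# GEE with exchangeable correlation within crashes (one level of nesting)
model = smf.gee("injured ~ belted", groups="crash_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print("seat belt OR:", np.exp(result.params["belted"]))
```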
NASA Astrophysics Data System (ADS)
Anderson, Christian Carl
This dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and anisotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete characterization of anisotropy. A novel piecewise linear model for the cyclic variation of ultrasonic backscatter from myocardium was proposed. Models of cyclic variation for 100 type 2 diabetes patients and 43 normal control subjects were constructed using Bayesian parameter estimation. Parameters determined from the model, specifically rise time and slew rate, were found to be more reliable in differentiating between subject groups than the previously employed magnitude parameter.
Bouhrara, Mustapha; Spencer, Richard G.
2015-01-01
Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high dimensional nature of mcDESPOT signal model, and, thereby, the high dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
Estimating synaptic parameters from mean, variance, and covariance in trains of synaptic responses.
Scheuss, V; Neher, E
2001-10-01
Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli.
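To make the moment relations that variance-mean analysis rests on concrete, here is a minimal Python sketch assuming the simplest binomial release model with a known quantal size and none of the complications (quantal variability, heterogeneous release probability, saturation, desensitization) that the authors treat explicitly; all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth for a simple binomial model of release:
N_sites, p_release, q_size = 10, 0.4, 1.0  # sites, release prob., quantal size

# Simulate an ensemble of responses (stationary case).
n_trials = 5000
responses = q_size * rng.binomial(N_sites, p_release, size=n_trials)

m = responses.mean()          # mean = N p q
v = responses.var(ddof=1)     # variance = N p (1 - p) q^2

# Moment estimators (q assumed known here):
p_hat = 1.0 - v / (q_size * m)    # from v/m = q (1 - p)
N_hat = m / (q_size * p_hat)      # from m = N p q

print(f"estimated p = {p_hat:.3f} (true {p_release})")
print(f"estimated N = {N_hat:.1f} (true {N_sites})")
```

In the non-stationary train setting described above, the same moments are computed per stimulus position across repeated trains, and the covariance between successive responses supplies the additional constraint on the number of release sites.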
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
Regionalized rainfall-runoff model to estimate low flow indices
NASA Astrophysics Data System (ADS)
Garcia, Florine; Folton, Nathalie; Oudin, Ludovic
2016-04-01
Estimating low flow indices is of paramount importance for water resource management and risk assessment. These indices are derived from river discharges measured at gauged stations. However, the lack of observations at ungauged sites brings the necessity of developing methods to estimate these low flow indices from observed discharges in neighboring catchments and from catchment characteristics. Different estimation methods exist. Regression or geostatistical methods performed on the low flow indices are the most common. Another, less common, method consists in regionalizing rainfall-runoff model parameters, from catchment characteristics or by spatial proximity, to estimate low flow indices from simulated hydrographs. Irstea developed GR2M-LoiEau, a conceptual monthly rainfall-runoff model, combined with a regionalized model of snow storage and melt. GR2M-LoiEau relies on only two parameters, which are regionalized and mapped throughout France. This model allows mapping of monthly reference low flow indices. The input data come from SAFRAN, the distributed mesoscale atmospheric analysis system, which provides daily solid and liquid precipitation and temperature data everywhere in the French territory. To exploit these data fully and to estimate daily low flow indices, a new version of GR-LoiEau has been developed at a daily time step. The aim of this work is to develop and regionalize a GR-LoiEau model that can provide daily, monthly, or annual estimates of low flow indices while keeping only a few parameters, a major advantage for regionalization. This work includes two parts. On the one hand, a daily conceptual rainfall-runoff model with only three parameters is developed to simulate daily and monthly low flow indices, mean annual runoff, and seasonality. On the other hand, different regionalization methods, based on spatial proximity and similarity, are tested to estimate the model parameters and to simulate low flow indices at ungauged sites. The analysis is carried out on 691 French catchments representative of various hydro-meteorological behaviors. The results are validated with a cross-validation procedure and compared with those obtained with GR4J, a conceptual rainfall-runoff model that already provides daily estimates but involves four parameters that cannot easily be regionalized.
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
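A minimal sketch of the empirical Bayes machinery described above, assuming a beta-binomial model and hypothetical counts (not the authors' data): the beta hyperparameters are fit by maximizing the marginal likelihood, and each species' rate is then shrunk toward the overall mean in proportion to its sample size.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Hypothetical parasitism data: (parasitized nests, total nests) per species.
k = np.array([1, 0, 5, 12, 2, 9, 3, 0])
n = np.array([10, 4, 25, 15, 30, 11, 8, 3])

# Estimate beta prior hyperparameters (a, b) by maximizing the marginal
# (beta-binomial) likelihood of the observed counts.
def neg_marginal_loglik(log_ab):
    a, b = np.exp(log_ab)            # optimize on the log scale for positivity
    return -betabinom.logpmf(k, n, a, b).sum()

res = minimize(neg_marginal_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)

# Empirical Bayes estimates: posterior means, shrunk toward the overall rate.
eb_rates = (k + a_hat) / (n + a_hat + b_hat)
raw_rates = k / n
for sp, (raw, eb) in enumerate(zip(raw_rates, eb_rates)):
    print(f"species {sp}: raw {raw:.2f} -> EB {eb:.2f}")
```

As the abstract notes, proportions based on few nests are pulled hardest toward the mean, so extreme values that survive the shrinkage are the ones with real evidential support.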
Chloramine demand estimation using surrogate chemical and microbiological parameters.
Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose
2017-07-01
A model is developed, via nonlinear regression analysis, to enable estimation of chloramine demand in full-scale drinking water supplies based on chemical and microbiological factors that affect the chloramine decay rate. The model is based on the organic character (specific ultraviolet absorbance, SUVA) of the water samples and a laboratory measure of the microbiological decay of chloramine (Fm). The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical comparison of the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that, within the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
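The fast/slow two-pathway idea can be illustrated with a generic two-exponential residual model fit by nonlinear regression; this is a hedged stand-in with invented data, not necessarily the exact model structure used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# One common two-pathway form: the residual is a fast pool plus a slow pool,
# each decaying first order. Illustrative only, not the authors' exact model.
def chloramine_residual(t, A_fast, k_fast, A_slow, k_slow):
    return A_fast * np.exp(-k_fast * t) + A_slow * np.exp(-k_slow * t)

# Hypothetical decay-test data: time (h) and chloramine residual (mg/L).
t = np.array([0, 2, 5, 10, 24, 48, 72, 120.0])
c = np.array([2.0, 1.82, 1.66, 1.52, 1.30, 1.11, 0.98, 0.78])

popt, _ = curve_fit(chloramine_residual, t, c,
                    p0=[0.5, 0.2, 1.5, 0.005], bounds=(0, np.inf))
A_f, k_f, A_s, k_s = popt
print(f"fast: A={A_f:.2f} mg/L, k={k_f:.3f} 1/h")
print(f"slow: A={A_s:.2f} mg/L, k={k_s:.4f} 1/h")

# Chloramine demand at time t is the initial total minus the residual:
demand_24h = (A_f + A_s) - chloramine_residual(24.0, *popt)
print(f"estimated 24 h demand: {demand_24h:.2f} mg/L")
```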
ERIC Educational Resources Information Center
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-01-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
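For reference, the three-parameter logistic (3PL) model used for the item fits has a standard closed form; the short sketch below (with hypothetical item parameters, not values from the study) shows how difficulty b, discrimination a, and guessing c enter:

```python
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability that an examinee of
    ability theta answers correctly, with discrimination a, difficulty b,
    and guessing (lower-asymptote) parameter c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical TUV-like item: moderately discriminating, hard, 4 options.
theta = np.linspace(-3, 3, 7)
print(p_correct_3pl(theta, a=1.2, b=0.8, c=0.25))
```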
NASA Astrophysics Data System (ADS)
Luo, Ning; Illman, Walter A.
2016-09-01
Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analysis using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
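The fingerprinting of water-level variations to rate changes rests on superposing one Theis solution per pumping-rate change. A minimal sketch with hypothetical rates and aquifer parameters (not the Waterloo values):

```python
import numpy as np
from scipy.special import exp1  # Theis well function W(u) = E1(u)

def theis_drawdown(t, r, T, S, rates, times):
    """Drawdown at distance r and time t for a step-wise pumping history,
    by superposition of Theis solutions (one term per rate change).
    rates[i] applies from times[i] onward; t and times in days, Q in m^3/d."""
    s = 0.0
    dQ = np.diff(np.concatenate([[0.0], rates]))  # rate increments
    for dq, t0 in zip(dQ, times):
        if t > t0:
            u = r**2 * S / (4.0 * T * (t - t0))
            s += dq / (4.0 * np.pi * T) * exp1(u)
    return s

# Hypothetical record: pumping steps of 500, 800, 300 m^3/d at days 0, 10, 25.
T, S = 200.0, 2e-4    # transmissivity (m^2/d), storativity (-)
print(theis_drawdown(t=30.0, r=150.0, T=T, S=S,
                     rates=np.array([500.0, 800.0, 300.0]),
                     times=np.array([0.0, 10.0, 25.0])))
```

Calibration with PEST then amounts to adjusting T and S so that such superposed drawdowns match the observed hydrograph.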
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the published parameter values are almost unknown. Additionally, platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for estimation of parameters, as they are, by nature, dynamic in behaviour, and allow repeatable behaviour to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: maximum uptake rate (k_m) and half-saturation concentration (K_S). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are heavy computational requirements for multiple cycles and difficulty in establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
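A minimal sketch of estimating the two Monod parameters from within-cycle acetate data, assuming single-substrate uptake with constant biomass — a drastic simplification of ADM1, shown only to make the estimation loop concrete (all values hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

X = 1.0                               # biomass (gCOD/L), assumed known
t_obs = np.linspace(0.0, 2.0, 20)     # days; ~20 points per cycle

def acetate_ode(t, S, k_m, K_S):
    return [-k_m * X * S[0] / (K_S + S[0])]  # Monod uptake

def simulate(k_m, K_S, S0=3.0):
    sol = solve_ivp(acetate_ode, (0.0, t_obs[-1]), [S0],
                    t_eval=t_obs, args=(k_m, K_S), rtol=1e-8)
    return sol.y[0]

# Synthetic "measurements" from hypothetical true values plus noise.
rng = np.random.default_rng(1)
S_obs = simulate(k_m=8.0, K_S=0.5) + rng.normal(0, 0.03, t_obs.size)

fit = least_squares(lambda p: simulate(*p) - S_obs, x0=[4.0, 1.0],
                    bounds=([0.1, 0.01], [50.0, 10.0]))
print("estimated k_m, K_S:", fit.x)
```

Mapping the residual over a (k_m, K_S) grid around the optimum gives the kind of correlated confidence region the abstract refers to.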
First-order kinetic gas generation model parameters for wet landfills.
Faour, Ayman A; Reinhart, Debra R; You, Huaxin
2007-01-01
Landfill gas collection data from wet landfill cells were analyzed and first-order gas generation model parameters were estimated for the US EPA landfill gas emissions model (LandGEM). Parameters were determined through statistical comparison of predicted and actual gas collection. The US EPA LandGEM model appeared to fit the data well, provided it is preceded by a lag phase, which on average was 1.5 years. The first-order reaction rate constant, k, and the methane generation potential, L0, were estimated for a set of landfills with short-term waste placement and long-term gas collection data. Mean and 95% confidence parameter estimates for these data sets were found using mixed-effects model regression followed by bootstrap analysis. The mean values for the specific methane volume produced during the lag phase (Vsto), L0, and k were 33 m³/Mg, 76 m³/Mg, and 0.28 year⁻¹, respectively. Parameters were also estimated for three full-scale wet landfills where waste was placed over many years. The k and L0 values estimated for these landfills were 0.21 year⁻¹ and 115 m³/Mg, 0.11 year⁻¹ and 95 m³/Mg, and 0.12 year⁻¹ and 87 m³/Mg, respectively. A group of data points from wet landfill cells with short-term data was also analyzed. A conservative set of parameter estimates, based on the upper 95% confidence interval, was suggested: a k of 0.3 year⁻¹ and an L0 of 100 m³/Mg if design is optimized and the lag is minimized.
NASA Astrophysics Data System (ADS)
Kang, Kwang-Song; Hu, Nai-Lian; Sin, Chung-Sik; Rim, Song-Ho; Han, Eun-Cheol; Kim, Chol-Nam
2017-08-01
It is very important to obtain the mechanical parameters of rock masses for excavation design, support design, slope design, and stability analysis of underground structures. In order to estimate the mechanical parameters of rock masses exactly, a new method combining a geological strength index (GSI) system with intelligent displacement back analysis is proposed in this paper. Firstly, the average spacing of joints (d), rock mass block rating (RBR, a new quantitative factor), surface condition rating (SCR), and joint condition factor (Jc) are obtained for in situ rock masses using the scanline method, and the GSI values of rock masses are obtained from a new quantitative GSI chart. A correction method for the GSI value is newly introduced by considering the influence of joint orientation and groundwater on rock mass mechanical properties, and then value ranges of rock mass mechanical parameters are chosen by the Hoek-Brown failure criterion. Secondly, on the basis of measured vault settlements and horizontal convergence displacements of an in situ tunnel, optimal parameters are estimated by combining a genetic algorithm (GA) with numerical simulation analysis using FLAC3D. This method has been applied in a lead-zinc mine. By utilizing the improved GSI quantization, the correction method, and displacement back analysis, the mechanical parameters of the ore body, hanging wall, and footwall rock mass were determined, so that reliable foundations were provided for mining design and stability analysis.
Quantifying lost information due to covariance matrix estimation in parameter inference
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2017-02-01
Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precisions. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey with about 10 nuisance parameters, we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to lose only 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters and running as many fast simulations as possible.
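For context, the "debiasing" that the abstract argues is insufficient is usually the standard Hartlap-style correction of the inverse of a simulation-estimated covariance; the small sketch below shows only that baseline correction, with illustrative dimensions rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

n_d = 10          # data dimension
n_s = 50          # number of simulations used to estimate the covariance
true_cov = np.eye(n_d)

# Estimate the covariance from n_s simulated data vectors.
sims = rng.multivariate_normal(np.zeros(n_d), true_cov, size=n_s)
cov_hat = np.cov(sims, rowvar=False)

# Standard debiasing factor for the inverse covariance. The paper's point is
# that this first-order Fisher-level correction does not by itself quantify
# the information lost to covariance noise.
alpha = (n_s - n_d - 2) / (n_s - 1)
prec_debiased = alpha * np.linalg.inv(cov_hat)

print("mean diagonal of debiased precision:", np.diag(prec_debiased).mean())
```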
Static Strength Characteristics of Mechanically Fastened Composite Joints
NASA Technical Reports Server (NTRS)
Fox, D. E.; Swaim, K. W.
1999-01-01
The analysis of mechanically fastened composite joints presents a great challenge to structural analysts because of the large number of parameters that influence strength. These parameters include edge distance, width, bolt diameter, laminate thickness, ply orientation, and bolt torque. The research presented in this report investigates the influence of some of these parameters through testing and analysis. A methodology is presented for estimating the strength of the bolt-hole based on classical lamination theory using the Tsai-Hill failure criterion and typical bolt-hole bearing analytical methods.
Regionalising MUSLE factors for application to a data-scarce catchment
NASA Astrophysics Data System (ADS)
Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani
2018-04-01
The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to: (1) regionalise the MUSLE erodibility factors; and (2) perform a sensitivity analysis and validate the soil loss outputs against independently estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high-erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated model outputs to be most sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha⁻¹ yr⁻¹ were within the range of estimates by previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation. This is part of the ongoing development of a model that can estimate soil loss and sediment delivery at broad spatial and temporal scales.
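The MUSLE event equation that the regionalised factors feed into is compact; a sketch with the standard Williams (1975) coefficients and hypothetical factor values (not the study's calibrated ones):

```python
# MUSLE sediment yield for a single runoff event (Williams, 1975).
# Q: runoff volume (m^3), q_p: peak discharge (m^3/s), Y: sediment yield (t).
# K, LS, C, P: erodibility, slope length/steepness, cover, practice factors.
def musle_sediment_yield(Q, q_p, K, LS, C, P):
    return 11.8 * (Q * q_p) ** 0.56 * K * LS * C * P

# Hypothetical event on a semi-arid catchment with regionalised factors.
print(musle_sediment_yield(Q=5.0e4, q_p=12.0, K=0.3, LS=1.8, C=0.25, P=1.0))
```

Regionalisation then amounts to deriving K, LS, C, and P from GIS coverages instead of site calibration.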
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible, but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
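The numerical heart of square root information estimation is a QR factorization of the stacked prior-plus-measurement information arrays. A minimal sketch of one measurement update on a toy two-parameter problem, assuming whitened (unit-variance) measurements:

```python
import numpy as np

def srif_measurement_update(R, z, A, b):
    """One square root information filter measurement update.
    Prior information:  R x ~ z   (R upper triangular)
    New measurements:   A x ~ b   (whitened to unit variance)
    Returns the updated (R, z) from a QR factorization of the stacked
    system, which is the numerically stable core of the SRIF."""
    stacked = np.vstack([np.column_stack([R, z]),
                         np.column_stack([A, b])])
    q, r = np.linalg.qr(stacked)
    n = R.shape[0]
    return r[:n, :n], r[:n, n:]

# Tiny example: two parameters, weak prior, then two scalar measurements.
R0 = np.eye(2) * 1e-3                    # nearly uninformative prior
z0 = np.zeros((2, 1))
A = np.array([[1.0, 1.0], [1.0, -1.0]])  # measurement matrix
b = np.array([[3.0], [1.0]])             # observations
R1, z1 = srif_measurement_update(R0, z0, A, b)
x_hat = np.linalg.solve(R1, z1)          # estimate solves R x = z
print(x_hat.ravel())                     # ~ [2, 1]
```

Time propagation, process noise, consider-parameters, and smoothing extend this same triangular bookkeeping, which is what makes the formulation described above both efficient and numerically robust.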
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include slow algorithm convergence, instability due to the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm of cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter is then able to be calculated from the equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method needs to perform only a single step without any iteration and thus is fast and has great potential for application in big data analysis. A user-friendly R package, ADPclust, is developed for public use.
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments, the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, yields, for most of the monitored station parameters, accuracy comparable to that of the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of the cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data, however, it was not possible to confirm the previously reported high annual variation in the estimated geocenter z-translation series or its mitigation by fixing the SRP parameters to pre-defined values.
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
Kumar, B Shiva; Venkateswarlu, Ch
2014-08-01
The complex nature of biological reactions in biofilm reactors often poses difficulties in analyzing such reactors experimentally. Mathematical models could be very useful for their design and analysis. However, application of biofilm reactor models to practical problems proves somewhat ineffective due to the lack of accurate kinetic models and uncertainty in model parameters. In this work, we propose an inverse modeling approach based on tabu search (TS) to estimate the parameters of kinetic and film thickness models. TS is used to estimate these parameters as a consequence of the validation of the mathematical models of the process with the aid of measured data obtained from an experimental fixed-bed anaerobic biofilm reactor treating pharmaceutical industry wastewater. The results, evaluated for different modeling configurations of varying degrees of complexity, illustrate the effectiveness of TS for accurate estimation of kinetic and film thickness model parameters of the biofilm process. The results show that the two-dimensional mathematical model with Edwards kinetics (with optimum parameters μmax·ρs/Y = 24.57, K_S = 1.352, and K_i = 102.36) and a three-parameter film thickness expression (with estimated parameters a = 0.289 × 10⁻⁵, b = 1.55 × 10⁻⁴, and c = 15.2 × 10⁻⁶) better describes the biofilm reactor treating the industry wastewater.
Meyer, Karin; Kirkpatrick, Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce the computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
An Empirical Bayes Approach to Spatial Analysis
NASA Technical Reports Server (NTRS)
Morris, C. N.; Kostal, H.
1983-01-01
Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.
Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui
2016-01-01
The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
Estimating Arrhenius parameters using temperature programmed molecular dynamics.
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever the Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
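A sketch of the maximum likelihood step, assuming exponentially distributed waiting times with an Arrhenius-form rate; the prefactor, barrier, and temperatures below are hypothetical, not values from the paper:

```python
import numpy as np
from scipy.optimize import minimize

kB = 8.617e-5                          # Boltzmann constant (eV/K)

rng = np.random.default_rng(3)
nu_true, Ea_true = 1e12, 0.5           # hypothetical prefactor (1/s), barrier (eV)
temps = np.repeat([600.0, 700.0, 800.0, 900.0], 200)   # sampled temperatures (K)
k_true = nu_true * np.exp(-Ea_true / (kB * temps))
waits = rng.exponential(1.0 / k_true)                  # simulated waiting times

# Maximum likelihood for exponential waiting times with an Arrhenius rate:
# log L = sum( log k(T_i) - k(T_i) * tau_i ), maximized over (log nu, Ea).
def neg_loglik(params):
    log_nu, Ea = params
    k = np.exp(log_nu - Ea / (kB * temps))
    return -(np.log(k) - k * waits).sum()

res = minimize(neg_loglik, x0=[np.log(1e10), 0.3], method="Nelder-Mead")
print("nu =", np.exp(res.x[0]), " Ea =", res.x[1], "eV")
```

Once (nu, Ea) are in hand, extrapolating k(T) to low temperatures where transitions are rare is a one-line evaluation of the Arrhenius expression.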
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; M_age = 20) and older (N = 20; M_age = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
NASA Astrophysics Data System (ADS)
Camacho Suarez, V. V.; Shucksmith, J.; Schellart, A.
2016-12-01
Analytical and numerical models can be used to represent the advection-dispersion processes governing the transport of pollutants in rivers (Fan et al., 2015; Van Genuchten et al., 2013). Simplifications, assumptions, and parameter estimations in these models result in various uncertainties within the modelling process and in the estimated pollutant concentrations. In this study, we explore both: 1) the structural uncertainty due to the one-dimensional simplification of the Advection Dispersion Equation (ADE), and 2) the parameter uncertainty due to the semi-empirical estimation of the longitudinal dispersion coefficient. The relative significance of these uncertainties has not previously been examined. By analysing both the relative structural uncertainty of analytical solutions of the ADE and the parameter uncertainty due to the longitudinal dispersion coefficient via a Monte Carlo analysis, we evaluate the dominant uncertainties over a range of spatial scales for a case study in the river Chillan, Chile.
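A sketch of the kind of Monte Carlo parameter-uncertainty analysis described, using the standard 1D analytical ADE solution for an instantaneous release and a hypothetical lognormal spread on the dispersion coefficient (all numbers illustrative, not from the Chillan case study):

```python
import numpy as np

def ade_1d(x, t, M, A, u, D):
    """Analytical solution of the 1D advection-dispersion equation for an
    instantaneous mass M injected at x = 0 into a river of cross-section A,
    velocity u, and longitudinal dispersion coefficient D."""
    return (M / (A * np.sqrt(4 * np.pi * D * t))
            * np.exp(-(x - u * t) ** 2 / (4 * D * t)))

# Parameter uncertainty: sample D from a hypothetical lognormal spread around
# a semi-empirical estimate and look at the spread of predicted concentrations.
rng = np.random.default_rng(4)
D_samples = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=2000)   # m^2/s
x, t = 2000.0, 3600.0                          # observation point (m), time (s)
conc = ade_1d(x, t, M=10.0, A=20.0, u=0.5, D=D_samples)
print("5-95% concentration band:", np.percentile(conc, [5, 95]))
```

Comparing this parameter-induced band against the discrepancy between 1D and higher-dimensional solutions is one way to rank the two uncertainty sources.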
Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T
2017-07-01
Our previous works have found that quantitative analysis of 123I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify the innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with 123I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth so that the estimated input function approximates its real value and thus the procedure yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new protocol of 30 min was built with a shorter acquisition time that maintained a 5% error in estimating rate constants of the compartment model. This was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123I-MIBG. Compared to a uniform interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform interval achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (Distribution Volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
The method of trend analysis of parameters time series of gas-turbine engine state
NASA Astrophysics Data System (ADS)
Hvozdeva, I.; Myrhorod, V.; Derenh, Y.
2017-10-01
This research substantiates an approach to interval estimation of the trend component of a time series. The well-known methods of spectral and trend analysis are used for multidimensional data arrays. The interval estimation of the trend component is proposed for time series whose autocorrelation matrix possesses a dominant eigenvalue. The properties of the time series autocorrelation matrix are identified.
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed
2018-03-01
Chaos control and synchronization of chaotic systems is a challenging problem that has received considerable attention in recent years due to its numerous applications in science and industry. This paper concentrates on the control and synchronization problem of the three-dimensional (3D) Zhang chaotic system. At first, an adaptive control law and a parameter estimation law are derived for controlling the behavior of the Zhang chaotic system. Then, non-identical synchronization of the Zhang chaotic system is provided, with the Lü chaotic system considered as the follower system. The synchronization problem and parameter identification are achieved by introducing an adaptive control law and a parameter estimation law. Stability of the proposed method is proved by the Lyapunov stability theorem. In addition, the convergence of the estimated parameters to their true values is evaluated. Finally, some numerical simulations are carried out to illustrate and validate the effectiveness of the suggested method.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis
NASA Astrophysics Data System (ADS)
Springer, Everett P.; Cundy, Terrance W.
1987-02-01
Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
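The Green-Ampt parameters being compared enter through an implicit relation for cumulative infiltration; a small sketch with hypothetical texture-based values (ballpark silt-loam numbers, not the study's field estimates):

```python
import numpy as np
from scipy.optimize import brentq

def green_ampt_F(t, K, psi, dtheta):
    """Cumulative infiltration F(t) under ponded conditions from the implicit
    Green-Ampt relation  K t = F - psi*dtheta*ln(1 + F/(psi*dtheta))."""
    g = lambda F: F - psi * dtheta * np.log(1.0 + F / (psi * dtheta)) - K * t
    return brentq(g, 1e-9, 1e3)

# Hypothetical silt-loam-like parameters (texture-based estimates):
K, psi, dtheta = 0.65, 16.7, 0.35     # cond. (cm/h), suction (cm), deficit (-)
for t in [0.25, 0.5, 1.0, 2.0]:       # hours
    F = green_ampt_F(t, K, psi, dtheta)
    f = K * (1.0 + psi * dtheta / F)  # infiltration capacity (cm/h)
    print(f"t={t:4.2f} h  F={F:5.2f} cm  f={f:5.2f} cm/h")
```

Running such a calculation with field-based versus texture-based parameter sets is the simplest way to see how differences in the parameters propagate into infiltration, and hence into simulated overland flow.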
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Responder analysis without dichotomization.
Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li
2016-01-01
In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
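A small simulation makes the point concrete, assuming a normal model and a hypothetical cutoff: the model-based responder proportion uses the maximum likelihood estimates of the mean and SD of the original variable instead of the dichotomized data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Continuous outcome (e.g., change from baseline); responder = Y > cutoff.
mu, sigma, cutoff, n = 1.0, 2.0, 2.0, 100
y = rng.normal(mu, sigma, n)

# Usual approach: dichotomize, then take the observed proportion.
p_dichotomized = (y > cutoff).mean()

# Model-based approach: estimate (mu, sigma) by maximum likelihood for the
# original variable, then derive the responder proportion from the model.
mu_hat, sigma_hat = y.mean(), y.std(ddof=0)          # normal MLEs
p_model = 1.0 - norm.cdf(cutoff, loc=mu_hat, scale=sigma_hat)

print(f"dichotomized: {p_dichotomized:.3f}, model-based: {p_model:.3f}")
# Repeating this over many simulated trials shows the model-based estimator
# has visibly smaller variance, which is the efficiency gain described above.
```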
NASA Astrophysics Data System (ADS)
Zhong, Chongquan; Lin, Yaoyao
2017-11-01
In this work, a model reference adaptive control-based estimation algorithm is proposed for online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator was established. Additionally, the Popov hyperstability principle was used in the design of the adaptive law to guarantee accurate convergence. In order to reduce oscillation of the identification results, this work introduces a first-order low-pass digital filter to improve the precision of the parameter estimation. The proposed scheme was then applied to an SPM synchronous motor control system without any additional circuits and implemented using a DSP TMS320LF2812. The experimental results reveal the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Yang, Qi; Deng, Bin; Wang, Hongqiang; Qin, Yuliang
2017-07-01
Rotation is one of the typical micro-motions of radar targets. In many cases, rotation of the targets is accompanied by vibrating interference, which significantly affects parameter estimation and imaging, especially in the terahertz band. In this paper, we propose a parameter estimation method and an image reconstruction method based on the inverse Radon transform, time-frequency analysis, and its inverse. The method can separate and estimate the rotating Doppler and the vibrating Doppler simultaneously and can obtain high-quality reconstructed images after vibration compensation. In addition, a 322-GHz radar system and a 25-GHz commercial radar are introduced, and experiments on rotating corner reflectors are carried out. The results of the simulation and experiments verify the validity of the methods, laying a foundation for practical processing of terahertz radar data.
ERIC Educational Resources Information Center
Delaney, Michael F.
1984-01-01
This literature review on chemometrics (covering December 1981 to December 1983) is organized under these headings: personal supermicrocomputers; education and books; statistics; modeling and parameter estimation; resolution; calibration; signal processing; image analysis; factor analysis; pattern recognition; optimization; artificial…
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.
Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.
NASA Astrophysics Data System (ADS)
Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.
2006-01-01
This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems, the gravity-darkening exponent (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is very good agreement between the empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system the estimated gravity-darkening exponent is greater than, and for UX Her, TW And and XZ Pup smaller than, the corresponding theoretical predictions, but for all of these systems the obtained values are quite close to the theoretically expected ones. 3) Our analysis generally shows that, once the previously estimated mass ratios of the components in some of the analysed systems are corrected, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous GDE values found in some earlier studies of these systems can be attributed to the inappropriate methods used to estimate the GDE. 4) The empirical estimations of the GDE given in Paper I and in the present study indicate that, in light-curve analysis, one can apply the recent theoretical predictions of the GDE with high confidence for stars with both convective and radiative envelopes.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Steinmetz, G. G.
1972-01-01
A method of parameter extraction for stability and control derivatives of aircraft from flight test data, implementing maximum likelihood estimation, has been developed and successfully applied to actual lateral flight test data from a modern sophisticated jet fighter. This application demonstrates the important role played by the analyst in combining engineering judgment and estimator statistics to yield meaningful results. During the analysis, the problems of uniqueness of the extracted set of parameters and of longitudinal coupling effects were encountered and resolved. The results for all flight runs are presented in tabular form and as time history comparisons between the estimated states and the actual flight test data.
Selection of noisy measurement locations for error reduction in static parameter identification
NASA Astrophysics Data System (ADS)
Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.
1992-09-01
An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.
Kamoi, Shun; Pretty, Christopher; Balmer, Joel; Davidson, Shaun; Pironet, Antoine; Desaive, Thomas; Shaw, Geoffrey M; Chase, J Geoffrey
2017-04-24
Pressure contour analysis is commonly used to estimate cardiac performance for patients suffering from cardiovascular dysfunction in the intensive care unit. However, the existing techniques for continuous estimation of stroke volume (SV) from pressure measurement can be unreliable during hemodynamic instability, which is inevitable for patients requiring significant treatment. For this reason, pressure contour methods must be improved to capture changes in vascular properties and thus provide accurate conversion from pressure to flow. This paper presents a novel pressure contour method utilizing pulse wave velocity (PWV) measurement to capture vascular properties. A three-element Windkessel model combined with the reservoir-wave concept is used to decompose the pressure contour into components related to storage and flow. The model parameters are identified beat-to-beat from the water-hammer equation using the measured PWV, the wave component of the pressure, and an estimate of subject-specific aortic dimension. SV is then calculated by converting pressure to flow using the identified model parameters. The accuracy of this novel method is investigated using data from porcine experiments (N = 4 Pietrain pigs, 20-24.5 kg), in which hemodynamic properties were significantly altered using dobutamine, fluid administration, and mechanical ventilation. In the experiment, left ventricular volume was measured using an admittance catheter, and aortic pressure waveforms were measured at two locations, the aortic arch and the abdominal aorta. Bland-Altman analysis comparing gold-standard SV measured by the admittance catheter and estimated SV from the novel method showed average limits of agreement of ±26% across significant hemodynamic alterations. This result shows the method is capable of estimating clinically acceptable absolute SV values according to the criterion of Critchley and Critchley. The novel pressure contour method presented can accurately estimate and track SV even when hemodynamic properties are significantly altered. Integrating PWV measurements into pressure contour analysis improves identification of beat-to-beat changes in the Windkessel model parameters and thus provides an accurate estimate of blood flow from the measured pressure contour. The method has great potential for overcoming weaknesses associated with current pressure contour methods for estimating SV.
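The pressure-to-flow conversion step can be sketched as follows, assuming the water-hammer relation ΔP = ρcΔU with the measured PWV `c` and an assumed aortic cross-section `A`; all numeric values are illustrative, not from the experiments:

```python
# A minimal sketch (not the paper's algorithm) of converting the wave
# component of pressure to flow via the water-hammer equation.
import numpy as np

rho = 1050.0   # blood density, kg/m^3
c = 5.0        # measured pulse wave velocity, m/s (illustrative)
A = 4.0e-4     # assumed aortic cross-sectional area, m^2 (illustrative)

def wave_flow(p_wave_pa):
    """Convert the wave (excess) pressure component [Pa] to flow [m^3/s]."""
    u = p_wave_pa / (rho * c)   # water-hammer: velocity from wave pressure
    return A * u                # flow = area * velocity

p_wave = np.array([0.0, 400.0, 1200.0, 800.0, 200.0])  # toy waveform, Pa
q = wave_flow(p_wave)
sv_estimate = float(np.sum(q) * 0.01)  # crude integral of flow over the beat
```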
Dehkordi, Parastoo; Garde, Ainara; Karlen, Walter; Wensley, David; Ansermino, J Mark; Dumont, Guy A
2013-01-01
Heart rate variability (HRV), the variation of time intervals between heartbeats, is one of the most promising and widely used quantitative markers of autonomic activity. Traditionally, HRV is measured as the series of instantaneous cycle intervals obtained from the electrocardiogram (ECG). In this study, we investigated the estimation of variation in heart rate from a photoplethysmography (PPG) signal, called pulse rate variability (PRV), and assessed its accuracy as an estimate of HRV in children with and without sleep disordered breathing (SDB). We recorded raw PPGs from 72 children using the Phone Oximeter, an oximeter connected to a mobile phone. Full polysomnography including ECG was simultaneously recorded for each subject. We used correlation and Bland-Altman analysis to compare the HRV and PRV parameters between the two groups of children. Significant correlation (r > 0.90, p < 0.05) and close agreement were found between HRV and PRV for mean intervals, the standard deviation of intervals (SDNN), and the root-mean-square of successive interval differences (RMSSD). However, Bland-Altman analysis showed a large divergence for the LF/HF ratio. In addition, children with SDB had depressed SDNN and RMSSD and elevated LF/HF in comparison to children without SDB. In conclusion, PRV provides an accurate estimate of HRV for time-domain analysis but does not yield precise estimates of the frequency-domain parameters.
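The two time-domain parameters for which HRV and PRV agreed can be computed directly from an inter-beat interval series; a minimal sketch follows, where `ibi_ms` stands for a hypothetical interval series in milliseconds taken from ECG R-peaks or PPG pulse peaks:

```python
# Minimal definitions of the two time-domain variability parameters
# compared in the study.
import numpy as np

def sdnn(ibi_ms):
    return np.std(ibi_ms, ddof=1)        # standard deviation of intervals

def rmssd(ibi_ms):
    d = np.diff(ibi_ms)                  # successive interval differences
    return np.sqrt(np.mean(d ** 2))      # root-mean-square of differences
```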
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB signalling, uncovering unidentifiabilities.
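As a point of contrast, the local analysis the authors argue is insufficient can be sketched in a few lines: build the Fisher information matrix from a finite-difference sensitivity Jacobian and inspect its eigenvalue spread, which flags locally sloppy (near-unidentifiable) parameter directions. The toy two-parameter model below is an assumption for illustration:

```python
# A minimal sketch of *local* sensitivity analysis via the Fisher
# information matrix J^T J; tiny eigenvalues indicate sloppy directions.
import numpy as np

def fisher_information(model, theta, t, eps=1e-6):
    y0 = model(theta, t)
    J = np.empty((y0.size, theta.size))
    for i in range(theta.size):              # finite-difference Jacobian
        th = theta.copy()
        th[i] += eps
        J[:, i] = (model(th, t) - y0) / eps
    return J.T @ J

model = lambda th, t: th[0] * np.exp(-th[1] * t)   # toy 2-parameter model
F = fisher_information(model, np.array([1.0, 0.5]), np.linspace(0, 5, 50))
eigvals = np.linalg.eigvalsh(F)   # wide eigenvalue spread = local sloppiness
```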
Lindqvist, R
2006-07-01
Turbidity methods offer possibilities for generating the data required to address microorganism variability in risk modeling, provided that their results correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters from turbidity data obtained with a Bioscreen instrument and to characterize variability in the growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures, either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 °C estimated by time-to-detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time-to-detection methods were therefore selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or the lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. We suggest applying the time-to-detection (ANOVA) approach to turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider the implications of strain variability for predictive modeling and risk assessment.
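The time-to-detection idea can be sketched simply: for serially diluted cultures, detection time grows linearly with the log of the dilution factor, and the reciprocal of the fitted slope estimates the maximum specific growth rate. The data values below are invented:

```python
# A minimal sketch of estimating mu_max from times to detection of
# serially diluted cultures: TTD = TTD_0 + ln(dilution)/mu_max.
import numpy as np

dilution = np.array([1, 10, 100, 1000, 10000])   # serial 10-fold dilutions
ttd_h = np.array([5.1, 8.0, 10.8, 13.9, 16.7])   # detection times, h (toy)

slope, intercept = np.polyfit(np.log(dilution), ttd_h, 1)
mu_max = 1.0 / slope    # maximum specific growth rate, 1/h
```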
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because engine failure modes are diverse, a single Weibull distribution model can produce large errors. A mixed Weibull distribution model, by contrast, can account for a variety of failure modes, making it a better statistical analysis model. In addition to introducing a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimates more accurate, greatly improving the precision of the mixed-distribution reliability model. These improvements help popularize the Weibull distribution model in engineering applications.
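A minimal sketch of a two-component mixed Weibull reliability function of the kind discussed is given below; the weights, shapes, and scales are illustrative placeholders, not engine data:

```python
# A minimal sketch of a mixed Weibull reliability model:
# R(t) = sum_i w_i * exp(-(t/lam_i)^k_i), with the weights summing to 1.
import numpy as np

def mixed_weibull_reliability(t, w, shapes, scales):
    t = np.asarray(t, dtype=float)
    R = np.zeros_like(t)
    for wi, ki, lami in zip(w, shapes, scales):
        R += wi * np.exp(-(t / lami) ** ki)   # one Weibull mode per term
    return R

# Toy example: two failure modes with weights 0.6 and 0.4.
R = mixed_weibull_reliability([100, 500, 1000], w=[0.6, 0.4],
                              shapes=[1.2, 3.0], scales=[800.0, 1200.0])
```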
A new multistage groundwater transport inverse method: presentation, evaluation, and implications
Anderman, Evan R.; Hill, Mary C.
1999-01-01
More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article addresses the problem of determining the statistical characteristics of variable parameters (their variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. Using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law within that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated on the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law over that range by a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
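A minimal sketch of the regression idea, under toy assumptions (one uncertain input, two decision options, and a cubic polynomial as the smoother), might look like this:

```python
# A minimal sketch of regression-based per-patient EVSI estimation from a
# PSA sample: regress each option's net benefit on a summary of the
# simulated data, then EVSI = E[max_d fitted] - max_d E[NB_d].
# All models and numbers are toy choices, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
theta = rng.normal(0.5, 0.2, n)              # uncertain model input
nb = np.column_stack([np.zeros(n),           # net benefit of option 0
                      2.0 * theta - 1.0])    # net benefit of option 1
x = rng.normal(theta, 0.3)                   # summary of a proposed study

# Smooth each option's net benefit as a function of the data summary.
fitted = np.column_stack([
    np.polyval(np.polyfit(x, nb[:, d], 3), x) for d in range(2)
])
evsi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
```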
NASA Astrophysics Data System (ADS)
Kim, Jin-Young; Kwon, Hyun-Han; Kim, Hung-Soo
2015-04-01
Existing regional frequency analysis has the disadvantage that it is difficult to account for geographical characteristics when estimating areal rainfall. In this regard, this study aims to develop a hierarchical-Bayesian-model-based nonstationary regional frequency analysis in which spatial patterns of the design rainfall are explicitly linked to geographical information (e.g. latitude, longitude and altitude). This study assumes that the parameters of the Gumbel (or GEV) distribution are a function of geographical characteristics within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the distributions using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. The proposed Bayesian regional frequency analysis model showed results similar to those of L-moment-based regional frequency analysis. In addition, the model has the advantage of quantifying the uncertainty of the design rainfall and of estimating areal rainfall with geographical information taken into account. Finally, a comprehensive discussion of design rainfall in the context of nonstationarity is presented. KEYWORDS: Regional frequency analysis, Nonstationary, Spatial information, Bayesian. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
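A heavily simplified sketch of the core idea, with the Gumbel location written as a linear function of a single covariate (elevation) and sampled by random-walk Metropolis, is shown below; the data, priors, step sizes, and fixed scale are toy choices, not the authors' model, and burn-in/thinning are omitted:

```python
# A minimal sketch: Gumbel location mu = b0 + b1 * elevation, with the
# regression coefficients sampled by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(42)
elev = rng.uniform(0, 1500, 40)                           # site elevations, m
mu_true = 50 + 0.02 * elev
y = mu_true - 10 * np.log(-np.log(rng.uniform(size=40)))  # Gumbel(mu, 10)

def log_post(b0, b1, sigma=10.0):
    if not (0 < b1 < 1):                                  # flat prior on a box
        return -np.inf
    z = (y - (b0 + b1 * elev)) / sigma
    return np.sum(-z - np.exp(-z)) - y.size * np.log(sigma)

chain, cur = [], np.array([40.0, 0.01])
lp = log_post(*cur)
for _ in range(5000):
    prop = cur + rng.normal(0, [1.0, 0.002])              # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:              # Metropolis accept
        cur, lp = prop, lp_prop
    chain.append(cur.copy())
```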
NASA Astrophysics Data System (ADS)
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires tuning several parameters that balance satisfaction of the optical flow constraint against the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth for validating the velocity estimates. This problem is present in all real video sequences used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. To date, aeroservoelastic data analysis has given insufficient attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Hierarchical models and Bayesian analysis of bird survey information
Sauer, J.R.; Link, W.A.; Royle, J. Andrew; Ralph, C. John; Rich, Terrell D.
2005-01-01
Summary of bird survey information is a critical component of conservation activities, but often our summaries rely on statistical methods that do not accommodate the limitations of the information. Prioritization of species requires ranking and analysis of species by magnitude of population trend, but often magnitude of trend is a misleading measure of actual decline when trend is poorly estimated. Aggregation of population information among regions is also complicated by varying quality of estimates among regions. Hierarchical models provide a reasonable means of accommodating concerns about aggregation and ranking of quantities of varying precision. In these models the need to consider multiple scales is accommodated by placing distributional assumptions on collections of parameters. For collections of species trends, this allows probability statements to be made about the collections of species-specific parameters, rather than about the estimates. We define and illustrate hierarchical models for two commonly encountered situations in bird conservation: (1) Estimating attributes of collections of species estimates, including ranking of trends, estimating number of species with increasing populations, and assessing population stability with regard to predefined trend magnitudes; and (2) estimation of regional population change, aggregating information from bird surveys over strata. User-friendly computer software makes hierarchical models readily accessible to scientists.
On Using Exponential Parameter Estimators with an Adaptive Controller
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.
Nonlinear estimation of parameters in biphasic Arrhenius plots.
Puterman, M L; Hrboticky, N; Innis, S M
1988-05-01
This paper presents a formal procedure for the statistical analysis of data on the thermotropic behavior of membrane-bound enzymes generated using the Arrhenius equation and compares the analysis to several alternatives. Data are modeled by a bent hyperbola. Nonlinear regression is used to obtain estimates and standard errors of the intersection of the line segments, defined as the transition temperature, and of the slopes, defined as energies of activation of the enzyme reaction. The methodology allows formal tests of the adequacy of a biphasic model against either a single straight line or a curvilinear model. Examples using data on the thermotropic behavior of pig brain synaptosomal acetylcholinesterase are given. The data support the biphasic temperature dependence of this enzyme. The methodology represents a formal procedure for statistical validation of any biphasic data and allows calculation of all line parameters with estimates of precision.
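One common smooth parameterization of a bent hyperbola can be fitted by nonlinear least squares as sketched below; the data and starting values are invented, and `x` would play the role of reciprocal temperature with `y` the log reaction rate:

```python
# A minimal sketch of fitting a "bent hyperbola" (two line segments with
# a smooth transition) by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def bent_hyperbola(x, b0, b1, b2, t, gamma):
    # b0: level at the bend; b1, b2: mean slope and half-difference of the
    # two slopes; t: transition point; gamma: sharpness of the bend.
    return b0 + b1 * (x - t) + b2 * np.sqrt((x - t) ** 2 + gamma ** 2)

x = np.linspace(3.2, 3.6, 30)                         # e.g. 1000/T
y = bent_hyperbola(x, 1.0, -4.0, -2.5, 3.4, 0.01)     # toy truth
y += np.random.default_rng(3).normal(0, 0.05, x.size)

popt, pcov = curve_fit(bent_hyperbola, x, y,
                       p0=[1.0, -3.0, -2.0, 3.4, 0.05])
perr = np.sqrt(np.diag(pcov))   # standard errors, incl. the transition t
```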
Chai, Rui; Xu, Li-Sheng; Yao, Yang; Hao, Li-Ling; Qi, Lin
2017-01-01
This study analyzed the ascending branch slope (A_slope), dicrotic notch height (Hn), diastolic area (Ad), systolic area (As), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), subendocardial viability ratio (SEVR), waveform parameter (k), stroke volume (SV), cardiac output (CO), and peripheral resistance (RS) of the central pulse wave, measured invasively and non-invasively. Invasively measured parameters were compared with parameters derived from brachial pulse waves by a regression model and by a transfer function model, and the accuracy of the parameters estimated by the two models was also compared. Findings showed that the k value and the invasively measured central and brachial pulse wave parameters correlated positively. Regression-model parameters including A_slope, DBP, and SEVR, as well as the transfer-function-model parameters, showed good consistency with the invasively measured parameters, with the same level of agreement. SBP, PP, SV, and CO could be calculated through the regression model, but their accuracies were worse than those of the transfer function model.
Identifiability of PBPK Models with Applications to Dimethylarsinic Acid Exposure
Any statistical model should be identifiable in order for estimates and tests using it to be meaningful. We consider statistical analysis of physiologically-based pharmacokinetic (PBPK) models in which parameters cannot be estimated precisely from available data, and discuss diff...
NASA Astrophysics Data System (ADS)
Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya
2011-06-01
A recently developed unified model for partially-penetrating slug tests in unconfined aquifers (Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori. Secondly, through analysis of field data, consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer, we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights possible to produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely-spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, K values obtained do not appear to correlate well with porosity data from previous studies.
Estimating fracture spacing from natural tracers in shale-gas production
NASA Astrophysics Data System (ADS)
Bauer, S. J.; McKenna, S. A.; Heath, J. E.; Gardner, P.
2012-12-01
Resource appraisal and long-term recovery potential of shale gas relies on the characteristics of the fracture networks created within the formation. Both well testing and analysis of micro-seismic data can provide information on fracture characteristics, but approaches that directly utilize observations of gas transport through the fractures are not well-developed. We examine transport of natural tracers and analyze the breakthrough curves (BTCs) of these tracers with a multi-rate mass transfer (MMT) model to elucidate fracture characteristics. The focus here is on numerical simulation studies to determine constraints on the ability to accurately estimate fracture network characteristics as a function of the diffusion coefficients of the natural tracers, the number and timing of observations, the flow rates from the well, and the noise in the observations. Traditional tracer testing approaches for dual-porosity systems analyze the BTC of an injected tracer to obtain fracture spacing considering a single spacing value. An alternative model is the MMT model, in which diffusive mass transfer occurs simultaneously over a range of matrix block sizes defined by a statistical distribution (e.g., log-normal, gamma, or power-law). The goal of the estimation is defining the parameters of the fracture spacing distribution. The MMT model has not yet been applied to the analysis of in situ natural tracers. Natural tracers are omnipresent in the subsurface, potentially obviating the need for introduced tracers, and could be used to improve upon fracture characteristics estimated from pressure transient and decline curve production analysis. Results of this study provide guidance for data collection and analysis of natural tracers in fractured shale formations. Parameter estimation on simulated BTCs will provide guidance on the necessary timing of BTC sampling in field experiments. The MMT model can result in non-unique or nonphysical parameter estimates. We address this with Bayesian estimation approaches that can define uncertainty in estimated parameters as a posterior probability distribution. We will also use Bayesian estimation to examine model identifiability (e.g., selecting between parametric distributions of fracture spacing) from various BTCs. Application of the MMT model to natural tracers and hydraulic fractures in shale will require extension of the model to account for partitioning of the tracers between multiple phases and different mass transfer behavior in mixed gas-liquid (e.g., oil or groundwater rich) systems. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many modern algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks of estimating parameter sensitivities with current methods have been: (1) the need for second-order information about the Lagrangian at the current point, and (2) the assumption that the active set of constraints does not change. The first of these two problems is addressed here, and a new algorithm is proposed that does not require explicit calculation of second-order information.
Zonta, Zivko J; Flotats, Xavier; Magrí, Albert
2014-08-01
The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal value within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference), may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth, simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to evidence the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference could be reduced to a frequentist approach under particular hypotheses, the former can be considered as a more generalist methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo random binary sequence (PRBS) and urban dynamometer driving schedule (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and suitability for parameter identification without using the open circuit voltage.
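The recursive identification step belongs to the recursive least squares family; a minimal sketch of a standard RLS update with a forgetting factor is given below (the paper's bias-correction term is omitted), with `phi` standing for the ARX regressor vector:

```python
# A minimal sketch of recursive least squares with forgetting factor lam.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """One RLS step: returns updated parameter vector and covariance."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)     # gain vector
    err = y - (phi.T @ theta).item()          # one-step prediction error
    theta = theta + k * err                   # parameter update
    P = (P - k @ phi.T @ P) / lam             # covariance update
    return theta, P

n = 3                                         # number of ARX parameters
theta = np.zeros((n, 1))
P = 1e3 * np.eye(n)
# for y_k, phi_k in data_stream: theta, P = rls_update(theta, P, phi_k, y_k)
```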
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high, and estimation codes are rarely distributed along with the published results. Even when codes are made available, it is often difficult to assemble them into a single optimization framework, as they are typically written in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
Analysis of Air Traffic Track Data with the AutoBayes Synthesis System
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.
2010-01-01
The Next Generation Air Traffic System (NGATS) aims to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We give an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.
Urdapilleta, E; Bellotti, M; Bonetto, F J
2006-10-01
In this paper we present a model describing the electrical properties of a confluent cell monolayer cultured on gold microelectrodes, for use with the electric cell-substrate impedance sensing (ECIS) technique. The model was developed from microscopic considerations (distributed effects) and by assuming that the monolayer is an element with mean electrical characteristics (specific lumped parameters). No assumptions were made about cell morphology, and the model has only three adjustable parameters. This model and other models currently used for data analysis are compared against data we obtained from electrical measurements of confluent monolayers of Madin-Darby Canine Kidney cells. One important parameter is the cell-substrate height, and we found that estimates of this quantity differ strongly depending on the model used for the analysis. We analyze the origin of the discrepancies, concluding that the estimates from the different models can be considered as limits for the true value of the cell-substrate height.
Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis
ERIC Educational Resources Information Center
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia
2016-01-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
Influence of the model's degree of freedom on human body dynamics identification.
Maita, Daichi; Venture, Gentiane
2013-01-01
In sports and rehabilitation, the use of motion analysis of the human body has increased dramatically. To analyze motion dynamics, a number of subject-specific parameters and measurements are required; for example, measured contact forces and the inertial parameters of each segment of the human body are necessary to compute the joint torques. In this study, in order to perform accurate dynamic analysis, we propose to identify the inertial parameters of the human body and to evaluate the influence of the model's number of degrees of freedom (DoF) on the results. We use a method that estimates the inertial parameters without torque sensors, using the generalized coordinates of the base link, the joint angles, and external force information. We consider a 34-DoF model and a 58-DoF model, as well as the case in which the human is manipulating a tool (here a tennis racket). We compare the obtained results in terms of contact force estimation.
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consists of a constitutive model capable of incorporating confocal-image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques that allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results, it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
A different approach to estimate nonlinear regression model using numerical methods
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent method, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis for estimating the parameters of a nonlinear regression model. Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article treats the gradient algorithm methods analytically and in a different way. The paper also describes an iterative Gauss-Newton technique that differs from the one proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager-Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
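A minimal sketch of the Gauss-Newton iteration for nonlinear least squares, with an illustrative exponential model, follows; step-size control and convergence tests are omitted:

```python
# A minimal sketch of Gauss-Newton: linearize the residual, solve the
# normal equations (J^T J) step = J^T r, and repeat.
import numpy as np

def gauss_newton(f, jac, theta, x, y, iters=20):
    for _ in range(iters):
        r = y - f(theta, x)                        # residual vector
        J = jac(theta, x)                          # Jacobian of f wrt theta
        step = np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
        theta = theta + step
    return theta

f = lambda th, x: th[0] * np.exp(th[1] * x)        # toy exponential model
jac = lambda th, x: np.column_stack([np.exp(th[1] * x),
                                     th[0] * x * np.exp(th[1] * x)])
x = np.linspace(0, 1, 20)
y = f(np.array([2.0, -1.5]), x)                    # noiseless toy data
theta_hat = gauss_newton(f, jac, np.array([1.0, -1.0]), x, y)
```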
Ultrasonic data compression via parameter estimation.
Cardoso, Guilherme; Saniie, Jafar
2005-02-01
Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. The precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., time × frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both time-of-arrival and center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase × bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1 - 5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.
Design and parameter estimation of hybrid magnetic bearings for blood pump applications
NASA Astrophysics Data System (ADS)
Lim, Tau Meng; Zhang, Dongsheng; Yang, Juanjuan; Cheng, Shanbao; Low, Sze Hsien; Chua, Leok Poh; Wu, Xiaowei
2009-10-01
This paper discusses the design and parameter estimation of the dynamic characteristics of a high-speed hybrid magnetic bearing (HMB) system for axial flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet (PM) brushless and sensorless DC motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative (PID) controllers; four radial directions are actively controlled and one axial direction is passively controlled. Test results show that the rotor can be stably supported at speeds up to 14,000 rpm. The frequency-domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMB system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. The radial stiffness of the HMBs is compared to Ansoft's Maxwell 2D/3D finite element magnetostatic results. Experimental estimation showed that the dynamic characteristics of the HMB system are dominated by the frequency-dependent stiffness coefficients. The actuator gain was also successfully calibrated, and the parameter estimation technique developed in this study may potentially be extended to the identification and monitoring of the pump's dynamic properties under normal operating conditions with fluid.
Important observations and parameters for a salt water intrusion model
Shoemaker, W.B.
2004-01-01
Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.
HRV analysis and blood pressure monitoring on weighing scale using BCG.
Shin, Jae Hyuk; Park, Kwang Suk
2012-01-01
Using the ballistocardiogram (BCG) measured on a weighing scale, heart rate variability (HRV) and blood pressure were estimated. The BCG was measured while subjects stood on the weighing scale in a resting state and under the Valsalva maneuver and a static exercise condition to induce changes in cardiac autonomic rhythm. Time-domain, frequency-domain, and nonlinear HRV parameters were estimated from the measured BCG and compared with those calculated from the simultaneously measured ECG. For blood pressure (BP) estimation, ECG was additionally measured on the feet using dry electrodes installed on the weighing scale, and R-J intervals were extracted as a BP-correlated parameter at every beat cycle. The HRV estimates showed correlations higher than 0.97, and the estimated BP was similar to the measured BP with reliable correlations.
A New Mixing Diagnostic and Gulf Oil Spill Movement
2010-10-01
…paradigm for mixing in fluid flows with simple time dependence. Its skeletal structure is based on analysis of invariant attracting and repelling… continues to the present day. Model analysis and forecasts are compared to independent (nonassimilated) infrared frontal positions and drifter trajectories.
Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics
NASA Astrophysics Data System (ADS)
García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team
2016-06-01
We propose a simple approach to homogeneously estimate kinematic parameters for a broad variety of galaxies (ellipticals, spirals, irregulars, or interacting systems). This methodology avoids the use of any kinematic model or any assumption about internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic center, and the kinematic position angles, all measured directly from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.
Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey
2018-05-01
Diffusion-weighted imaging (DWI) has the potential to reflect histopathological architecture. A novel imaging approach, histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner using b-values of 0 and 1000 s/mm². Histogram analysis was performed as a whole-muscle measurement using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, the percentiles ADCp10, ADCp25, ADCp75, and ADCp90, as well as the histogram parameters kurtosis, skewness, and entropy. In all patients, a blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP), and myoglobin. All patients were screened for Jo1-antibodies. Kurtosis correlated inversely with CRP (r = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (r = -0.43, p = 0.11 and r = -0.42, p = 0.12, respectively). In addition, ADCmean, p10, p25, median, mode, and entropy were different between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for the detection of muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters are statistically different between Jo1-positive and Jo1-negative patients.
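The histogram parameters listed above are straightforward to compute once the whole-muscle ADC values are pooled. A minimal numpy/scipy sketch with synthetic ADC values standing in for the Matlab-based application used in the study:

```python
# Hedged sketch of ADC histogram analysis: percentile and shape parameters
# from a pooled ROI. Values are synthetic, not patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
adc = rng.normal(1.4e-3, 2e-4, size=5000)   # synthetic ADC values, mm^2/s

counts, _ = np.histogram(adc, bins=64)
params = {
    "ADCmean": adc.mean(), "ADCmin": adc.min(), "ADCmax": adc.max(),
    "ADCmedian": np.median(adc),
    "ADCp10": np.percentile(adc, 10), "ADCp25": np.percentile(adc, 25),
    "ADCp75": np.percentile(adc, 75), "ADCp90": np.percentile(adc, 90),
    "skewness": stats.skew(adc),
    "kurtosis": stats.kurtosis(adc),              # excess kurtosis
    "entropy": stats.entropy(counts + 1e-12),     # of the intensity histogram
}
print(params)
```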
NASA Technical Reports Server (NTRS)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called unscented) formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [c.f. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (c.f. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e. in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically-parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly-known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
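For readers unfamiliar with the sigma-point formulation, the Jacobian-free core is the unscented transform: a deterministic point set captures the mean and covariance and is pushed through the nonlinearity. A minimal sketch under the common (alpha, beta, kappa) scaling, not the flight implementation described above:

```python
# Hedged sketch of the unscented transform underlying sigma-point filters.
import numpy as np

def unscented_transform(f, x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through nonlinear f without Jacobians."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)            # matrix square root
    sigma = np.column_stack([x] + [x + s for s in S.T] + [x - s for s in S.T])
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    Wc = Wm.copy()                                   # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    Y = np.column_stack([f(s) for s in sigma.T])     # push points through f
    y = Y @ Wm
    Pyy = (Y - y[:, None]) @ np.diag(Wc) @ (Y - y[:, None]).T
    return y, Pyy

y, Pyy = unscented_transform(lambda v: np.array([v[0]**2, np.sin(v[1])]),
                             np.array([1.0, 0.5]), np.eye(2) * 0.01)
print(y, Pyy, sep="\n")
```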
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Näsholm, S. P.; Ruigrok, E.; Kværna, T.
2018-04-01
Seismic arrays enhance signal detection and parameter estimation by exploiting the time-delays between arriving signals on sensors at nearby locations. Parameter estimates can suffer due to both signal incoherence, with diminished waveform similarity between sensors, and aberration, with time-delays between coherent waveforms poorly represented by the wave-front model. Sensor-to-sensor correlation approaches to parameter estimation have an advantage over direct beamforming approaches in that individual sensor-pairs can be omitted without necessarily omitting entirely the data from each of the sensors involved. Specifically, we can omit correlations between sensors for which signal coherence in an optimal frequency band is anticipated to be poor or for which anomalous time-delays are anticipated. In practice, this usually means omitting correlations between more distant sensors. We present examples from International Monitoring System seismic arrays with poor parameter estimates resulting when classical f-k analysis is performed over the full array aperture. We demonstrate improved estimates and slowness grid displays using correlation beamforming restricted to correlations between sufficiently closely spaced sensors. This limited sensor-pair correlation (LSPC) approach has lower slowness resolution than would ideally be obtained by considering all sensor-pairs. However, this ideal estimate may be unattainable due to incoherence and/or aberration and the LSPC estimate can often exploit all channels, with the associated noise-suppression, while mitigating the complications arising from correlations between very distant sensors. The greatest need for the method is for short-period signals on large aperture arrays although we also demonstrate significant improvement for secondary regional phases on a small aperture array. LSPC can also provide a robust and flexible approach to parameter estimation on three-component seismic arrays.
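The pair-selection idea behind LSPC can be sketched as follows: time-delays are estimated by cross-correlation only for sensor pairs separated by less than a cutoff distance. This toy version (synthetic geometry and signals, hypothetical function name) omits the actual beamforming over a slowness grid:

```python
# Hedged sketch of limited sensor-pair correlation: keep only close pairs.
import numpy as np
from itertools import combinations

def pair_delays(waveforms, positions, fs, max_sep_km):
    """Cross-correlation delay estimates, restricted to close sensor pairs."""
    delays = {}
    n = len(waveforms[0])
    for i, j in combinations(range(len(waveforms)), 2):
        if np.linalg.norm(positions[i] - positions[j]) > max_sep_km:
            continue  # omit distant pairs: likely incoherent or aberrated
        xc = np.correlate(waveforms[i], waveforms[j], mode="full")
        delays[(i, j)] = (xc.argmax() - (n - 1)) / fs  # lag of peak, seconds
    return delays

# Synthetic test: three sensors, one copy of the signal shifted by 5 samples
fs = 40.0
rng = np.random.default_rng(2)
base = rng.normal(size=400)
w = [base, np.roll(base, 5), np.roll(base, 12)]
pos = np.array([[0.0, 0.0], [1.0, 0.0], [8.0, 0.0]])   # km
print(pair_delays(w, pos, fs, max_sep_km=5.0))          # only pair (0, 1) kept
```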
Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.
An improved Rosetta pedotransfer function and evaluation in earth system models
NASA Astrophysics Data System (ADS)
Zhang, Y.; Schaap, M. G.
2017-12-01
Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an attractive alternative for predicting them. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used PTF based on artificial neural network (ANN) analysis coupled with bootstrap re-sampling, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. We present improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of univariate and bivariate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Based on different soil water retention equations, diverse PTFs are used in different disciplines of earth system modeling. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while van Genuchten [1980]-based PTFs are more widely used in hydrology and soil science. We use an independent global-scale soil database to evaluate the performance of these diverse PTFs. PTFs are evaluated against different soil and environmental characteristics, such as soil textural data, soil organic carbon, soil pH, precipitation, and soil temperature. This analysis provides more quantitative estimation-error information for PTF predictions across disciplines of earth system modeling.
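For reference, the van Genuchten (1980) retention model whose parameters (theta_r, theta_s, alpha, n) Rosetta-type PTFs predict can be evaluated directly; parameter values below are illustrative for a loam-like soil, not Rosetta output:

```python
# Hedged sketch: the van Genuchten water retention function.
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content at suction head h >= 0 (units of h match 1/alpha)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

h = np.logspace(0, 4, 5)                     # suction heads, cm
print(van_genuchten_theta(h, 0.078, 0.43, 0.036, 1.56))
```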
Hogan, Thomas J
2012-05-01
The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and draw conclusions from their collective findings. The literature was searched for economic evaluations of influenza vaccination injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas to improve economic evaluations were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, representing a statistically significant result compared with the proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. Evaluations shared a basic structure, but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given stated misgivings, the methods of estimating the influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm they are accurate, reliable and appropriate to economic evaluation purposes.
Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorok, R.; Butcher, M.; LaTier, A.
1995-12-31
The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.
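The Monte Carlo approach can be sketched generically: sample the exposure-parameter distributions and form the dose quotient. All distributions below are illustrative placeholders, not the Clark Fork site values:

```python
# Hedged sketch of Monte Carlo exposure modeling:
# daily dose = dietary concentration x intake / body weight.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
conc = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)   # mg/kg in diet item
intake = rng.normal(0.03, 0.005, size=n).clip(min=0)        # kg food/day
bw = rng.normal(1.5, 0.2, size=n).clip(min=0.5)             # kg body weight

dose = conc * intake / bw                                   # mg/kg-bw/day
print(f"median dose = {np.median(dose):.3f}, "
      f"95th percentile = {np.percentile(dose, 95):.3f} mg/kg-bw/day")
```

The percentile output is what distinguishes this approach from a screening-level model, which would report a single conservative point estimate instead.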
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Ishizuka, Tomohiro; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu
2015-07-01
We investigate a method to estimate the spectral images of reduced scattering coefficients and absorption coefficients of in vivo exposed brain tissue in the range from visible to near-infrared wavelengths (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, the multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression μs' = aλ^(-b) are estimated as the scattering parameters. The spectra of absorption and reduced scattering coefficients are reconstructed from the absorption and scattering parameters, and finally, the spectral images of absorption and reduced scattering coefficients are estimated. We performed simultaneous recordings of spectral diffuse reflectance images and electrophysiological signals for in vivo exposed rat brain during cortical spreading depression (CSD) evoked by topical application of KCl. Changes in the total hemoglobin concentration and the tissue oxygen saturation imply a temporary change in cerebral blood flow during CSD. A change in the reduced scattering coefficient was observed before the profound increase in the total hemoglobin concentration, and its occurrence was synchronized with the negative dc shift of the local field potential.
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1984-01-01
Several papers addressing image analysis and pattern recognition techniques for satellite imagery are presented. Texture classification, image rectification and registration, spatial parameter estimation, and surface fitting are discussed.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
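One practical anchor when reasoning about priors for the precision parameter is the prior expected number of clusters, E[K] = α(ψ(α + n) − ψ(α)) for sample size n, where ψ is the digamma function. A minimal sketch (this identity is standard Dirichlet-process theory, not the specific prior construction of the paper):

```python
# Hedged sketch: how the DP precision parameter alpha maps to the
# prior expected number of clusters for a sample of size n.
import numpy as np
from scipy.special import digamma

def expected_clusters(alpha, n):
    return alpha * (digamma(alpha + n) - digamma(alpha))

for alpha in (0.1, 1.0, 5.0):
    print(f"alpha = {alpha}: E[K | n=100] = {expected_clusters(alpha, 100):.2f}")
```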
Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.
2017-01-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
Direct system parameter identification of mechanical structures with application to modal analysis
NASA Technical Reports Server (NTRS)
Leuridan, J. M.; Brown, D. L.; Allemang, R. J.
1982-01-01
In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
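The local-versus-global contrast can be reproduced on any multimodal objective. A minimal sketch using scipy's Nelder-Mead and the third-party `cma` package (the Rastrigin test function stands in for a gene-expression model fit; not the paper's setup):

```python
# Hedged sketch: local simplex search vs. global CMA-ES on a multimodal objective.
import numpy as np
from scipy.optimize import minimize
import cma  # pip install cma

def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x0 = np.full(5, 3.0)                                   # deliberately far from 0
local = minimize(rastrigin, x0, method="Nelder-Mead")  # often traps in a local min
xbest, es = cma.fmin2(rastrigin, x0, sigma0=2.0, options={"verbose": -9})
print("Nelder-Mead:", local.fun, " CMA-ES:", rastrigin(xbest))
```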
Taimouri, Vahid; Afacan, Onur; Perez-Rossello, Jeannette M.; Callahan, Michael J.; Mulkern, Robert V.; Warfield, Simon K.; Freiman, Moti
2015-01-01
Purpose: To evaluate the effect of the spatially constrained incoherent motion (SCIM) method on improving the precision and robustness of fast and slow diffusion parameter estimates from diffusion-weighted MRI in liver and spleen, in comparison to the independent voxel-wise intravoxel incoherent motion (IVIM) model. Methods: We collected diffusion-weighted MRI (DW-MRI) data from 29 subjects (5 healthy subjects and 24 patients with Crohn's disease in the ileum). We evaluated parameter estimates' robustness against different combinations of b-values (i.e., 4 b-values and 7 b-values) by comparing the variance of the estimates obtained with the SCIM and the independent voxel-wise IVIM model. We also evaluated the improvement in the precision of parameter estimates by comparing the coefficient of variation (CV) of the SCIM parameter estimates to that of the IVIM. Results: The SCIM method was more robust compared to IVIM (up to 70% in liver and spleen) for different combinations of b-values. Also, the CV values of the parameter estimates obtained using the SCIM method were significantly lower compared to repeated acquisition and signal averaging estimated using IVIM, especially for the fast diffusion parameter in liver (CV_IVIM = 46.61 ± 11.22, CV_SCIM = 16.85 ± 2.160, p < 0.001) and spleen (CV_IVIM = 95.15 ± 19.82, CV_SCIM = 52.55 ± 1.91, p < 0.001). Conclusions: The SCIM method characterizes fast and slow diffusion more precisely compared to the independent voxel-wise IVIM model fitting in the liver and spleen. PMID:25832079
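The voxel-wise IVIM baseline that SCIM regularizes is a bi-exponential signal decay, S(b) = S0·(f·exp(−b·D*) + (1 − f)·exp(−b·D)), fit per voxel. A minimal sketch with synthetic data (the SCIM spatial prior itself is not reproduced here):

```python
# Hedged sketch of a voxel-wise IVIM fit by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

b = np.array([0, 50, 100, 200, 400, 600, 800], dtype=float)   # s/mm^2
true = (1.0, 0.15, 20e-3, 1.2e-3)                             # S0, f, D*, D
rng = np.random.default_rng(1)
signal = ivim(b, *true) * (1 + 0.01 * rng.normal(size=b.size))

popt, _ = curve_fit(ivim, b, signal, p0=(1.0, 0.1, 10e-3, 1e-3),
                    bounds=([0, 0, 1e-3, 1e-5], [2, 1, 1, 1e-2]))
print(dict(zip(["S0", "f", "D*", "D"], popt)))
```

The instability of the independent per-voxel fit, especially for the fast-diffusion parameters f and D*, is exactly what motivates the spatial regularization the paper evaluates.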
Yadollahi, Azadeh; Montazeri, Aman; Azarbarzin, Ali; Moussavi, Zahra
2013-03-01
Tracheal respiratory sound analysis is a simple and non-invasive way to study the pathophysiology of the upper airway and has recently been used for acoustic estimation of respiratory flow and sleep apnea diagnosis. However, in none of the previous studies was the respiratory flow-sound relationship studied in people with obstructive sleep apnea (OSA), nor during sleep. In this study, we recorded tracheal sound, respiratory flow, and head position from eight non-OSA and 10 OSA individuals during sleep and wakefulness. We compared the flow-sound relationship and variations in model parameters from wakefulness to sleep within and between the two groups. The results show that during both wakefulness and sleep, the flow-sound relationship follows a power law, but with different parameters. Furthermore, the variations in model parameters may be representative of the OSA pathology. The other objective of this study was to examine the accuracy of respiratory flow estimation algorithms during sleep: we investigated two approaches for calibrating the model parameters using the known data recorded during either wakefulness or sleep. The results show that the acoustical respiratory flow estimation parameters change from wakefulness to sleep. Therefore, if the model is calibrated using wakefulness data, although the estimated respiratory flow follows the relative variations of the real flow, the quantitative flow estimation error would be high during sleep. On the other hand, when the calibration parameters are extracted from tracheal sound and respiratory flow recordings during sleep, the respiratory flow estimation error is less than 10%.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produced parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.
NASA Astrophysics Data System (ADS)
Barsuk, Alexandr A.; Paladi, Florentin
2018-04-01
The dynamic behavior of a thermodynamic system, described by one order parameter and one control parameter, in a small neighborhood of ordinary and bifurcation equilibrium values of the system parameters is studied. Using the general methods of investigating the branching (bifurcations) of solutions for nonlinear equations, we performed an exhaustive analysis of the order parameter dependences on the control parameter in a small vicinity of the equilibrium values of the parameters, including the stability analysis of the equilibrium states, and the asymptotic behavior of the order parameter dependences on the control parameter (bifurcation diagrams). The peculiarities of the transition to an unstable state of the system are discussed, and estimates of the transition time to the unstable state in the neighborhood of ordinary and bifurcation equilibrium values of parameters are given. The influence of an external field on the dynamic behavior of the thermodynamic system is analyzed, and the peculiarities of the system's dynamic behavior near the ordinary and bifurcation equilibrium values of parameters in the presence of an external field are discussed. The dynamic process of magnetization of a ferromagnet is discussed using the general methods of bifurcation and stability analysis presented in the paper.
Kumar, K Vasanth; Sivanesan, S
2005-08-31
A comparative analysis of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using experimental equilibrium data for safranin adsorption onto activated carbon at two solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fit the experimental equilibrium data well. The results showed that the non-linear method could be a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
Kumar, K Vasanth
2006-08-21
The experimental equilibrium data of malachite green adsorption onto activated carbon were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherms by linear and non-linear methods. A comparison between the linear and non-linear methods of estimating the isotherm parameters was discussed. Four different linearized forms of the Langmuir isotherm were also discussed. The results confirmed the non-linear method as a better way to obtain the isotherm parameters. The best-fitting isotherms were the Langmuir and Redlich-Peterson isotherms. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
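The linear-versus-nonlinear comparison is easy to reproduce for the Langmuir isotherm q = qm·KL·Ce/(1 + KL·Ce). A minimal sketch with synthetic data, fitting both by nonlinear least squares and via the common linearization Ce/q = Ce/qm + 1/(qm·KL):

```python
# Hedged sketch: linear vs. non-linear estimation of Langmuir parameters.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    return qm * kl * ce / (1 + kl * ce)

ce = np.array([5, 10, 20, 40, 80, 160], dtype=float)   # equilibrium conc., mg/L
rng = np.random.default_rng(3)
q = langmuir(ce, 40.0, 0.05) * (1 + 0.03 * rng.normal(size=ce.size))

# Non-linear fit on the original equation
(qm_nl, kl_nl), _ = curve_fit(langmuir, ce, q, p0=(30.0, 0.01))

# Linearized fit: Ce/q = (1/qm) * Ce + 1/(qm*KL)
slope, intercept = np.polyfit(ce, ce / q, 1)
qm_lin, kl_lin = 1 / slope, slope / intercept

print(f"non-linear: qm={qm_nl:.1f}, KL={kl_nl:.3f}")
print(f"linearized: qm={qm_lin:.1f}, KL={kl_lin:.3f}")
```

The linearization distorts the error structure of the data, which is the usual explanation for why the non-linear fit tends to give better parameter estimates.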
Parameter Estimation for Viscoplastic Material Modeling
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.
1997-01-01
A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.
Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Son, Woo-Sung
2015-01-01
This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process/body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and to build multiple regression models for estimating skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6-18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R² had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, with a variance inflation factor (VIF) of <2. CBCT-generated CVM made it possible to include parameters from both the odontoid process and the body of the second cervical vertebra in the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status.
Curvature constraints from large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dio, Enea Di; Montanari, Francesco; Raccanelli, Alvise
We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter Ω_K with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
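The Fisher-matrix machinery used here can be illustrated on a toy model: with Jacobian J of the observables with respect to the parameters and data covariance C, the Fisher matrix is F = JᵀC⁻¹J, and the marginalized 1-sigma error on parameter i is sqrt((F⁻¹)ᵢᵢ). A sketch with a toy two-parameter model, not the CLASS setup:

```python
# Hedged sketch of a Fisher-matrix parameter forecast.
import numpy as np

x = np.linspace(0.1, 1.0, 50)

def model(theta):              # toy observable: amplitude A and tilt k
    A, k = theta
    return A * x**k

theta0 = np.array([1.0, 0.5])  # fiducial parameter values
eps = 1e-6
J = np.column_stack([(model(theta0 + eps * e) - model(theta0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])          # numerical Jacobian
C = np.diag(np.full(x.size, 0.01**2))              # measurement covariance
F = J.T @ np.linalg.solve(C, J)                    # Fisher matrix
errors = np.sqrt(np.diag(np.linalg.inv(F)))        # marginalized 1-sigma errors
print("marginalized errors:", errors)
```

Omitting a systematic term (such as magnification) corresponds to evaluating J with the wrong model, which is what produces the parameter biases the abstract describes.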
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Distributed Estimation using Bayesian Consensus Filtering
2014-06-06
Phobos laser ranging: Numerical Geodesy experiments for Martian system science
NASA Astrophysics Data System (ADS)
Dirkx, D.; Vermeersen, L. L. A.; Noomen, R.; Visser, P. N. A. M.
2014-09-01
Laser ranging is emerging as a technology for use over (inter)planetary distances, having the advantage of high (mm-cm) precision and accuracy and low mass and power consumption. We have performed numerical simulations to assess the science return in terms of geodetic observables of a hypothetical Phobos lander performing active two-way laser ranging with Earth-based stations. We focus our analysis on the estimation of Phobos and Mars gravitational, tidal and rotational parameters. We explicitly include systematic error sources in addition to uncorrelated random observation errors. This is achieved through the use of consider covariance parameters, specifically the ground station position and observation biases. Uncertainties for the consider parameters are set at 5 mm and at 1 mm for the Gaussian uncorrelated observation noise (for an observation integration time of 60 s). We perform the analysis for a mission duration up to 5 years. It is shown that a Phobos Laser Ranging (PLR) mission can contribute to a better understanding of the Martian system, opening the possibility for improved determination of a variety of physical parameters of Mars and Phobos. The simulations show that the mission concept is especially suited for estimating Mars tidal deformation parameters, estimating degree 2 Love numbers with absolute uncertainties at the 10^-2 to 10^-4 level after 1 and 4 years, respectively, and providing separate estimates of the Martian quality factors at Sun- and Phobos-forced frequencies. The estimation of Phobos libration amplitudes and gravity field coefficients provides an estimate of Phobos' relative equatorial and polar moments of inertia with absolute uncertainties of 10^-4 and 10^-7, respectively, after 1 year. The observation of Phobos tidal deformation will be able to differentiate between a rubble pile and monolithic interior within 2 years. For all parameters, systematic errors have a much stronger influence (per unit uncertainty) than the uncorrelated Gaussian observation noise. This indicates the need for the inclusion of systematic errors in simulation studies and special attention to the mitigation of these errors in mission and system design.
Influence of counting chamber type on CASA outcomes of equine semen analysis.
Hoogewijs, M K; de Vliegher, S P; Govaere, J L; de Schauwer, C; de Kruif, A; van Soom, A
2012-09-01
Sperm motility is considered to be one of the key features of semen analysis. Assessment of motility is frequently performed using computer-assisted sperm analysis (CASA). Nevertheless, no uniform standards exist for analysing a semen sample using CASA. We hypothesised that the type of counting chamber used might influence the results of analysis and aimed to study the effect of chamber type on the estimated concentration and motility of an equine semen sample assessed using CASA. Commonly used disposable Leja chambers of different depths were compared with disposable and reusable ISAS chambers, a Makler chamber and a World Health Organization (WHO) motility slide. Motility parameters and concentrations obtained with CASA using these different chambers were analysed. The NucleoCounter was used as the gold standard for determining concentration. Concentration and motility parameters were significantly influenced by the chamber type used. Against the NucleoCounter gold standard, the correlation coefficients were low for all of the chambers evaluated, with the exception of the 12 µm deep Leja chamber. Filling a chamber by capillary forces resulted in a lower observed concentration and reduced motility parameters. All chambers evaluated in this study resulted in significantly lower progressive motility than the WHO-prepared slide, with the exception of the Makler chamber, which resulted in a slight, but statistically significant, increase in progressive motility estimates. Computer-assisted sperm analysis can only provide a rough estimate of sperm concentration, and overestimation is likely when drop-filled slides with a coverslip are used. Motility estimates using CASA are highly influenced by the counting chamber; therefore, a complete description of the chamber type used should be provided in semen reports and in scientific articles. © 2011 EVJ Ltd.
Dynamical Analysis of an SEIT Epidemic Model with Application to Ebola Virus Transmission in Guinea.
Li, Zhiming; Teng, Zhidong; Feng, Xiaomei; Li, Yingke; Zhang, Huiguo
2015-01-01
In order to investigate the transmission mechanism of infectious individuals with Ebola virus, we establish an SEIT (susceptible, exposed in the latent period, infectious, and treated/recovered) epidemic model. The basic reproduction number is defined. A mathematical analysis of the existence and stability of the disease-free equilibrium and endemic equilibrium is given. As an application of the model, we use the recognized infectious and death cases in Guinea to estimate the model parameters by the least squares method. With suitable parameter values, we obtain the estimated value of the basic reproduction number and analyze sensitivity and uncertainty using partial rank correlation coefficients.
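The least-squares fitting step can be sketched for a simplified SEIT-type system: integrate the compartmental ODEs and minimize the residual against observed case counts. Parameter values and data below are synthetic, not the Guinea estimates:

```python
# Hedged sketch: least-squares parameter estimation for a simplified
# SEIT-type compartmental model (beta, sigma, gamma assumed constant).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def seit_rhs(t, y, beta, sigma, gamma):
    S, E, I, T = y
    N = S + E + I + T
    return [-beta * S * I / N, beta * S * I / N - sigma * E,
            sigma * E - gamma * I, gamma * I]

def infectious_curve(params, t):
    sol = solve_ivp(seit_rhs, (t[0], t[-1]), [9990, 0, 10, 0],
                    t_eval=t, args=tuple(params))
    return sol.y[2]                                   # I(t) compartment

t = np.arange(0, 120, 5.0)                            # days
obs = infectious_curve([0.35, 1 / 9, 1 / 12], t)      # synthetic "truth"
obs = obs * (1 + 0.05 * np.random.default_rng(7).normal(size=t.size))

fit = least_squares(lambda p: infectious_curve(p, t) - obs,
                    x0=[0.2, 0.1, 0.05], bounds=(1e-4, 2.0))
print("estimated beta, sigma, gamma:", fit.x)         # R0 ~ beta / gamma here
```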
NASA Astrophysics Data System (ADS)
Yu, Wenwu; Cao, Jinde
2007-09-01
Parameter identification of dynamical systems from time series has received increasing interest due to its wide applications in secure communication, pattern recognition, neural networks, and so on. Given the driving system, parameters can be estimated from the time series by using an adaptive control algorithm. Recently, it has been reported that the parameters of some stable systems are difficult to identify [Li et al., Phys. Lett. A 333, 269-270 (2004); Remark 5 in Yu and Cao, Physica A 375, 467-482 (2007); and Li et al., Chaos 17, 038101 (2007)]. In this paper, the question of whether parameters can be identified from time series is investigated. Through detailed analyses, the problem of why the parameters of stable systems can hardly be estimated is discussed. Some interesting examples are given to verify the proposed analysis.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
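A minimal single-tone frequency estimator of the kind compared above locates the FFT peak and refines it by interpolation. The sketch below uses parabolic interpolation on log-magnitudes as a simple stand-in for the iterative Fourier-coefficient refinement of IFEIF, which is not reproduced here:

```python
# Hedged sketch: single-tone frequency estimation via FFT peak plus
# parabolic interpolation around the peak bin.
import numpy as np

fs, n = 1000.0, 1024
f_true = 123.37
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f_true * t) + 0.05 * np.random.default_rng(0).normal(size=n)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))   # windowed magnitude spectrum
k = spec.argmax()                               # coarse peak bin
a, b, c = np.log(spec[k - 1:k + 2])             # neighbors around the peak
delta = 0.5 * (a - c) / (a - 2 * b + c)         # sub-bin offset, in bins
print("estimate:", (k + delta) * fs / n, "Hz  (true:", f_true, "Hz)")
```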
Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye
2016-02-01
Most existing image modification detection methods that are based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimates of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm the efficacy of our method. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The impact of variation in scaling factors on the estimation of ...
Many physiologically based pharmacokinetic (PBPK) models include values for metabolic rate parameters extrapolated from in vitro metabolism studies using scaling factors such as mg of microsomal protein per gram of liver (MPPGL) and liver mass (FVL). Variation in scaling factor values impacts metabolic rate parameter estimates (Vmax) and hence estimates of internal dose used in dose-response analysis. The impacts of adult human variation in MPPGL and FVL on estimates of internal dose were assessed using a human PBPK model for BDCM for several internal dose metrics for two exposure scenarios (a single 0.25 liter drink of water or a 10 minute shower) under plausible (5 micrograms/L) and high-level (20 micrograms/L) water concentrations. For both concentrations, all internal dose metrics changed less than 5% for the showering scenario (combined inhalation and dermal exposure). In contrast, a 27-fold variation in the area under the curve for BDCM in venous blood was observed at both oral exposure concentrations, whereas the total amount of BDCM metabolized in liver was relatively unchanged. This analysis demonstrates that variability in the scaling factors used for in vitro to in vivo extrapolation (IVIVE) of metabolic rate parameters can have a significant route-dependent impact on estimates of internal dose under environmentally relevant exposure scenarios. This indicates the need to evaluate both uncertainty and variability for scaling factors used for IVIVE.
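The IVIVE scaling arithmetic at issue is linear in both scaling factors, which is why their population variability propagates directly into Vmax. A sketch with illustrative placeholder values (units and numbers are assumptions for demonstration, not the BDCM model's):

```python
# Hedged sketch of IVIVE scaling: in vitro microsomal rate -> whole-liver Vmax.
mppgl = 32.0        # mg microsomal protein per g liver (illustrative)
fvl = 0.026         # liver mass as fraction of body weight (illustrative)
bw_kg = 70.0        # body weight, kg
vmax_invitro = 0.5  # nmol/min per mg microsomal protein (illustrative)

liver_g = fvl * bw_kg * 1000.0                       # liver mass in grams
vmax_whole_liver = vmax_invitro * mppgl * liver_g    # nmol/min, whole liver
print(f"scaled Vmax = {vmax_whole_liver:.3e} nmol/min")
# Because Vmax is a product of the scaling factors, a given percent variation
# in MPPGL or FVL maps one-to-one into Vmax, and from there into internal dose.
```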
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretic framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (like in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
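The frequency and damping part of these invariants is a direct function of the eigenvalues of the identified discrete-time state matrix A. A minimal sketch (the modal-mass equations discussed above need the full identified (A, B, C) triple and are omitted here):

```python
# Hedged sketch: natural frequencies and damping ratios from the eigenvalues
# of a discrete-time state matrix A identified at sampling interval dt.
import numpy as np
from scipy.linalg import expm

def modal_from_state_matrix(A, dt):
    """Natural frequencies (Hz) and damping ratios from a discrete-time A."""
    lam_c = np.log(np.linalg.eigvals(A)) / dt   # map poles to continuous time
    lam_c = lam_c[lam_c.imag > 0]               # keep one of each conjugate pair
    wn = np.abs(lam_c)                          # natural frequencies, rad/s
    zeta = -lam_c.real / wn                     # damping ratios
    return wn / (2 * np.pi), zeta

# 1-DOF check: a 5 Hz mode with 2% damping, sampled at 100 Hz
wn, z, dt = 2 * np.pi * 5.0, 0.02, 0.01
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * z * wn]])
print(modal_from_state_matrix(expm(Ac * dt), dt))   # ~ (5.0 Hz, 0.02)
```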
Effects of sampling close relatives on some elementary population genetics analyses.
Wang, Jinliang
2018-01-01
Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (H_e), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach of simply ignoring ECR could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., H_e) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations and on parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function is an approximation of the posterior uncertainty of the parameter estimates. The uncertainty of estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the method of First Order Second Moment (FOSM). Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions at different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
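The M-estimation alternative can be sketched with an interaction-term regression fit by Huber-weighted iteratively reweighted least squares, here via statsmodels. This illustrates the estimator family, not the paper's two-level model or its R program:

```python
# Hedged sketch: robust (Huber M-estimator) moderation analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x, m = rng.normal(size=n), rng.normal(size=n)
# Heavy-tailed errors violate the normality assumption of classical analysis
y = 0.5 * x + 0.3 * m + 0.4 * x * m + rng.standard_t(df=3, size=n)

X = sm.add_constant(np.column_stack([x, m, x * m]))
fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(fit.params)   # the x*m coefficient is the moderation effect
```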
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
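The sampling-plus-emulation workflow can be sketched generically: Latin hypercube samples of the parameter space drive a (toy) model, and a simple first-order index ranks the parameters. The study's variance-based analysis over 46 VIC parameters is more elaborate; everything below is an illustrative stand-in:

```python
# Hedged sketch: Latin hypercube sampling plus a crude first-order
# sensitivity ranking for a toy stand-in model.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=1)
u = sampler.random(n=2000)                               # unit-cube samples
params = qmc.scale(u, [0.1, 0.0, 1.0], [1.0, 0.5, 3.0])  # hypothetical bounds

def toy_runoff(p):                                       # stand-in for VIC output
    infilt, depth, exp_ = p.T
    return infilt**2 + 0.3 * depth + 0.05 * exp_ + 0.2 * infilt * depth

y = toy_runoff(params)
for i, name in enumerate(["infilt", "depth", "exp"]):    # hypothetical names
    r = np.corrcoef(params[:, i], y)[0, 1]
    print(f"{name}: first-order (squared-correlation) index ~ {r**2:.2f}")
```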
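A minimal sketch of the sampling-plus-emulation idea is given below: a space-filling Latin hypercube design over a toy three-parameter model, with crude first-order variance-based indices computed by binning. The stand-in model and parameter names are invented; they are not VIC parameters.

```python
# A minimal sketch of variance-based global sensitivity analysis with
# space-filling Latin hypercube sampling; the "model" is a cheap
# stand-in function rather than an expensive hydrologic model.
import numpy as np
from scipy.stats import qmc

def model(theta):                        # stand-in for an expensive model
    infilt, depth, exp_ = theta.T
    return infilt**2 + 0.5 * depth + 0.1 * infilt * exp_

sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(n=4096), [0, 0.1, 1], [0.4, 2.0, 30])
y = model(X)

# First-order Sobol-style indices estimated by binning each parameter
# and comparing the variance of conditional means to the total variance.
for j, name in enumerate(["infilt", "depth", "exp"]):
    bins = np.quantile(X[:, j], np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(X[:, j], bins) - 1, 0, 19)
    cond_means = np.array([y[idx == k].mean() for k in range(20)])
    S1 = cond_means.var() / y.var()      # Var(E[y|x_j]) / Var(y)
    print(f"S1({name}) ~ {S1:.2f}")
```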
Coherent Power Analysis in Multilevel Studies Using Parameters from Surveys
ERIC Educational Resources Information Center
Rhoads, Christopher
2017-01-01
Researchers designing multisite and cluster randomized trials of educational interventions will usually conduct a power analysis in the planning stage of the study. To conduct the power analysis, researchers often use estimates of intracluster correlation coefficients and effect sizes derived from an analysis of survey data. When there is…
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally use fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, thereby providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
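The following sketch illustrates the idea of adapting a formerly fixed noise parameter to motion intensity in a scalar Kalman filter; the intensity measure, the threshold and the noise values are assumptions for illustration, not the authors' algorithm.

```python
# A minimal sketch of an adaptive scalar Kalman filter whose process
# noise variance Q is switched according to motion intensity, estimated
# here from short-time variance of the differenced signal.
import numpy as np

def adaptive_kf(z, q_low=1e-4, q_high=1e-1, r=1e-2, win=32):
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k in range(len(z)):
        seg = z[max(0, k - win):k + 1]
        intensity = np.var(np.diff(seg)) if len(seg) > 2 else 0.0
        q = q_high if intensity > 1e-2 else q_low  # adapt Q to motion
        p = p + q                                  # predict (random walk)
        K = p / (p + r)                            # Kalman gain
        x = x + K * (z[k] - x)                     # measurement update
        p = (1 - K) * p
        out[k] = x
    return out

t = np.linspace(0, 10, 1000)
truth = np.where(t > 5, np.sin(4 * np.pi * t), 0.2)  # rest, then motion
z = truth + 0.1 * np.random.default_rng(2).normal(size=t.size)
print(np.mean((adaptive_kf(z) - truth) ** 2))        # filtered MSE
```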
Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.
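As a pointer to the underlying statistics, the sketch below evaluates the survival probability of a component under a two-parameter Weibull strength distribution; the characteristic strength and Weibull modulus are invented, since real values would be estimated from rupture data of naturally flawed specimens.

```python
# A minimal sketch of time-independent reliability under a
# two-parameter Weibull strength distribution, as used in CARES-type
# analyses; numbers are illustrative only.
import numpy as np

def weibull_reliability(stress, sigma0, m):
    """Survival probability of a component at a given applied stress."""
    return np.exp(-(stress / sigma0) ** m)

sigma0, m = 400.0, 10.0   # characteristic strength (MPa) and Weibull modulus
for s in (200.0, 300.0, 400.0):
    print(f"stress {s:5.1f} MPa -> reliability {weibull_reliability(s, sigma0, m):.4f}")
```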
NASA Instrument Cost/Schedule Model
NASA Technical Reports Server (NTRS)
Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George
2011-01-01
NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system-level cost estimation tool; a subsystem-level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves, and a demonstration of the NICM tool suite.
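One of the listed data mining methods, bootstrap cross validation, can be sketched as follows for a toy log-linear cost estimating relationship; the cost-mass relation and all numbers are synthetic and not from the NICM database.

```python
# A minimal sketch of bootstrap cross-validation for a log-linear cost
# estimating relationship; data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
mass = rng.uniform(5, 100, size=40)                 # instrument mass (kg)
cost = 10 * mass**0.7 * rng.lognormal(0, 0.2, 40)   # synthetic cost
X = np.column_stack([np.ones(40), np.log(mass)])
y = np.log(cost)

errors = []
for _ in range(1000):
    boot = rng.integers(0, 40, size=40)             # resample with replacement
    oob = np.setdiff1d(np.arange(40), boot)         # out-of-bag test points
    if oob.size == 0:
        continue
    beta = np.linalg.lstsq(X[boot], y[boot], rcond=None)[0]
    errors.append(np.mean((y[oob] - X[oob] @ beta) ** 2))
print("bootstrap CV mean squared log-error:", np.mean(errors))
```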
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Gair, Jonathan R.
2014-12-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over, using a prior distribution constructed by Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
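The interpolation step can be sketched as plain Gaussian process regression over a one-dimensional parameter; the squared-exponential kernel, the hyperparameters and the toy "waveform difference" below are assumptions for illustration.

```python
# A minimal sketch of Gaussian process regression interpolating a
# scalar "waveform difference" across parameter space from a small
# training set; kernel and data are assumptions.
import numpy as np

def rbf(a, b, ell=0.3, s2=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

theta_train = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # training parameters
d_train = np.sin(6 * theta_train)                   # toy waveform difference
K = rbf(theta_train, theta_train) + 1e-8 * np.eye(5)
alpha = np.linalg.solve(K, d_train)

theta_star = np.linspace(0, 1, 5)
k_star = rbf(theta_star, theta_train)
mean = k_star @ alpha                               # GP posterior mean
var = np.diag(rbf(theta_star, theta_star) - k_star @ np.linalg.solve(K, k_star.T))
print(np.c_[theta_star, mean, var])                 # prediction and uncertainty
```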
Mixture distributions of wind speed in the UAE
NASA Astrophysics Data System (ADS)
Shin, J.; Ouarda, T.; Lee, T. S.
2013-12-01
Wind speed probability distributions are commonly used to estimate potential wind energy. The two-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it cannot properly model wind speed regimes whose distributions present bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without first investigating the wind speed distribution. Because of these mixture distributional characteristics of wind speed data, the application of mixture distributions in the frequency analysis of wind speed deserves further investigation. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and some of them detected mixture distributional characteristics of wind speed. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. To improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested in the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used. The Weibull and Kappa distributions were employed as representatives of conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: the Normal, Gamma, Weibull and extreme value type-one (EV-1) distributions. Three parameter estimation methods were employed to estimate the parameters of the mixture distributions: the expectation maximization algorithm, the least squares method and the meta-heuristic maximum likelihood (MHML) method. To compare the goodness-of-fit of the tested distributions and parameter estimation methods for the sample wind data, the adjusted coefficient of determination, the Bayesian information criterion (BIC) and the chi-squared statistic were computed. Results indicate that MHML presents the best parameter estimation performance for the mixture distributions. In most of the 7 stations, mixture distributions give the best fit. When a wind speed regime shows mixture distributional characteristics, it usually also presents kurtotic behavior; for these stations, applications of mixture distributions show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
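A minimal sketch of fitting a two-component Weibull-Weibull mixture by direct maximum likelihood is shown below as a stand-in for the estimation methods compared in the study; the synthetic sample and starting values are illustrative.

```python
# A minimal sketch of maximum likelihood fitting of a two-component
# Weibull mixture to (synthetic) wind speed data.
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

rng = np.random.default_rng(4)
v = np.concatenate([weibull_min.rvs(2.0, scale=3.0, size=700, random_state=rng),
                    weibull_min.rvs(4.0, scale=9.0, size=300, random_state=rng)])

def nll(p):
    w = 1 / (1 + np.exp(-p[0]))                  # mixture weight in (0, 1)
    k1, s1, k2, s2 = np.exp(p[1:])               # positive shapes and scales
    pdf = (w * weibull_min.pdf(v, k1, scale=s1)
           + (1 - w) * weibull_min.pdf(v, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))         # negative log-likelihood

res = minimize(nll, x0=[0.0, np.log(2), np.log(3), np.log(4), np.log(8)],
               method="Nelder-Mead", options={"maxiter": 5000})
print(res.x, res.fun)
```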
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.; Law, Beverly E.; Siqueira, Paul R.
2000-01-01
Parameters describing the vertical structure of forests, for example tree height, height-to-base-of-live-crown, underlying topography, and leaf area density, bear on land-surface, biogeochemical, and climate modeling efforts. Single, fixed-baseline interferometric synthetic aperture radar (INSAR) normalized cross-correlations constitute two observations from which to estimate forest vertical structure parameters: cross-correlation amplitude and phase. Multialtitude INSAR observations increase the effective number of baselines, potentially enabling the estimation of a larger set of vertical-structure parameters. Polarimetry and polarimetric interferometry can further extend the observation set. This paper describes the first acquisition of multialtitude INSAR for the purpose of estimating the parameters describing a vegetated land surface. These data were collected over ponderosa pine in central Oregon near longitude and latitude -121 37 25 and 44 29 56. The JPL interferometric TOPSAR system was flown at the standard 8-km altitude, and also at 4-km and 2-km altitudes, in a race track. A reference line including the above coordinates was maintained at 35 deg for both the northeast heading and the return southwest heading, at all altitudes. In addition to the three altitudes for interferometry, one line was flown with full zero-baseline polarimetry at the 8-km altitude. A preliminary analysis of part of the data collected suggests that they are consistent with one of two physical models of the vegetation: (1) a single-layer, randomly oriented forest volume with a very strong ground return, or (2) a multilayered, randomly oriented volume; a homogeneous, single-layer model with no ground return cannot account for the multialtitude correlation amplitudes. Below, the demonstration of this inconsistency with a single-layer model is followed by analysis scenarios that include either the ground or a layered structure. The ground returns suggested by this preliminary analysis seem too strong to be plausible, but the parameters describing a two-layer model compare reasonably well to a field-measured probability distribution of tree heights in the area.
Comment on "High resolution coherence analysis between planetary and climate oscillations"
NASA Astrophysics Data System (ADS)
Holm, Sverre
2018-07-01
The paper by Scafetta entitled "High resolution coherence analysis between planetary and climate oscillations" (May 2016) claims coherence between planetary movements and the global temperature anomaly. The claim is based on data analysis using the canonical covariance analysis (CCA) estimator for the magnitude squared coherence (MSC). It assumes a model with a predetermined number of sinusoids for the climate data. The results are highly dependent on this prior assumption, and may therefore be criticized for being based on the opposite of a null hypothesis. More importantly, since values of key parameters in the CCA method are not given, experiments have been performed using the software of the original authors of the CCA estimator. The purpose was to replicate the results of Scafetta using what were perceived to be the most probable parameter values. Despite best efforts, this was not possible.
Crystal-Chemical Analysis of Soil at Rocknest, Gale Crater
NASA Technical Reports Server (NTRS)
Morrison, S. M.; Downs, R. T.; Blake, D. F.; Bish, D. L.; Ming, D. W.; Morris, R. V.; Yen, A. S.; Chipera, S. J.; Treiman, A. H.; Vaniman, D. T.;
2013-01-01
The CheMin instrument on the Mars Science Laboratory rover Curiosity performed X-ray diffraction analysis on Martian soil [1] at Rocknest in Gale Crater. In particular, crystalline phases from scoop 5 were identified and analyzed with the Rietveld method [2]. Refined unit-cell parameters are reported in Table 1. Comparing these unit-cell parameters with those in the literature provides an estimate of the chemical composition of the crystalline phases. For instance, Fig. 1 shows the Mg-content of Fa-Fo olivine as a function of the b unit-cell parameter using literature data. Our refined b parameter is indicated by the black triangle.
Challenges of Developing Design Discharge Estimates with Uncertain Data and Information
NASA Astrophysics Data System (ADS)
Senarath, S. U. S.
2016-12-01
This study focuses on design discharge estimates obtained for gauged basins through flood flow frequency analysis. Bulletin 17B (B17B) guidelines are widely used in the USA for developing these design estimates, which are required for many water resources engineering design applications. The guidelines include a set of options for outlier treatment, historical data, and distribution parameter selection, provided as a means of accounting for uncertain data and information, primarily in the flow record. The individual as well as the cumulative effects of each of these choices on design discharge estimates are evaluated in this study using data from several gauges that are part of the United States Geological Survey's Hydro-Climatic Data Network. The results show that despite the availability of rigorous and detailed guidelines for flood frequency analysis, design discharge estimates can still vary substantially from user to user, based on the data and model parameter selection options each user chooses. Thus, the findings of this study have strong implications for water resources engineers and other professionals who use B17B-based design discharge estimates in their work.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function, based on a model of human response over time, is used to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value, which steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
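A heavily hedged sketch of the step-forward idea follows, assuming the Weibull ratio is formed from survival-function values at successive times; this particular functional form and all parameter values are assumptions for illustration, not the published SPRE specification.

```python
# A minimal sketch of a SPRE-style step-forward prediction: each new
# outcome is the prior outcome times a ratio of two-parameter Weibull
# terms at successive times. The survival-function form and numbers
# are assumptions, not the paper's exact model.
import numpy as np

def spre_forward(y0, t, shape, scale):
    surv = np.exp(-(t / scale) ** shape)         # Weibull survival function
    y = [y0]
    for k in range(1, len(t)):
        y.append(y[-1] * surv[k] / surv[k - 1])  # ratio steps outcome forward
    return np.array(y)

t = np.arange(1, 25)                              # weeks after change point
print(spre_forward(y0=95.0, t=t, shape=1.2, scale=40.0))  # weight (kg), toy
```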
NASA Astrophysics Data System (ADS)
Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.
2011-09-01
Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region of interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind while there is a weak agreement on ve.
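The agreement metric used above, the concordance correlation coefficient, can be computed as in the following sketch; the paired parameter vectors are synthetic stand-ins for Ktrans estimated with AIFind and AIFpop.

```python
# A minimal sketch of the concordance correlation coefficient (CCC)
# for comparing two paired parameter estimates; data are synthetic.
import numpy as np

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(5)
ktrans_ind = rng.lognormal(-2, 0.4, size=50)           # via individual AIF (toy)
ktrans_pop = ktrans_ind * rng.lognormal(0, 0.15, 50)   # via population AIF (toy)
print(f"CCC = {ccc(ktrans_ind, ktrans_pop):.2f}")
```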
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Takane, Yoshio
2004-01-01
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and thereby estimate the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector, since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. An observability analysis is performed to determine the conditions for an observable state space model. For higher grades of accelerometers and relatively high sampling frequencies, the error of accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one with bias parameters appended in the state space model and a reduced model without bias parameters.
Ahearn, Elizabeth A.
2004-01-01
Multiple linear-regression equations were developed to estimate the magnitudes of floods in Connecticut for recurrence intervals ranging from 2 to 500 years. The equations can be used for nonurban, unregulated stream sites in Connecticut with drainage areas ranging from about 2 to 715 square miles. Flood-frequency data and hydrologic characteristics from 70 streamflow-gaging stations and the upstream drainage basins were used to develop the equations. The hydrologic characteristics (drainage area, mean basin elevation, and 24-hour rainfall) are used in the equations to estimate the magnitude of floods. Average standard errors of prediction for the equations are 31.8, 32.7, 34.4, 35.9, 37.6 and 45.0 percent for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals, respectively. Simplified equations using only one hydrologic characteristic (drainage area) also were developed. The regression analysis is based on generalized least-squares regression techniques. Observed flows (log-Pearson Type III analysis of the annual maximum flows) from five streamflow-gaging stations in urban basins in Connecticut were compared to flows estimated from national three-parameter and seven-parameter urban regression equations. The comparison shows that the three- and seven-parameter equations used in conjunction with the new statewide equations generally provide reasonable estimates of flood flows for urban sites in Connecticut, although a national urban flood-frequency study indicated that the three-parameter equations significantly underestimated flood flows in many regions of the country. Verification of the accuracy of the three-parameter or seven-parameter national regression equations using new data from Connecticut stations was beyond the scope of this study. A technique for calculating flood flows at streamflow-gaging stations using a weighted average also is described. Two estimates of flood flows (one based on the log-Pearson Type III analyses of the annual maximum flows at the gaging station, and the other from the regression equation) are weighted together based on the years of record at the gaging station and the equivalent years of record value determined from the regression. Weighted averages of flood flows for the 2-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are tabulated for the 70 streamflow-gaging stations used in the regression analysis. Generally, weighted averages give the most accurate estimate of flood flows at gaging stations. An evaluation of Connecticut's streamflow-gaging network was performed to determine whether the spatial coverage and range of geographic and hydrologic conditions are adequately represented for transferring flood characteristics from gaged to ungaged sites. Fifty-one of 54 stations in the current (2004) network support one or more flood needs of federal, state, and local agencies. Twenty-five of 54 stations in the current network are considered high-priority stations by the U.S. Geological Survey because of their contribution to the long-term understanding of floods and their application for regional flood analysis. Enhancements to the network to improve overall effectiveness for regionalization can be made by increasing the spatial coverage of gaging stations, establishing stations in regions of the state that are not well represented, and adding stations in basins with drainage area sizes not represented.
Additionally, the usefulness of the network for characterizing floods can be maintained and improved by continuing operation at the current stations because flood flows can be more accurately estimated at stations with continuous, long-term record.
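A minimal sketch of the record-length weighting described above follows; weighting the two log-space quantile estimates in proportion to years of record and equivalent years of record is a simplified form of the procedure, and the numbers are illustrative.

```python
# A minimal sketch of combining a station (log-Pearson III) flood
# estimate with a regression estimate by record length; a simplified
# form of the weighting described above, with invented numbers.
import numpy as np

def weighted_flood_flow(q_station, n_years, q_regression, n_equiv):
    """Weight two flood-quantile estimates (in log space) by record length."""
    w = n_years / (n_years + n_equiv)
    return 10 ** (w * np.log10(q_station) + (1 - w) * np.log10(q_regression))

# 100-year flood: 40 years of record vs. 12 equivalent years of record.
print(weighted_flood_flow(q_station=12000.0, n_years=40,
                          q_regression=9500.0, n_equiv=12))
```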
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
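The centered likelihood ratio idea can be illustrated on the simplest possible model, X ~ N(theta, 1), where the exact sensitivity of E[X^2] is 2*theta; this toy stands in for the chemical reaction networks and Langevin dynamics targeted by the paper.

```python
# A minimal sketch of plain versus centered likelihood ratio
# sensitivity estimators for d/dtheta E[f(X)], X ~ N(theta, 1).
import numpy as np

rng = np.random.default_rng(6)
theta, n = 1.0, 200000
x = rng.normal(theta, 1.0, size=n)
f = x ** 2                                   # observable; exact answer is 2*theta
score = x - theta                            # d/dtheta log N(x; theta, 1)

plain = np.mean(f * score)                   # plain likelihood ratio estimator
centered = np.mean((f - f.mean()) * score)   # centered: same mean, lower variance
print(plain, centered, 2 * theta)
```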
An inverse problem for a mathematical model of aquaponic agriculture
NASA Astrophysics Data System (ADS)
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that aim to study the system processes. In this paper, we present a system of ODEs which mathematically models the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
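A minimal sketch of recovering a parameter from manufactured data follows, using ordinary least squares over ODE solutions as a simpler stand-in for the collage-theorem approach; the toy two-state model and the interaction rate k are invented.

```python
# A minimal sketch of an ODE inverse problem with manufactured data:
# simulate with a "true" parameter, add noise, then recover the
# parameter by least squares. The fish/plant dynamics are a toy model.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def rhs(t, y, k):
    f, p = y
    return [0.3 * f * (1 - f / 10), 0.1 * p + k * f]  # toy coupled dynamics

t_obs = np.linspace(0, 10, 25)
truth = solve_ivp(rhs, (0, 10), [1.0, 0.5], args=(0.05,), t_eval=t_obs).y
data = truth + 0.02 * np.random.default_rng(7).normal(size=truth.shape)

def misfit(k):
    sim = solve_ivp(rhs, (0, 10), [1.0, 0.5], args=(k,), t_eval=t_obs).y
    return np.sum((sim - data) ** 2)

print(minimize_scalar(misfit, bounds=(0.0, 0.2), method="bounded").x)
```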
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes perform very poorly. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
Surrogate models for efficient stability analysis of brake systems
NASA Astrophysics Data System (ADS)
Nechak, Lyes; Gillot, Frédéric; Besset, Sébastien; Sinou, Jean-Jacques
2015-07-01
This study assesses the capacity of global sensitivity analysis, combined with the kriging formalism, to support robust stability analysis of brake systems, which is too costly when performed with the classical complex eigenvalue analysis (CEA) based on finite element models (FEMs). By considering a simplified brake system, the global sensitivity analysis is first shown to be very helpful for understanding the effects of design parameters on the brake system's stability. This is enabled by the so-called Sobol indices, which discriminate design parameters with respect to their influence on the stability. Consequently, only the uncertainty of influential parameters is taken into account in the following step, namely the surrogate modelling based on kriging. The latter is then demonstrated to be an interesting alternative to FEMs, since it allows, at a lower cost, an accurate estimation of the system's proportions of instability corresponding to the influential parameters.
NASA Technical Reports Server (NTRS)
Szalay, Alexander S.; Jain, Bhuvnesh; Matsubara, Takahiko; Scranton, Ryan; Vogeley, Michael S.; Connolly, Andrew; Dodelson, Scott; Eisenstein, Daniel; Frieman, Joshua A.; Gunn, James E.
2003-01-01
We present measurements of parameters of the three-dimensional power spectrum of galaxy clustering from 222 square degrees of early imaging data in the Sloan Digital Sky Survey (SDSS). The projected galaxy distribution on the sky is expanded over a set of Karhunen-Loeve (KL) eigenfunctions, which optimize the signal-to-noise ratio in our analysis. A maximum likelihood analysis is used to estimate parameters that set the shape and amplitude of the three-dimensional power spectrum of galaxies in the SDSS magnitude-limited sample with r* less than 21. Our best estimates are gamma = 0.188 +/- 0.04 and sigma_8L = 0.915 +/- 0.06 (statistical errors only), for a flat universe with a cosmological constant. We demonstrate that our measurements contain signal from scales at or beyond the peak of the three-dimensional power spectrum. We discuss how the results scale with systematic uncertainties, like the radial selection function. We find that the central values satisfy the analytically estimated scaling relation. We have also explored the effects of evolutionary corrections, various truncations of the KL basis, seeing, sample size, and limiting magnitude. We find that the impact of most of these uncertainties stays within the 2 sigma uncertainties of our fiducial result.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
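The role of the regularization coefficient can be sketched with the Tikhonov-regularized minimum-norm estimate x = G^T (G G^T + lambda I)^{-1} y; the random leadfield and source configuration below are stand-ins for real MEG geometry.

```python
# A minimal sketch of Tikhonov-regularized minimum-norm source
# estimation, showing how lambda trades off data fit and source power.
import numpy as np

rng = np.random.default_rng(8)
n_sensors, n_sources = 50, 500
G = rng.normal(size=(n_sensors, n_sources))        # toy leadfield matrix
x_true = np.zeros(n_sources)
x_true[[40, 340]] = 1.0                            # two active sources
y = G @ x_true + 0.5 * rng.normal(size=n_sensors)  # noisy measurement

for lam in (1e-2, 1e0, 1e2):
    x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"lambda={lam:g}  relative error={err:.2f}")
```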
Parameter identification of material constants in a composite shell structure
NASA Technical Reports Server (NTRS)
Martinez, David R.; Carne, Thomas G.
1988-01-01
One of the basic requirements in engineering analysis is the development of a mathematical model describing the system. Frequently comparisons with test data are used as a measurement of the adequacy of the model. An attempt is typically made to update or improve the model to provide a test verified analysis tool. System identification provides a systematic procedure for accomplishing this task. The terms system identification, parameter estimation, and model correlation all refer to techniques that use test information to update or verify mathematical models. The goal of system identification is to improve the correlation of model predictions with measured test data, and produce accurate, predictive models. For nonmetallic structures the modeling task is often difficult due to uncertainties in the elastic constants. A finite element model of the shell was created, which included uncertain orthotropic elastic constants. A modal survey test was then performed on the shell. The resulting modal data, along with the finite element model of the shell, were used in a Bayes estimation algorithm. This permitted the use of covariance matrices to weight the confidence in the initial parameter values as well as confidence in the measured test data. The estimation procedure also employed the concept of successive linearization to obtain an approximate solution to the original nonlinear estimation problem.
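A minimal sketch of one Bayes estimation step with covariance weighting follows; the sensitivity matrix, the prior covariance and the residuals are invented, and a full implementation would iterate this linearized update in the manner of successive linearization.

```python
# A minimal sketch of a Bayes (maximum a posteriori) update for
# uncertain elastic constants, with covariance matrices weighting
# confidence in the prior values and in the measured modal data.
import numpy as np

theta_prior = np.array([120.0, 8.0])          # prior moduli (GPa), assumed
P = np.diag([400.0, 4.0])                     # prior covariance (weak, strong)
H = np.array([[0.8, 0.1],                     # sensitivities d(freq)/d(theta),
              [0.05, 0.6],                    # invented for illustration
              [0.3, 0.3]])
R = 0.01 * np.eye(3)                          # measurement noise covariance
resid = np.array([2.0, -1.5, 0.7])            # measured minus predicted freqs

# One Gauss-Newton / Bayes step: posterior covariance and mean.
P_post = np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)
theta_post = theta_prior + P_post @ H.T @ np.linalg.inv(R) @ resid
print(theta_post, np.sqrt(np.diag(P_post)))   # updated moduli and std devs
```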
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
A comparison of three approaches to non-stationary flood frequency analysis
NASA Astrophysics Data System (ADS)
Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.
2017-08-01
Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage weighted least squares (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. The use of a multimodel approach is recommended to reduce errors in quantile magnitudes due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error in the estimates of the trend in the standard deviation and the constant shape parameter, while WLS/TS provided better accuracy in the estimates of the trend in the mean value. Among the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to the consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
Extracting galactic structure parameters from multivariated density estimation
NASA Technical Reports Server (NTRS)
Chen, B.; Creze, M.; Robin, A.; Bienayme, O.
1992-01-01
Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality-reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a system of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also quantifies the systematic error between the model and the observation by a Bayes rule.
NASA Astrophysics Data System (ADS)
Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula
2018-03-01
Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized, and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions of CH4 production from anaerobic respiration, CH4 oxidation, and gas transport by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. The estimation utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed at both the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies, and the results of the estimation pave the way for a regional- or global-scale Bayesian calibration of wetland emission models.
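A minimal sketch of the MCMC ingredient follows, using plain random-walk Metropolis on a two-parameter toy flux model rather than the adaptive MCMC and hierarchical structure of the study; the model form, priors and data are invented.

```python
# A minimal sketch of random-walk Metropolis MCMC for calibrating two
# parameters of a toy flux model against synthetic observations.
import numpy as np

rng = np.random.default_rng(9)
t = np.linspace(0, 1, 50)
obs = 2.0 * np.exp(1.5 * t) + 0.1 * rng.normal(size=t.size)  # synthetic flux

def log_post(theta):
    a, b = theta
    if a <= 0:                        # flat prior with positivity constraint
        return -np.inf
    resid = obs - a * np.exp(b * t)
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2   # Gaussian likelihood

chain = np.empty((20000, 2))
theta, lp = np.array([1.0, 1.0]), log_post([1.0, 1.0])
for i in range(chain.shape[0]):
    prop = theta + 0.05 * rng.normal(size=2)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))  # posterior summary
```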
NASA Astrophysics Data System (ADS)
Królak, Andrzej; Trzaskoma, Pawel
1996-05-01
Application of wavelet analysis to the estimation of parameters of the broad-band gravitational-wave signal emitted by a binary system is investigated. A method of instantaneous frequency extraction first proposed in this context by Innocent and Vinet is used. The gravitational-wave signal from a binary is investigated from the point of view of signal analysis theory and it is shown that such a signal is characterized by a large time - bandwidth product. This property enables the extraction of frequency modulation from the wavelet transform of the signal. The wavelet transform of the chirp signal from a binary is calculated analytically. Numerical simulations with the noisy chirp signal are performed. The gravitational-wave signal from a binary is taken in the quadrupole approximation and it is buried in noise corresponding to three different values of the signal-to-noise ratio, and the wavelet method to extract the frequency modulation of the signal is applied. Then, from the frequency modulation, the chirp mass parameter of the binary is estimated. It is found that the chirp mass can be estimated to a good accuracy, typically of the order of (20/rho)%, where rho is the optimal signal-to-noise ratio. It is also shown that the post-Newtonian effects in the gravitational wave signal from a binary can be discriminated to a satisfactory accuracy.
Onboard TDI stage estimation and calibration using SNR analysis
NASA Astrophysics Data System (ADS)
Haghshenas, Javad
2017-09-01
The electro-optical design of a push-broom space camera for a low Earth orbit (LEO) remote sensing satellite is performed based on a noise analysis of TDI sensors for very high GSDs and low-light-level missions. It is well demonstrated that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array without sacrificing spatial resolution. However, for satellite imaging, in order to exploit the advantages that the TDI mode of operation offers, attention should be given to the parameters that affect the image quality of TDI sensors, such as jitter, vibrations and noise. A predefined number of TDI stages may not satisfy the image quality requirements of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be able to set the TDI stages for every shot based on the affecting parameters. This paper deals with optimally estimating and setting the stages based on trade-offs among MTF, noise and SNR. On-board SNR estimation is simulated using atmospheric analysis based on the MODTRAN algorithm in the PcModWin software. According to the noise models, we propose a formulation to estimate the TDI stages so as to satisfy the system SNR requirement, while the MTF requirement must be satisfied in the same manner. A proper combination of both parameters will guarantee full use of the dynamic range along with high SNR and image quality.
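The stage-count trade-off can be sketched as follows: accumulated signal grows linearly with the number of TDI stages while shot noise grows with its square root, so SNR improves roughly as sqrt(N); the radiometric numbers below are placeholders, not values from the paper.

```python
# A minimal sketch of choosing the number of TDI stages N so that a
# required SNR is met, under a simple shot/dark/read noise model.
import numpy as np

def snr(n_stages, signal_e=150.0, dark_e=20.0, read_e=30.0):
    """TDI SNR for n_stages accumulations (electrons per stage assumed)."""
    s = n_stages * signal_e                           # accumulated signal (e-)
    noise = np.sqrt(s + n_stages * dark_e + read_e ** 2)
    return s / noise

required = 100.0
for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, round(snr(n), 1))
n_min = next(n for n in range(1, 129) if snr(n) >= required)
print("minimum TDI stages:", n_min)
```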
Use of the Kalman Filter for Aortic Pressure Waveform Noise Reduction
Lu, Hsiang-Wei; Wu, Chung-Che; Aliyazicioglu, Zekeriya; Kang, James S.
2017-01-01
Clinical applications that require extraction and interpretation of physiological signals or waveforms are susceptible to corruption by noise or artifacts. Real-time hemodynamic monitoring systems are important for clinicians to assess the hemodynamic stability of surgical or intensive care patients by interpreting hemodynamic parameters generated by an analysis of aortic blood pressure (ABP) waveform measurements. Since hemodynamic parameter estimation algorithms often detect events and features from measured ABP waveforms to generate hemodynamic parameters, noise and artifacts integrated into ABP waveforms can severely distort the interpretation of hemodynamic parameters by hemodynamic algorithms. In this article, we propose the use of the Kalman filter and the 4-element Windkessel model with static parameters, arterial compliance C, peripheral resistance R, aortic impedance r, and the inertia of blood L, to represent aortic circulation for generating accurate estimations of ABP waveforms through noise and artifact reduction. Results show the Kalman filter could very effectively eliminate noise and generate a good estimation from the noisy ABP waveform based on the past state history. The power spectrum of the measured ABP waveform and the synthesized ABP waveform shows two similar harmonic frequencies. PMID:28611850
Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces
NASA Astrophysics Data System (ADS)
Aichinger, Julia; Schwieger, Volker
2018-04-01
This contribution deals with the influence of scanning parameters like scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the position of control points from B-spline surfaces which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by the Monte Carlo based variance analysis. The samples were generated for non-correlated and correlated data, leading to the samples generated by Latin hypercube and replicated Latin hypercube sampling algorithms. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters can be found in the smallest possible object distance at an angle of incidence close to 0° in the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
NASA Astrophysics Data System (ADS)
Huang, D.; Wang, G.
2014-12-01
Stochastic simulation of spatially distributed ground-motion time histories is important for performance-based earthquake design of geographically distributed systems. In this study, we develop a novel technique to stochastically simulate regionalized ground-motion time histories using wavelet packet analysis. First, a transient acceleration time history is characterized by wavelet-packet parameters proposed by Yamamoto and Baker (2013). The wavelet-packet parameters fully characterize ground-motion time histories in terms of energy content, time- frequency-domain characteristics and time-frequency nonstationarity. This study further investigates the spatial cross-correlations of wavelet-packet parameters based on geostatistical analysis of 1500 regionalized ground motion data from eight well-recorded earthquakes in California, Mexico, Japan and Taiwan. The linear model of coregionalization (LMC) is used to develop a permissible spatial cross-correlation model for each parameter group. The geostatistical analysis of ground-motion data from different regions reveals significant dependence of the LMC structure on regional site conditions, which can be characterized by the correlation range of Vs30 in each region. In general, the spatial correlation and cross-correlation of wavelet-packet parameters are stronger if the site condition is more homogeneous. Using the regional-specific spatial cross-correlation model and cokriging technique, wavelet packet parameters at unmeasured locations can be best estimated, and regionalized ground-motion time histories can be synthesized. Case studies and blind tests demonstrated that the simulated ground motions generally agree well with the actual recorded data, if the influence of regional-site conditions is considered. The developed method has great potential to be used in computational-based seismic analysis and loss estimation in a regional scale.
El Allaki, Farouk; Harrington, Noel; Howden, Krista
2016-11-01
The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic, slaughter, export testing and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on the SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component also ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99), showing slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory. Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
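An illustrative sketch (not the authors' scenario trees) of how component sensitivities combine into an overall system sensitivity, SSSe = 1 - prod(1 - CSe_i), with Beta distributions standing in for component uncertainty; all shape parameters below are assumptions chosen to mimic the reported magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
cse_slaughter = rng.beta(380, 16, n)      # ~0.96, assumed shape parameters
cse_export = rng.beta(20, 1000, n)        # ~0.02
cse_invest = rng.beta(6, 980, n)          # ~0.006
ssse = 1 - (1 - cse_slaughter) * (1 - cse_export) * (1 - cse_invest)
print("median SSSe: %.3f, 95%% PI: (%.3f, %.3f)"
      % (np.median(ssse), *np.percentile(ssse, [2.5, 97.5])))
```

The dominance of the slaughter component is visible directly: the two low-sensitivity components barely move the product term.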
Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks
NASA Astrophysics Data System (ADS)
Shi, Ying; Jian, Shaoyong
2018-03-01
An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, principal component analysis is used to reduce the dimensionality of the rock-slice characteristic parameters. The mapping relationship between these parameters and permeability is then found through fuzzy neural networks. The validity and reliability of the method were tested with field data from the Yan'an region of the Ordos Basin. The results show that the average relative error of permeability estimation is 6.25%, with faster convergence and higher accuracy than comparable methods. Therefore, using inexpensive rock-slice information, reservoir permeability can be estimated efficiently and accurately, making the method highly reliable, practical and promising for application.
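A hedged sketch of the PCA-plus-network pipeline: scikit-learn's PCA reduces the feature dimension before a small regressor learns the mapping to permeability. An MLP stands in for the fuzzy/Elman network, and the data are synthetic placeholders, not rock-slice measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                       # 12 rock-slice features
perm = np.exp(0.5 * X[:, 0] - 0.2 * X[:, 1]          # positive "permeability"
              + 0.1 * rng.normal(size=300))

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),           # dimensionality reduction
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
model.fit(X[:250], perm[:250])
rel_err = np.abs(model.predict(X[250:]) - perm[250:]) / perm[250:]
print("mean relative error: %.1f%%" % (100 * np.mean(rel_err)))
```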
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding potential analyzers (RPAs) have a rich flight heritage. Standard curve-fitting analysis techniques can infer state variables of the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on estimation uncertainty; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimated thermal plasma parameters by adding noise to simulated data derived from existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits, since a number of nano-satellite missions focus on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
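A toy sketch of the error-characterization idea: fit a simplified retarding curve under additive noise many times and tabulate the scatter of the fitted parameters. The erfc curve shape, "truth" values, and noise level are all assumptions, not the study's instrument model.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def rpa_current(v, i0, phi, w):
    return 0.5 * i0 * erfc((v - phi) / w)     # idealized Maxwellian sweep

rng = np.random.default_rng(0)
v = np.linspace(-1, 4, 120)                   # retarding potential (V)
truth = (1.0, 1.5, 0.4)                       # current, potential, thermal width
fits = []
for _ in range(500):                          # Monte Carlo over noise realizations
    y = rpa_current(v, *truth) + rng.normal(0, 0.02, v.size)
    popt, _ = curve_fit(rpa_current, v, y, p0=(0.8, 1.0, 0.3))
    fits.append(popt)
err = 100 * np.std(fits, axis=0) / np.array(truth)
print("1-sigma errors (%%): I0=%.2f  phi=%.2f  width=%.2f" % tuple(err))
```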
Estimating Arrhenius parameters using temperature programmed molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
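A minimal sketch of the likelihood idea, assuming exponentially distributed waiting times with rate k(T) = A*exp(-Ea/(kB*T)): the log-likelihood sum(log k - k*tau) is maximized over (ln A, Ea). The synthetic data and "true" A and Ea below are assumptions for the demo.

```python
import numpy as np
from scipy.optimize import minimize

kB = 8.617e-5                                    # Boltzmann constant, eV/K
rng = np.random.default_rng(0)
A_true, Ea_true = 1e12, 0.5                      # prefactor (1/s), barrier (eV)
temps = rng.choice([300.0, 400.0, 500.0, 600.0], size=800)
rates = A_true * np.exp(-Ea_true / (kB * temps))
waits = rng.exponential(1.0 / rates)             # one waiting time per sample

def neg_log_lik(theta):
    lnA, Ea = theta
    k = np.exp(lnA - Ea / (kB * temps))          # Arrhenius rate at each T
    return -np.sum(np.log(k) - k * waits)        # exponential waiting-time NLL

res = minimize(neg_log_lik, x0=(25.0, 0.3), method='Nelder-Mead')
print("A = %.2e 1/s, Ea = %.3f eV" % (np.exp(res.x[0]), res.x[1]))
```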
Faugeras, Blaise; Maury, Olivier
2005-10-01
We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographic and biological data; thus it naturally reacts to environmental changes. We first formulate an initial-boundary value problem and prove existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code, which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length-frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally, twin experiments in which perturbed parameters are recovered from simulated data are successfully conducted.
Estimating Sobol Sensitivity Indices Using Correlations
Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
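As a hedged sketch of variance-based Sobol analysis, the snippet below uses the SALib package (assumed installed) with the classic Ishigami function as a stand-in model; this illustrates the standard Saltelli-sampling estimator rather than the correlation-based estimator the record's title refers to.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {'num_vars': 3,
           'names': ['x1', 'x2', 'x3'],
           'bounds': [[-np.pi, np.pi]] * 3}
X = saltelli.sample(problem, 1024)                      # Saltelli design
Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
     + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))            # Ishigami test function
Si = sobol.analyze(problem, Y)
print(dict(zip(problem['names'], np.round(Si['S1'], 2))))  # first-order indices
```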
Benke, Timothy A; Lüthi, Andreas; Palmer, Mary J; Wikström, Martin A; Anderson, William W; Isaac, John T R; Collingridge, Graham L
2001-01-01
The molecular properties of synaptic α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) receptors are an important factor determining excitatory synaptic transmission in the brain. Changes in the number (N) or single-channel conductance (γ) of functional AMPA receptors may underlie synaptic plasticity, such as long-term potentiation (LTP) and long-term depression (LTD). These parameters have been estimated using non-stationary fluctuation analysis (NSFA). The validity of NSFA for studying the channel properties of synaptic AMPA receptors was assessed using a cable model with dendritic spines and a microscopic kinetic description of AMPA receptors. Electrotonic, geometric and kinetic parameters were altered in order to determine their effects on estimates of the underlying γ. Estimates of γ were very sensitive to the access resistance of the recording (RA) and the mean open time of AMPA channels. Estimates of γ were less sensitive to the distance between the electrode and the synaptic site, the electrotonic properties of dendritic structures, recording electrode capacitance and background noise. Estimates of γ were insensitive to changes in spine morphology, synaptic glutamate concentration and the peak open probability (Po) of AMPA receptors. The results obtained using the model agree with biological data, obtained from 91 dendritic recordings from rat CA1 pyramidal cells. A correlation analysis showed that RA resulted in a slowing of the decay time constant of excitatory postsynaptic currents (EPSCs) by approximately 150%, from an estimated value of 3.1 ms. RA also greatly attenuated the absolute estimate of γ by approximately 50-70%. When other parameters remain constant, the model demonstrates that NSFA of dendritic recordings can readily discriminate between changes in γ vs. changes in N or Po. Neither background noise nor asynchronous activation of multiple synapses prevented reliable discrimination between changes in γ and changes in either N or Po. The model (available online) can be used to predict how changes in the different properties of AMPA receptors may influence synaptic transmission and plasticity. PMID:11731574
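The core of NSFA can be illustrated in a few lines: fitting the variance-mean relation σ² = iI - I²/N + b yields the single-channel current i and channel count N, from which γ = i / driving force. The data, "true" values, and driving force below are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def var_mean(I, i, N, b):
    return i * I - I ** 2 / N + b            # NSFA parabola plus baseline noise

rng = np.random.default_rng(0)
I = np.linspace(1, 60, 40)                   # mean EPSC amplitude (pA)
true_i, true_N = 0.6, 120.0                  # assumed single-channel current, count
var = var_mean(I, true_i, true_N, 0.5) + rng.normal(0, 1.0, I.size)
(i_hat, N_hat, b_hat), _ = curve_fit(var_mean, I, var, p0=(1.0, 50.0, 0.0))

driving_force_mV = 90.0                      # Vm - Erev, assumed
gamma_pS = 1000.0 * i_hat / driving_force_mV # pA/mV = nS; x1000 gives pS
print("i = %.2f pA, N = %.0f, gamma = %.1f pS" % (i_hat, N_hat, gamma_pS))
```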
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2018-06-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
Sensitivity analysis of pars-tensa young's modulus estimation using inverse finite-element modeling
NASA Astrophysics Data System (ADS)
Rohani, S. Alireza; Elfarnawany, Mai; Agrawal, Sumit K.; Ladak, Hanif M.
2018-05-01
Accurate estimates of the pars-tensa (PT) Young's modulus (EPT) are required in finite-element (FE) modeling studies of the middle ear. Previously, we introduced an in-situ EPT estimation technique that optimizes a sample-specific FE model to match experimental eardrum pressurization data. This optimization process requires modeling assumptions, such as PT thickness and boundary conditions, that are reported with a wide range of variation in the literature, affecting the reliability of the models. In addition, the sensitivity of the estimated EPT to FE modeling assumptions has not been studied. The objective of this study is therefore to identify the modeling assumption most influential on EPT estimates. The middle-ear cavity extracted from a cadaveric temporal bone was pressurized to 500 Pa, and the deformed shape of the eardrum after pressurization was measured using a Fourier transform profilometer (FTP). A baseline FE model of the unpressurized middle ear was created, and EPT was estimated using the golden-section optimization method, which minimizes a cost function comparing the deformed FE-model shape to the measured shape after pressurization. The effect of varying the modeling assumptions on EPT estimates was investigated, including changes in PT thickness, pars flaccida Young's modulus and possible FTP measurement error. The most influential parameter on EPT estimation was PT thickness and the least influential was pars flaccida Young's modulus. The results provide insight into how different parameters affect the outcome of EPT optimization and which parameter uncertainties require further investigation to develop robust estimation techniques.
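The one-dimensional estimation step can be sketched with SciPy's golden-section search over the Young's modulus; the quadratic surrogate cost below is a hypothetical placeholder for the FE-model/FTP shape comparison, and the 20 MPa optimum is an assumption of the demo.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def shape_cost(E_MPa):
    # stand-in for the FE-vs-measured shape misfit, minimized near 20 MPa
    return (np.log(E_MPa) - np.log(20.0)) ** 2 + 0.01

res = minimize_scalar(shape_cost, bracket=(1.0, 10.0, 200.0), method='golden')
print("estimated E_PT ~ %.1f MPa" % res.x)
```

In the actual workflow each cost evaluation requires a full FE solve, which is why a derivative-free one-dimensional method like golden-section search is a natural choice.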
Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.
Leung, Denis H Y; Wang, You-Gan; Zhu, Min
2009-07-01
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
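A hedged sketch of a single-working-correlation GEE fit with statsmodels; the hybrid method described above would combine several such fits via empirical likelihood, which is not shown here. The data are synthetic stand-ins for repeated infection counts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subj, n_obs = 100, 4
df = pd.DataFrame({
    'id': np.repeat(np.arange(n_subj), n_obs),
    'age': np.tile(np.arange(n_obs), n_subj),
})
subj_eff = np.repeat(rng.normal(0, 0.5, n_subj), n_obs)   # within-subject correlation
df['infection'] = rng.poisson(np.exp(-1.0 + 0.1 * df['age'] + subj_eff))

model = sm.GEE.from_formula('infection ~ age', groups='id', data=df,
                            family=sm.families.Poisson(),
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary().tables[1])
```

Swapping `Exchangeable()` for `Autoregressive()` or `Independence()` gives the alternative working-correlation fits that a hybrid estimator would weigh against each other.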
Estimation of mating system parameters in plant populations using marker loci with null alleles.
Ross, H A
1986-06-01
An Expectation-Maximization (EM) algorithm procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of the parameters of mixed mating system models. The extension permits estimation of the rate of self-fertilization (s) and allele frequencies (pi) in outcrossing pollen at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by electrophoretic analysis of cohorts of progeny; the genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example using actual data from Pinus banksiana illustrates the application of this EM algorithm to the estimation of mating system parameters with marker loci having both codominant and recessive alleles.
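A toy EM sketch for a mixed-mating model with a fully codominant marker (the paper's null-allele extension adds genotype ambiguity not handled here). The mother is Aa, the pollen allele frequency is assumed known, and the progeny counts are demo data.

```python
import numpy as np

counts = {'AA': 120, 'Aa': 260, 'aa': 120}       # progeny of Aa mothers (demo)
p = 0.3                                          # outcross pollen freq of A (assumed)
p_self = {'AA': 0.25, 'Aa': 0.50, 'aa': 0.25}    # selfed-progeny probabilities
p_out = {'AA': 0.5 * p, 'Aa': 0.5, 'aa': 0.5 * (1 - p)}   # outcrossed-progeny probs

s = 0.5                                          # initial selfing-rate guess
for _ in range(200):
    # E-step: posterior probability each genotype class arose by selfing
    resp = {g: s * p_self[g] / (s * p_self[g] + (1 - s) * p_out[g])
            for g in counts}
    # M-step: update s as the expected selfed fraction of the progeny
    total = sum(counts.values())
    s = sum(counts[g] * resp[g] for g in counts) / total
print("estimated selfing rate s = %.3f" % s)
```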
40 CFR 63.1414 - Test methods and emission estimation equations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters.... Engineering assessment may be used to estimate organic HAP emissions from a batch emission episode only under... (d)(5) of this section; through engineering assessment, as defined in paragraph (d)(6)(ii) of this...
Earth-Moon system: Dynamics and parameter estimation
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1979-01-01
The following topics are discussed: (1) the Unified Model of Lunar Translation/Rotation (UMLTR); (2) the effect of figure-figure interactions on lunar physical librations; (3) the effect of translational-rotational coupling on the lunar orbit; and (4) an error analysis for estimating lunar inertias from LURE (Lunar Laser Ranging Experiment) data.
Sonography in Fetal Birth Weight Estimation
ERIC Educational Resources Information Center
Akinola, R. A.; Akinola, O. I.; Oyekan, O. O.
2009-01-01
The estimation of fetal birth weight is an important factor in the management of high risk pregnancies. The information and knowledge gained through this study, comparing a combination of various fetal parameters using computer assisted analysis, will help the obstetrician to screen the high risk pregnancies, monitor the growth and development,…
Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D
2012-01-01
Purpose To quantitatively evaluate the kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters by MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations by DFAs produced substantial kinetic parameter estimation deviations in head and neck tissues. In particular, the DFA of [2°, 7°] overestimated, while [7°, 12°] and [7°, 15°] underestimated, Ktrans and vp significantly (P<0.01). [2°, 15°] achieved the smallest but still statistically significant overestimation for Ktrans and vp in primary tumors, 32.1% and 16.2%, respectively. kep fitting results by DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA could result in significant errors in kinetic parameter estimation, particularly Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in head and neck. PMID:23289084
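A hedged sketch of the extended-Tofts fitting step: the tissue curve is modeled as vp*Cp(t) plus the convolution of the arterial input with Ktrans*exp(-kep*t). The population-style input function, noise level, and parameter values below are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 2.0                                          # temporal resolution (s)
t = np.arange(0, 300, dt)
cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)         # toy arterial input (mM)

def ext_tofts(t, Ktrans, kep, vp):
    irf = Ktrans * np.exp(-kep * t / 60.0)        # Ktrans, kep in 1/min
    ct = np.convolve(cp, irf)[:t.size] * dt / 60.0  # discrete convolution (min)
    return ct + vp * cp

rng = np.random.default_rng(0)
truth = (0.25, 0.8, 0.05)
y = ext_tofts(t, *truth) + rng.normal(0, 0.01, t.size)
popt, _ = curve_fit(ext_tofts, t, y, p0=(0.1, 0.5, 0.02),
                    bounds=([0, 0, 0], [2, 5, 0.5]))
print("Ktrans=%.3f /min, kep=%.3f /min, vp=%.3f" % tuple(popt))
```

An error in the assumed T1 map scales the converted concentration curves, which is how DFA T1 deviations propagate most strongly into Ktrans and vp.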
Regression analysis of longitudinal data with correlated censoring and observation times.
Li, Yang; He, Xin; Wang, Haiying; Sun, Jianguo
2016-07-01
Longitudinal data occur in many fields, such as medical follow-up studies that involve repeated measurements. For their analysis, most existing approaches assume that the observation or follow-up times are independent of the response process, either completely or given some covariates. In practice, this may not be true. In this paper, we present a joint analysis approach that allows for possible mutual correlations, characterized by time-dependent random effects. Estimating equations are developed for the parameter estimation, and the resulting estimators are shown to be consistent and asymptotically normal. The finite-sample performance of the proposed estimators is assessed through a simulation study, and an illustrative example from a skin cancer study is provided.
Aslan, Alper; Destek, Mehmet Akif; Okumus, Ilyas
2018-01-01
This study examines the validity of the inverted U-shaped Environmental Kuznets Curve by investigating the relationship between economic growth and environmental pollution in the USA for the period from 1966 to 2013. Previous studies were based on the assumption of parameter stability, i.e., that the estimated parameters do not change over the full sample. This study uses a bootstrap rolling-window estimation method to detect possible changes in causal relations and to obtain the parameters for sub-sample periods. The results show that the parameter of economic growth has an increasing trend in the 1982-1996 sub-sample periods and a decreasing trend in the 1996-2013 sub-sample periods. The existence of an inverted U-shaped Environmental Kuznets Curve is therefore confirmed for the USA.
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
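For contrast with the article's shortcut methods, here is the baseline nested Monte Carlo EVPPI estimator they aim to avoid, as a hedged sketch with a made-up two-option net-benefit model: the outer loop samples the parameter of interest, the inner loop averages net benefit over the remaining parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_benefit(theta_i, theta_c):
    # toy model: option 0 is the status quo, option 1 a new treatment
    nb0 = 10.0 + 0.0 * theta_i + theta_c
    nb1 = 9.0 + 4.0 * theta_i + theta_c
    return np.stack([nb0, nb1], axis=-1)

n_outer, n_inner = 2000, 2000
best_given_theta = []
for _ in range(n_outer):
    theta_i = rng.normal(0.3, 0.4)                     # parameter of interest
    theta_c = rng.normal(0.0, 1.0, n_inner)            # remaining parameters
    nb = net_benefit(theta_i, theta_c).mean(axis=0)    # inner expectation
    best_given_theta.append(nb.max())                  # best decision given theta_i

theta_i_all = rng.normal(0.3, 0.4, n_outer * n_inner)
theta_c_all = rng.normal(0.0, 1.0, n_outer * n_inner)
baseline = net_benefit(theta_i_all, theta_c_all).mean(axis=0).max()
print("EVPPI ~ %.3f" % (np.mean(best_given_theta) - baseline))
```

The cost is n_outer * n_inner model evaluations, which is exactly the burden the reparameterization, Taylor-series, and spline approaches in the article are designed to remove.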
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
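A hedged sketch of the covariance construction discussed above: power-law noise is generated by a fractional-integration filter (Hosking recursion with d = α/2), so the data covariance is C = σw²·I + σp²·F·Fᵀ with F lower triangular, and the Gaussian log-likelihood follows from a Cholesky factor. This illustrates the additive-filter idea in general terms, not the paper's specific implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def powerlaw_filter(n, alpha):
    """Fractional-integration coefficients, d = alpha/2 (Hosking recursion)."""
    h = np.ones(n)
    d = alpha / 2.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + d) / k
    return h

def neg_log_lik(r, sw, sp, alpha):
    n = r.size
    h = powerlaw_filter(n, alpha)
    F = np.zeros((n, n))
    for k in range(n):                     # lower-triangular Toeplitz filter
        F[k:, k] = h[:n - k]
    C = sw ** 2 * np.eye(n) + sp ** 2 * F @ F.T
    cf = cho_factor(C, lower=True)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return 0.5 * (logdet + r @ cho_solve(cf, r) + n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
resid = rng.normal(0, 1.0, 500)            # placeholder residual series
print("NLL at trial noise model: %.1f" % neg_log_lik(resid, 1.0, 0.5, 1.0))
```

An outer optimizer over (σw, σp, α) then yields the MLE noise model, with parameter uncertainties assessed from the same likelihood surface.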
Developing population models with data from marked individuals
Ryu, Hae Yeong; Shoemaker, Kevin T.; Kneip, Eva; Pidgeon, Anna; Heglund, Patricia; Bateman, Brooke; Thogmartin, Wayne E.; Akçakaya, Reşit
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy
2011-08-19
Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objectives of this study were to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
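A sketch of strategy (b) above: estimate the Box-Cox lambda first, without the predictors, then run the factorial ANOVA on the transformed outcome. The data are synthetic stand-ins for fall rates, not NDNQI values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({'unit': rng.choice(list('ABC'), n),
                   'quarter': rng.choice(['Q1', 'Q2', 'Q3', 'Q4'], n)})
# skewed positive outcome with a structural effect for unit C
df['falls'] = rng.lognormal(mean=1.0 + 0.3 * (df['unit'] == 'C'), sigma=0.6)

y_bc, lam = stats.boxcox(df['falls'])      # lambda estimated without the factors
df['y_bc'] = y_bc
fit = ols('y_bc ~ C(unit) * C(quarter)', data=df).fit()
print("lambda = %.2f" % lam)
print(sm.stats.anova_lm(fit, typ=2))
```

Strategy (a) would instead profile lambda jointly with the factor effects; the study's finding is that the two routes lead to essentially the same structural-effect tests.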
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here, a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
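A hedged toy sketch of the central idea: augment the EnKF state with a mechanical input power that follows an Ornstein-Uhlenbeck (time-correlated) process, so the forecast itself carries the colored parameter noise. All dynamics and noise levels below are illustrative, not the paper's test case.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, dt, tau, sig = 200, 0.02, 2.0, 0.1          # ensemble size, OU params (assumed)
R = 0.05 ** 2                                       # observation error variance

x = rng.normal(0.0, 0.2, n_ens)                     # generator-angle-like state
p = rng.normal(1.0, 0.2, n_ens)                     # input power (OU parameter)
for step in range(500):
    # forecast: toy first-order dynamics plus OU evolution of the parameter
    x = x + dt * (p - 0.8 * x)
    p = p + dt * (1.0 - p) / tau + sig * np.sqrt(dt) * rng.standard_normal(n_ens)
    if step % 25 == 0:                              # a PMU-like observation of x
        y = 1.25 + rng.normal(0, np.sqrt(R))        # synthetic measurement
        dx, dp = x - x.mean(), p - p.mean()
        pxx = np.mean(dx * dx)
        k_x = pxx / (pxx + R)                       # gain for the observed state
        k_p = np.mean(dp * dx) / (pxx + R)          # cross-covariance gain
        innov = y + rng.normal(0, np.sqrt(R), n_ens) - x   # perturbed observations
        x, p = x + k_x * innov, p + k_p * innov
print("posterior input power: %.3f +/- %.3f" % (p.mean(), p.std()))
```

Because p is updated only through its ensemble cross-covariance with the observed state, the filter calibrates the time-correlated parameter without ever observing it directly.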