Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect phenomena such as friction, nonlinear elasticity, and cross-sectional deformation, it is also not clear whether model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
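As a generic illustration of the joint state-and-parameter estimation idea behind this abstract (not the paper's concentric-tube model: the scalar dynamics, noise levels, and drive term below are invented), an uncertain gain can be folded into the state vector and tracked by an extended Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar plant x[k+1] = a*x[k] + 0.1 + w,  y[k] = x[k] + v,
# where the gain a plays the role of the unknown, slowly varying parameter.
a_true, q, r, n_steps = 0.85, 1e-4, 0.01, 400

x, ys = 1.0, []
for _ in range(n_steps):
    x = a_true * x + 0.1 + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# EKF on the augmented state z = [x, a]; the product a*x makes it nonlinear.
z = np.array([0.0, 0.5])               # deliberately poor initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([q, 1e-6])                 # tiny random walk lets a drift over time
H = np.array([[1.0, 0.0]])

for y in ys:
    xk, ak = z
    z = np.array([ak * xk + 0.1, ak])                  # predict
    F = np.array([[ak, xk], [0.0, 1.0]])               # Jacobian of the prediction
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r                                # innovation covariance
    K = P @ H.T / S                                    # Kalman gain
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

a_hat = z[1]   # should settle near a_true
```

The small process-noise term on the parameter row of Q is what permits re-estimation of a slowly time-varying parameter, matching the motivation stated in the abstract.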
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, ED; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods, such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use eight-day MODIS composites to create a Normalized Difference Vegetation Index (NDVI) time series spanning ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the particle filter (PF), a sequential Monte Carlo estimator. The PF-based nonlinear estimation is shown to improve parameter estimation for different land cover types compared to existing techniques that use the extended Kalman filter (EKF), which must linearize the harmonic model. As these parameters are representative of a given land cover, their applicability in near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.
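A minimal sketch of the particle filter idea for a harmonic vegetation-index model (the series length, parameter ranges, noise level, and jitter scales below are invented, and the model is reduced to a single annual harmonic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic NDVI-like series: one annual harmonic sampled at 46 eight-day
# composites per year for three years (all parameter values are invented).
t = np.arange(46 * 3)
omega = 2.0 * np.pi / 46.0
m_true, A_true, phi_true, sigma = 0.4, 0.25, 1.0, 0.03
y = m_true + A_true * np.sin(omega * t + phi_true) + rng.normal(0.0, sigma, t.size)

# Particle filter over the static parameters (mean, amplitude, phase).
# Small artificial jitter keeps the particle cloud from collapsing.
N = 2000
parts = np.column_stack([
    rng.uniform(0.0, 1.0, N),          # mean NDVI level
    rng.uniform(0.0, 0.6, N),          # harmonic amplitude
    rng.uniform(-np.pi, np.pi, N),     # phase
])
for tk, yk in zip(t, y):
    parts += rng.normal(0.0, [2e-3, 2e-3, 5e-3], parts.shape)
    pred = parts[:, 0] + parts[:, 1] * np.sin(omega * tk + parts[:, 2])
    w = np.exp(-0.5 * ((yk - pred) / sigma) ** 2) + 1e-300   # Gaussian likelihood
    parts = parts[rng.choice(N, N, p=w / w.sum())]           # multinomial resampling

m_hat, A_hat = parts[:, 0].mean(), parts[:, 1].mean()
```

Because the harmonic model is nonlinear in its phase, the weighting step here handles it exactly, whereas an EKF would have to linearize it, which is the comparison the abstract draws.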
NASA Astrophysics Data System (ADS)
Camacho Suarez, V. V.; Shucksmith, J.; Schellart, A.
2016-12-01
Analytical and numerical models can be used to represent the advection-dispersion processes governing the transport of pollutants in rivers (Fan et al., 2015; Van Genuchten et al., 2013). Simplifications, assumptions and parameter estimations in these models result in various uncertainties within the modelling process and in the estimated pollutant concentrations. In this study, we explore both: 1) the structural uncertainty due to the one-dimensional simplification of the Advection Dispersion Equation (ADE) and 2) the parameter uncertainty due to the semi-empirical estimation of the longitudinal dispersion coefficient. The relative significance of these uncertainties has not previously been examined. By analysing both the relative structural uncertainty of analytical solutions of the ADE and the parameter uncertainty due to the longitudinal dispersion coefficient via a Monte Carlo analysis, an evaluation of the dominant uncertainties for a case study in the river Chillan, Chile is presented over a range of spatial scales.
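A sketch of the second uncertainty source described above: propagating an uncertain longitudinal dispersion coefficient through the 1-D analytical ADE solution by Monte Carlo (the reach geometry, spill mass, and lognormal spread below are hypothetical, not values from the Chillan case study):

```python
import math
import random

random.seed(2)

def ade_conc(x, t, mass, area, u, D):
    """1-D instantaneous point-source solution of the advection-dispersion equation."""
    return (mass / (area * math.sqrt(4.0 * math.pi * D * t))
            * math.exp(-(x - u * t) ** 2 / (4.0 * D * t)))

# Hypothetical reach: 1 kg spill, site 500 m downstream,
# velocity 0.5 m/s, cross-sectional area 3 m^2.
x, mass, area, u = 500.0, 1.0, 3.0, 0.5
t = x / u                    # approximate arrival time of the peak

# Parameter uncertainty: D drawn from a lognormal spread around a
# semi-empirical estimate of 5 m^2/s.
samples = sorted(
    ade_conc(x, t, mass, area, u, random.lognormvariate(math.log(5.0), 0.5))
    for _ in range(5000)
)
low, med, high = samples[250], samples[2500], samples[4750]   # ~90% band
```

The spread between `low` and `high` is the parameter-uncertainty band in the predicted concentration; comparing it against the error of the 1-D simplification itself is the structural-versus-parameter comparison the study performs.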
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
Quantifying lost information due to covariance matrix estimation in parameter inference
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2017-02-01
Parameter inference with an estimated covariance matrix systematically loses information due to the remaining uncertainty of the covariance matrix. Here, we quantify this loss of precision and develop a framework to hypothetically restore it, which allows one to judge how far away a given analysis is from the ideal case of a known covariance matrix. We point out that it is insufficient to estimate this loss by debiasing the Fisher matrix as previously done, due to a fundamental inequality that describes how biases arise in non-linear functions. We therefore develop direct estimators for parameter credibility contours and the figure of merit, finding that significantly fewer simulations than previously thought are sufficient to reach satisfactory precisions. We apply our results to DES Science Verification weak lensing data, detecting a 10 per cent loss of information that increases their credibility contours. No significant loss of information is found for KiDS. For a Euclid-like survey with about 10 nuisance parameters, we find that 2900 simulations are sufficient to limit the systematically lost information to 1 per cent, with an additional uncertainty of about 2 per cent. Without any nuisance parameters, 1900 simulations are sufficient to lose only 1 per cent of information. We further derive estimators for all quantities needed for forecasting with estimated covariance matrices. Our formalism allows one to determine the sweet spot between running sophisticated simulations to reduce the number of nuisance parameters, and running as many fast simulations as possible.
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. 
Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
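The bootstrap step used above for the single hydrological parameter can be sketched on a toy one-parameter calibration (the linear runoff model and every value below are invented for illustration, not the SHYREG model):

```python
import random

random.seed(3)

# Toy calibration: one runoff coefficient c in Q = c * P, fitted to noisy data,
# standing in for a hydrological parameter whose distribution is unknown.
P_obs = [float(p) for p in range(1, 31)]
Q_obs = [0.6 * p + random.gauss(0.0, 1.0) for p in P_obs]
data = list(zip(P_obs, Q_obs))

def fit(pairs):
    # least squares through the origin: c = sum(P*Q) / sum(P^2)
    return sum(p * q for p, q in pairs) / sum(p * p for p, _ in pairs)

c_hat = fit(data)

# Bootstrap: resample (P, Q) pairs with replacement, refit, collect the spread
boots = sorted(fit([random.choice(data) for _ in data]) for _ in range(2000))
ci_low, ci_high = boots[50], boots[1950]    # ~95% calibration interval for c
```

This is exactly the situation the abstract describes: when a parameter's theoretical distribution is unknown, resampling the calibration data provides an empirical uncertainty interval.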
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2012-01-01
Fraction of missing information, λ_j, is a useful measure of the impact of missing data on the quality of estimation of a particular parameter. This measure can be computed for all parameters in the model, and it communicates the relative loss of efficiency in the estimation of a particular parameter due to missing data. It has…
Monaural room acoustic parameters from music and speech.
Kendrick, Paul; Cox, Trevor J; Li, Francis F; Zhang, Yonggang; Chambers, Jonathon A
2008-07-01
This paper compares two methods for extracting room acoustic parameters from reverberated speech and music. An approach which uses statistical machine learning, previously developed for speech, is extended to work with music. For speech, reverberation time estimations are within a perceptual difference limen of the true value. For music, virtually all early decay time estimations are within a difference limen of the true value. The estimation accuracy is not good enough in other cases due to differences between the simulated data set used to develop the empirical model and real rooms. The second method carries out a maximum likelihood estimation on decay phases at the end of notes or speech utterances. This paper extends the method to estimate parameters relating to the balance of early and late energies in the impulse response. For reverberation time and speech, the method provides estimations which are within the perceptual difference limen of the true value. For other parameters such as clarity, the estimations are not sufficiently accurate due to the natural reverberance of the excitation signals. Speech is a better test signal than music because of the greater periods of silence in the signal, although music is needed for low frequency measurement.
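The decay-phase idea can be sketched with a far simpler estimator than the paper's maximum likelihood method: a least-squares line fit to a synthetic linear-in-dB free decay (the sampling rate, noise level, observation window, and RT60 below are all invented):

```python
import random

random.seed(4)

# Synthetic free decay: sound level falls linearly in dB plus measurement
# noise; RT60 is the time for a 60 dB drop.
rt60_true, fs = 1.2, 1000                # seconds, samples per second
n = fs // 2                              # observe only 0.5 s of decay
slope_true = -60.0 / rt60_true           # dB per second
times = [i / fs for i in range(n)]
levels = [slope_true * tt + random.gauss(0.0, 1.0) for tt in times]

# Least-squares slope of level vs. time gives the decay rate, hence RT60
tbar = sum(times) / n
lbar = sum(levels) / n
sxy = sum((tt - tbar) * (lv - lbar) for tt, lv in zip(times, levels))
sxx = sum((tt - tbar) ** 2 for tt in times)
rt60_hat = -60.0 / (sxy / sxx)
```

The hard part in practice, which the paper addresses and this sketch ignores, is locating clean decay phases at the ends of notes or utterances and coping with the excitation signal's own reverberance.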
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
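A minimal random walk Metropolis sketch in the spirit of the sampler named above, reduced to a single location parameter with a Gaussian likelihood (the observations, prior bounds, and proposal scale are invented, not the study's wheat phenology models):

```python
import math
import random

random.seed(5)

# Toy phenology-style data: observed days to heading for one genotype,
# with an assumed known observation noise (all numbers invented).
obs = [52.1, 49.8, 51.5, 50.2, 53.0, 48.9, 51.1, 50.6]
sigma = 1.5

def log_post(theta):
    if not 0.0 < theta < 200.0:          # flat prior on a wide interval
        return -math.inf
    return -sum((d - theta) ** 2 for d in obs) / (2.0 * sigma ** 2)

theta, lp, chain = 60.0, log_post(60.0), []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 1.0)           # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

post = chain[5000:]                                 # discard burn-in
post_mean = sum(post) / len(post)
```

The retained `post` samples approximate the full posterior distribution, which is the point the abstract stresses: the whole distribution, not just a point estimate, carries the parameter-value uncertainty.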
NASA Astrophysics Data System (ADS)
Harpold, R. E.; Urban, T. J.; Schutz, B. E.
2008-12-01
Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameters uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameters uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residuals rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameters uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. Due to this, we conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
Statistical Constraints on Station Clock Parameters in the NRCAN PPP Estimation Process
2008-12-01
e.g., Two-Way Satellite Time and Frequency Transfer (TWSTFT), GPS Common View (CV), and GPS P3 [9]. Finally, PPP shows a 2-times improvement in...the collocated Two-Way Satellite Time and Frequency Technique (TWSTFT) estimates for the same baseline. The TWSTFT estimates are available every 2...periodicity is due to the thermal variations described in the previous section, while the divergence between both PPP solutions and TWSTFT estimates is due
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Näsholm, S. P.; Ruigrok, E.; Kværna, T.
2018-04-01
Seismic arrays enhance signal detection and parameter estimation by exploiting the time-delays between arriving signals on sensors at nearby locations. Parameter estimates can suffer due to both signal incoherence, with diminished waveform similarity between sensors, and aberration, with time-delays between coherent waveforms poorly represented by the wave-front model. Sensor-to-sensor correlation approaches to parameter estimation have an advantage over direct beamforming approaches in that individual sensor-pairs can be omitted without necessarily omitting entirely the data from each of the sensors involved. Specifically, we can omit correlations between sensors for which signal coherence in an optimal frequency band is anticipated to be poor or for which anomalous time-delays are anticipated. In practice, this usually means omitting correlations between more distant sensors. We present examples from International Monitoring System seismic arrays with poor parameter estimates resulting when classical f-k analysis is performed over the full array aperture. We demonstrate improved estimates and slowness grid displays using correlation beamforming restricted to correlations between sufficiently closely spaced sensors. This limited sensor-pair correlation (LSPC) approach has lower slowness resolution than would ideally be obtained by considering all sensor-pairs. However, this ideal estimate may be unattainable due to incoherence and/or aberration and the LSPC estimate can often exploit all channels, with the associated noise-suppression, while mitigating the complications arising from correlations between very distant sensors. The greatest need for the method is for short-period signals on large aperture arrays although we also demonstrate significant improvement for secondary regional phases on a small aperture array. LSPC can also provide a robust and flexible approach to parameter estimation on three-component seismic arrays.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as the volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.
Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto
2011-07-15
Time-series measurements of metabolite concentration have become increasingly more common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding global optimal solution to the estimation problem, give motivations to develop a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which a subset of model parameters that are associated with measured metabolites, are estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in getting accurate parameter estimates, even when some information is missing.
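The slope-error phase of the proposed two-phase method can be sketched on the simplest possible kinetic model, first-order decay, where the slope-matching least-squares problem has a closed form (the rate constant, sampling, and noise-free data below are invented for illustration):

```python
import math

# Toy metabolite decay S(t) = S0 * exp(-k t), sampled every 0.5 time units
k_true, S0, dt = 0.8, 10.0, 0.5
S = [S0 * math.exp(-k_true * i * dt) for i in range(11)]

# Phase-1 idea: match the model's slopes to finite-difference slopes of the
# measured concentrations instead of integrating the ODE.  For dS/dt = -k*S,
# least squares over the slope errors gives k = -sum(S_mid * dS) / sum(S_mid^2).
S_mid = [(a + b) / 2.0 for a, b in zip(S, S[1:])]   # S at interval midpoints
dS = [(b - a) / dt for a, b in zip(S, S[1:])]       # finite-difference slopes
k_hat = -sum(m * d for m, d in zip(S_mid, dS)) / sum(m * m for m in S_mid)
# k_hat recovers k_true up to discretization error, with no ODE solver involved
```

Avoiding the ODE integration in this phase is what sidesteps the stiffness and global-optimization difficulties the abstract mentions; the second phase then refines the remaining parameters against concentration errors.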
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Westgate, Philip M.
2016-01-01
When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539
Estimating the Non-Monetary Burden of Neurocysticercosis in Mexico
Bhattarai, Rachana; Budke, Christine M.; Carabin, Hélène; Proaño, Jefferson V.; Flores-Rivera, Jose; Corona, Teresa; Ivanek, Renata; Snowden, Karen F.; Flisser, Ana
2012-01-01
Background: Neurocysticercosis (NCC) is a major public health problem in many developing countries where health education, sanitation, and meat inspection infrastructure are insufficient. The condition occurs when humans ingest eggs of the pork tapeworm Taenia solium, which then develop into larvae in the central nervous system. Although NCC is endemic in many areas of the world and is associated with considerable socio-economic losses, the burden of NCC remains largely unknown. This study provides the first estimate of disability adjusted life years (DALYs) associated with NCC in Mexico. Methods: DALYs lost for symptomatic cases of NCC in Mexico were estimated by incorporating morbidity and mortality due to NCC-associated epilepsy, and morbidity due to NCC-associated severe chronic headaches. Latin hypercube sampling methods were employed to sample the distributions of uncertain parameters and to estimate 95% credible regions (95% CRs). Findings: In Mexico, 144,433 and 98,520 individuals are estimated to suffer from NCC-associated epilepsy and NCC-associated severe chronic headaches, respectively. A total of 25,341 (95% CR: 12,569–46,640) DALYs were estimated to be lost due to these clinical manifestations, with 0.25 (95% CR: 0.12–0.46) DALY lost per 1,000 person-years, of which 90% was due to NCC-associated epilepsy. Conclusion: This is the first estimate of DALYs associated with NCC in Mexico. However, this value is likely to be underestimated since only the clinical manifestations of epilepsy and severe chronic headaches were included. In addition, due to limited country specific data, some parameters used in the analysis were based on systematic reviews of the literature or primary research from other geographic locations. Even with these limitations, our estimates suggest that healthy years of life are being lost due to NCC in Mexico. PMID:22363827
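The Latin hypercube step can be sketched with a stdlib-only implementation applied to a toy multiplicative burden model (the input ranges below are invented placeholders, not the study's actual distributions or results):

```python
import random

random.seed(7)

def latin_hypercube(n, dims):
    """One stratified draw per interval per dimension, shuffled independently."""
    cols = []
    for _ in range(dims):
        col = [(i + random.random()) / n for i in range(n)]   # one point per stratum
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

# Toy burden model: DALYs = cases * mean duration (years) * disability weight,
# with each uncertain input sampled over a plausible (hypothetical) range.
n = 2000
dalys = sorted(
    (100_000 + u1 * 50_000) * (1.0 + u2 * 4.0) * (0.1 + u3 * 0.3)
    for u1, u2, u3 in latin_hypercube(n, 3)
)
median = dalys[n // 2]
cr95 = (dalys[int(0.025 * n)], dalys[int(0.975 * n)])   # 95% credible region
```

Compared with plain Monte Carlo, the stratification guarantees each input range is covered evenly, which stabilizes the tails of the credible region for a given sample size.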
Genetic parameter estimation for pre- and post-weaning traits in Brahman cattle in Brazil.
Vargas, Giovana; Buzanskas, Marcos Eli; Guidolin, Diego Gomes Freire; Grossi, Daniela do Amaral; Bonifácio, Alexandre da Silva; Lôbo, Raysildo Barbosa; da Fonseca, Ricardo; Oliveira, João Ademir de; Munari, Danísio Prado
2014-10-01
Beef cattle producers in Brazil use body weight traits as breeding program selection criteria due to their great economic importance. The objectives of this study were to evaluate different animal models, estimate genetic parameters, and define the most fitting model for Brahman cattle body weight standardized at 120 (BW120), 210 (BW210), 365 (BW365), 450 (BW450), and 550 (BW550) days of age. To estimate genetic parameters, single-, two-, and multi-trait analyses were performed using the animal model. The likelihood ratio test was verified between all models. For BW120 and BW210, additive direct genetic, maternal genetic, maternal permanent environment, and residual effects were considered, while for BW365 and BW450, additive direct genetic, maternal genetic, and residual effects were considered. Finally, for BW550, additive direct genetic and residual effects were considered. Estimates of direct heritability for BW120 were similar in all analyses; however, for the other traits, multi-trait analysis resulted in higher estimates. The maternal heritability and proportion of maternal permanent environmental variance to total variance were minimal in multi-trait analyses. Genetic, environmental, and phenotypic correlations were of high magnitude between all traits. Multi-trait analyses would aid in the parameter estimation for body weight at older ages because they are usually affected by a lower number of animals with phenotypic information due to culling and mortality.
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. This study is the first to use these models for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes
NASA Astrophysics Data System (ADS)
Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.
2018-03-01
Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. To ensure appropriate modeling of PEMFCs, accurate parameter estimation is required. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex nature. This paper comprehensively reviews PEMFC model parameter estimation methods with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, to estimate the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), the latter of which has received little attention in previous works, are then used to identify the parameters of two well-known semi-empirical models from the literature, those of Squadrito et al. and Amphlett et al. Finally, the results achieved and future challenges are discussed.
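Of the three online identifiers named in this abstract, recursive least squares is the simplest to sketch. The following is a generic RLS update with a forgetting factor for any linear-in-parameters model y = φᵀθ; the regressors and "true" parameter values are stand-ins, not an actual fuel cell model.

```python
import numpy as np

# Minimal recursive least squares (RLS) with a forgetting factor, the kind
# of online identifier the review discusses for tracking drifting parameters.
def rls_step(theta, P, phi, y, lam=0.98):
    """One RLS update; lam < 1 discounts old data to track slow drift."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)         # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                 # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([0.9, -0.05])               # hypothetical parameters
theta = np.zeros(2)
P = np.eye(2) * 1e3                               # large P: uninformative start
for _ in range(500):
    phi = rng.normal(size=2)                      # excitation signal
    y = phi @ true_theta + rng.normal(scale=1e-3) # noisy measurement
    theta, P = rls_step(theta, P, phi, y)
```

With persistent excitation the estimate converges to the true parameters; the forgetting factor trades tracking speed against noise sensitivity.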
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
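The windowed-estimation idea can be illustrated with a much simpler stand-in than the paper's pilot model: a single time-varying gain K(t), re-estimated by least squares in a sliding window. The signal model and noise levels below are assumptions for demonstration.

```python
import numpy as np

# Toy windowed estimation of a time-varying parameter: a gain K(t) that
# ramps from 1 to 2 is re-estimated by least squares in a sliding window.
# (The paper's pilot model and maximum likelihood criterion are richer.)
rng = np.random.default_rng(1)
n, win = 2000, 200
t = np.arange(n)
K_true = 1.0 + t / (n - 1)                        # slow ramp from 1 to 2
u = rng.normal(size=n)                            # input signal
y = K_true * u + rng.normal(scale=0.05, size=n)   # noisy output ("remnant")

K_hat = np.empty(n - win)
for i in range(n - win):
    uu, yy = u[i:i + win], y[i:i + win]
    K_hat[i] = (uu @ yy) / (uu @ uu)              # per-window LS estimate
```

The window length sets the same trade-off the abstract describes: long windows suppress remnant noise but smear fast parameter changes.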
Graphical Evaluation of the Ridge-Type Robust Regression Estimators in Mixture Experiments
Erkoc, Ali; Emiroglu, Esra
2014-01-01
In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of biasing parameter, we use fraction of design space plots for evaluating the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on Hald cement data set. PMID:25202738
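The ridge estimator at the core of the methods above has the closed form β(k) = (XᵀX + kI)⁻¹Xᵀy, where k is the biasing parameter. The collinear design below is purely illustrative (not a mixture experiment), and the robust variants the paper studies replace this least-squares core.

```python
import numpy as np

# Ridge estimator scanned over the biasing parameter k, on a deliberately
# collinear design where ordinary least squares (k = 0) is unstable.
rng = np.random.default_rng(2)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)          # nearly collinear column
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)                       # OLS: inflated coefficients
beta_k = ridge(X, y, 1.0)                         # shrunken, stabler estimate
```

For any k > 0 the ridge solution is strictly shorter than the OLS solution; choosing k well (e.g., via the fraction-of-design-space plots the paper proposes) balances this bias against the variance reduction.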
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows rapid generation of replicate parameter estimates, without the errors associated with exhaustive manual calculations. PMID:19641642
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
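The reduction to a scalar equation f(q) = 0 can be sketched under a simplified assumed anomaly model, g(x) = A / (x² + z²)^q, so that the normalized anomaly is N(x) = (1 + (x/z)²)^(−q). Eliminating the depth z between two normalized values then yields a one-variable root-finding problem. The model and the elimination step here are illustrative assumptions; the paper's exact formulation differs in detail.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch of the f(q) = 0 idea: from N(x) = (1 + (x/z)^2)^(-q),
# the implied depth^2 is z^2 = x^2 / (N^(-1/q) - 1). Two normalized
# anomaly values must imply the same depth, giving a scalar equation in q.
def z2_from(N, x, q):
    return x**2 / (N**(-1.0 / q) - 1.0)

def f(q, x1, N1, x2, N2):
    return z2_from(N1, x1, q) - z2_from(N2, x2, q)

# Synthetic noise-free data from q = 1.5 (sphere-like), depth z = 10
q_true, z = 1.5, 10.0
x1, x2 = 5.0, 15.0
N1 = (1 + (x1 / z)**2)**(-q_true)
N2 = (1 + (x2 / z)**2)**(-q_true)
q_hat = brentq(f, 0.5, 3.0, args=(x1, N1, x2, N2))
```

Once q is recovered, the depth z and amplitude coefficient A follow by back-substitution, mirroring the z- and A-parameter procedures the abstract mentions.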
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to the limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. Many parameter estimation methods exist; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of the data during parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the MODFLOW groundwater model and implement both Extended Kalman Filtering and Bayesian Maximum Entropy Filtering in Python for parameter estimation, providing a conventional filtering method alongside one that also considers data uncertainty. The study was conducted as a numerical experiment, combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model. Virtual observation wells were used to sample the simulated groundwater system periodically. The results showed that, by accounting for the uncertainty of the data, the Bayesian maximum entropy filter provides better real-time parameter estimates.
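The extended Kalman filter's standard trick for parameter estimation is state augmentation: append the unknown parameter to the state vector, which makes even a linear system nonlinear and requires the EKF's Jacobian. The scalar system below is a generic toy, not a groundwater model.

```python
import numpy as np

# EKF parameter estimation by state augmentation: the unknown coefficient a
# of x[k+1] = a*x[k] + w is appended to the state s = [x, a].
rng = np.random.default_rng(3)
a_true = 0.8
Q = np.diag([1e-2, 1e-6])                         # process noise (state, parameter)
R = 1e-2                                          # measurement noise variance

x = 1.0                                           # true (hidden) state
s = np.array([0.0, 0.5])                          # augmented estimate [x, a]
P = np.eye(2)
for _ in range(400):
    x = a_true * x + rng.normal(scale=0.1)        # simulate the true system
    y = x + rng.normal(scale=0.1)                 # noisy measurement
    # Predict: f(s) = [a*x, a];  Jacobian F = [[a, x], [0, 1]]
    F = np.array([[s[1], s[0]], [0.0, 1.0]])
    s = np.array([s[1] * s[0], s[1]])
    P = F @ P @ F.T + Q
    # Update with measurement matrix H = [1, 0]
    K = P[:, 0] / (P[0, 0] + R)
    s = s + K * (y - s[0])
    P = P - np.outer(K, P[0, :])
a_hat = s[1]
```

The small process-noise entry for the parameter lets the estimate drift slowly, which is what allows on-line re-calibration as conditions change.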
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus in this investigation is put on the de-correlation of different geodetic parameter groups due to the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy how to obtain reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-15
In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Markov Chain Monte Carlo methods for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
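As a much simpler stand-in for the minimum divergence estimators derived in this paper, the following Huber-type M-estimator of the mean shows the basic mechanism by which robust estimators downweight outliers such as eye blinks or loose-electrode artifacts. The tuning constant and contamination level are illustrative choices.

```python
import numpy as np

# Robust location estimate by iteratively reweighted averaging (Huber-type
# M-estimator): large residuals get weight c/|r| instead of 1, so gross
# outliers barely move the estimate.
def huber_mean(x, c=1.345, n_iter=50):
    mu = np.median(x)                             # robust starting point
    scale = np.median(np.abs(x - mu)) * 1.4826    # MAD-based scale
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(4)
clean = rng.normal(loc=0.0, scale=1.0, size=500)
data = np.concatenate([clean, np.full(25, 50.0)]) # 5% gross outliers
m_robust = huber_mean(data)
m_naive = data.mean()
```

The sample mean is dragged far from zero by 5% contamination, while the M-estimator stays near the clean center; the paper's structured estimators extend this idea to whole trials and covariance matrices.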
Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong
2013-09-01
This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical model based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic compensation type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaption algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technology is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling is found to be inadequate for the uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria.
Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
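The Dynamically Dimensioned Search used here can be sketched compactly: the probability of perturbing each decision variable decays with the iteration count, so the search narrows from global to local without assuming parameter distributions. The version below is a simplified illustration (after Tolson and Shoemaker's DDS) run on a toy sphere objective, not the study's transport model.

```python
import numpy as np

# Compact sketch of Dynamically Dimensioned Search (DDS): the number of
# perturbed dimensions shrinks as iterations proceed; acceptance is greedy.
def dds(obj, lo, hi, m=1000, r=0.2, seed=5):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = lo + rng.random(lo.size) * (hi - lo)      # random start
    fx = obj(x)
    for i in range(1, m + 1):
        p = max(1.0 - np.log(i) / np.log(m), 1.0 / lo.size)
        mask = rng.random(lo.size) < p            # dims to perturb
        if not mask.any():
            mask[rng.integers(lo.size)] = True    # always perturb one dim
        cand = x.copy()
        step = rng.normal(scale=r * (hi - lo))    # scaled normal perturbation
        cand[mask] += step[mask]
        cand = np.clip(cand, lo, hi)              # simple bound handling
        fc = obj(cand)
        if fc < fx:                               # greedy acceptance
            x, fx = cand, fc
    return x, fx

sphere = lambda v: float(np.sum(v**2))
x_best, f_best = dds(sphere, [-5, -5, -5], [5, 5, 5])
```

In the study's setting, the objective would be a penalized misfit, and candidate sets passing the physical criteria would be retained for the uncertainty envelope.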
USDA-ARS?s Scientific Manuscript database
Classic rainfall-runoff models usually use historical data to estimate model parameters and mean values of parameters are considered for predictions. However, due to climate changes and human effects, the parameters of model change temporally. To overcome this problem, Normalized Difference Vegetati...
Pattern statistics on Markov chains and sensitivity to parameter estimation
Nuel, Grégory
2006-01-01
Background: In order to compute pattern statistics in computational biology a Markov model is commonly used to take into account the sequence composition. Usually its parameter must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are on pattern studies (finding the most over-represented words in a genome, the most significant common words to a set of sequences,...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of σ, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. Conclusion: We establish that the use of high order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916
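The delta method used above propagates estimation error through a statistic: sd(f(p̂)) ≈ |f′(p)| · sd(p̂). A minimal numeric check, with f(p) = p⁵ standing in for the probability of a 5-letter word under an order-0 model (an illustrative assumption, much simpler than the paper's Markov setting):

```python
import numpy as np

# Delta-method approximation vs. Monte Carlo for the statistic f(p_hat) = p_hat**5.
p, n = 0.25, 10_000
sd_p = np.sqrt(p * (1 - p) / n)                   # sd of p_hat from n letters
f = lambda q: q**5
fprime = 5 * p**4                                 # f'(p)
sd_delta = fprime * sd_p                          # delta-method approximation

# Monte Carlo check of the same standard deviation
rng = np.random.default_rng(6)
p_hat = rng.binomial(n, p, size=20_000) / n
sd_mc = f(p_hat).std()
```

The first-order agreement is close here because sd(p̂) is small relative to p; the paper's point is that for high-order Markov models this estimation variance can dominate the pattern statistic itself.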
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
Lagishetty, Chakradhar V; Duffull, Stephen B
2015-11-01
Clinical studies include occurrences of rare variables, such as genotypes, whose low frequency and strength make their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to power but are non-ignorable, they are considered nuisance effects: they must be accounted for, but due to type 1 error they are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type 1 error inflation, (2) calibrating its strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type 1 error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow for the number of parameters to diverge to infinity, as well as collinearity among a large number of variables; the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and the heterogeneous conditions of a real domain. We will present an efficient algorithm in order to estimate the chemical parameters using Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical model describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm will be used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.
Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash
2014-03-01
One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters.
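The Bayesian machinery behind such joint posteriors can be illustrated with a minimal random-walk Metropolis sampler. The toy problem below infers one rate constant from noisy first-order decay data under a flat prior; all of it (the model, noise level, and proposal width) is an assumption for demonstration, far simpler than ASM1.

```python
import numpy as np

# Minimal random-walk Metropolis sampler for a one-parameter posterior.
rng = np.random.default_rng(8)
t = np.linspace(0.0, 5.0, 20)
k_true, sigma = 0.7, 0.05
y = np.exp(-k_true * t) + rng.normal(scale=sigma, size=t.size)

def log_post(k):
    if k <= 0:
        return -np.inf                            # prior support k > 0
    resid = y - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2     # flat prior, Gaussian errors

k, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20_000):
    k_prop = k + rng.normal(scale=0.05)           # random-walk proposal
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance
        k, lp = k_prop, lp_prop
    samples.append(k)
post = np.array(samples[5000:])                   # drop burn-in
```

With many parameters, the same accept/reject loop (usually with smarter proposals) yields the joint posterior whose correlations and marginal widths the paper analyzes.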
Single neuron modeling and data assimilation in BNST neurons
NASA Astrophysics Data System (ADS)
Farsian, Reza
Neurons, although tiny in size, are vastly complicated systems, which are responsible for the most basic yet essential functions of any nervous system. Even the most simple models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states which are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential in identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons have been demonstrated: dynamical parameter estimation (DPE) and a Markov Chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. These methods have been applied to biological neurons in the Bed Nucleus of the Stria Terminalis (BNST) of rats. States and parameters of the neurons were estimated, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. The knowledge of biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron in order to build robust models of neuron networks.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.
2005-01-01
An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.
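The core idea of a constant-gain filter, applying one precomputed Kalman gain to every innovation instead of propagating a covariance, can be sketched on a toy problem. This is a generic constant-gain estimator under assumed values, not NASA's CGEKF design: a single "health" tuning parameter scales a measured output, and a fixed scalar gain drives the estimate toward it.

```python
import random

random.seed(0)

# Toy system: a slowly varying health parameter h scales a measured output.
# True model: y = (1 + h) * y_nominal + noise; h is not directly measurable.
y_nominal = 100.0
true_h = -0.03  # e.g. a 3% efficiency loss due to degradation

# Constant gain chosen offline. In a real CGEKF it would come from solving a
# steady-state Riccati equation at a single design point.
K = 0.05

h_hat = 0.0
for _ in range(500):
    y_meas = (1.0 + true_h) * y_nominal + random.gauss(0.0, 0.5)
    y_pred = (1.0 + h_hat) * y_nominal
    h_hat += K * (y_meas - y_pred) / y_nominal  # constant-gain update

# A non-measurable quantity (here just the corrected output, standing in for
# thrust) can then be read off the tuned on-board model.
thrust_est = (1.0 + h_hat) * y_nominal
```

The trade-off is the one the abstract describes: the single gain avoids on-line covariance propagation, and the tuning parameter keeps the on-board model tracking a degraded engine.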
Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine
Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian
2018-01-01
In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK-parameter and concentration-across-time models, a moderate heritability was estimated.
The model that utilized the plasma drug concentration across time resulted in estimates with a smaller standard error compared to models that utilized PK parameters. The current study found that a low to moderate proportion of the phenotypic variation in metabolizing fenbendazole and flunixin meglumine was explained by genetics. PMID:29487615
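To illustrate the Legendre-polynomial basis behind the random regression model above, the sketch below fits only a fixed population concentration curve by ordinary least squares; the random sire and batch effects of the actual mixed model are omitted, and the decay curve and sampling times are invented for illustration.

```python
import math, random

random.seed(2)

# Legendre polynomials P0..P2 on time standardized to [-1, 1], the basis
# a random regression model uses for concentration curves.
def legendre_basis(t, t_min=0.0, t_max=48.0):
    x = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
    return [1.0, x, 0.5 * (3.0 * x * x - 1.0)]

# Synthetic plasma concentrations over 48 h (exponential decay plus noise).
times = [0.5, 1, 2, 4, 8, 12, 24, 36, 48]
conc = [10.0 * math.exp(-0.08 * t) + random.gauss(0.0, 0.1) for t in times]

# Ordinary least squares for the fixed population curve: solve the 3x3
# normal equations by Gaussian elimination to stay dependency-free.
X = [legendre_basis(t) for t in times]
n = 3
A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
     for i in range(n)]
b = [sum(X[k][i] * conc[k] for k in range(len(X))) for i in range(n)]
for i in range(n):                      # forward elimination
    for j in range(i + 1, n):
        f = A[j][i] / A[i][i]
        for c in range(n):
            A[j][c] -= f * A[i][c]
        b[j] -= f * b[i]
coef = [0.0] * n
for i in range(n - 1, -1, -1):          # back substitution
    coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]

fitted = [sum(c * p for c, p in zip(coef, legendre_basis(t))) for t in times]
```

In the full model, each sire would additionally get its own random deviation expressed in the same basis, which is what makes per-time heritabilities estimable.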
NASA Astrophysics Data System (ADS)
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, has been presented. We have shown the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, which is a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained have shown the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed in various mineral exploration settings, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications have shown that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule.
Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm has provided reliable parameter estimates within the sampling limits of the M-H sampler. Although DE is not a common inversion technique in geophysics, it merits more interest for parameter estimation from potential field data, considering its good accuracy, low computational cost (in the present problem) and the fact that a well-constructed initial guess is not required to reach the global minimum.
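The DE/best/1/bin strategy the abstract names (mutate around the current best member, then binomial crossover) is short enough to sketch. The anomaly formula A·z0/((x-x0)² + z0²)^q is an assumed generic form for an isolated source, not necessarily the exact forward model the authors use, and the bounds and control parameters (F, CR) are illustrative.

```python
import random

random.seed(3)

# Assumed forward model for a residual gravity anomaly: amplitude A,
# source depth z0, horizontal origin x0, shape factor q.
def anomaly(params, x):
    A, z0, x0, q = params
    return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

xs = [float(x) for x in range(-50, 51, 2)]
true = (120.0, 8.0, 5.0, 1.0)
data = [anomaly(true, x) for x in xs]

def error_energy(p):
    # Sum-of-squares misfit, the "error energy" mapped in the abstract.
    return sum((anomaly(p, x) - d) ** 2 for x, d in zip(xs, data))

bounds = [(1.0, 500.0), (1.0, 30.0), (-20.0, 20.0), (0.5, 1.5)]
NP, F, CR, D = 30, 0.7, 0.9, 4
pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
for _ in range(300):
    best = min(pop, key=error_energy)
    for i in range(NP):
        r1, r2 = random.sample([p for p in pop if p is not pop[i]], 2)
        jr = random.randrange(D)   # guarantee at least one mutated gene
        trial = [best[j] + F * (r1[j] - r2[j])
                 if (random.random() < CR or j == jr) else pop[i][j]
                 for j in range(D)]
        trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
        if error_energy(trial) <= error_energy(pop[i]):  # greedy selection
            pop[i] = trial

best = min(pop, key=error_energy)
```

Note that no initial guess is supplied, only bounds, which is the practical advantage the abstract emphasizes over local inversion schemes.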
The estimation of material and patch parameters in a PDE-based circular plate model
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.
1995-01-01
The estimation of material and patch parameters for a system involving a circular plate, to which piezoceramic patches are bonded, is considered. A partial differential equation (PDE) model for the thin circular plate is used, with the passive and active contributions from the patches included in the internal and external bending moments. This model contains piecewise constant parameters describing the density, flexural rigidity, Poisson ratio, and Kelvin-Voigt damping for the system, as well as patch constants and a coefficient for viscous air damping. Examples demonstrating the estimation of these parameters with experimental acceleration data and a variety of inputs to the experimental plate are presented. By using a physically derived PDE model to describe the system, parameter sets consistent across experiments are obtained, even when phenomena such as damping due to electric circuits affect the system dynamics.
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates, and on tumor volume calculated from two diameters in larger tumors, during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using a stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and eight approaches were then used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, when the missing data induced by sacrificing the animals were not considered, parameter estimates were biased, leading to false inferences in terms of compound potency: the threshold concentration for tumor eradication when ignoring censoring was 581 ng/ml, but the true value was 240 ng/ml.
Regularized Semiparametric Estimation for Ordinary Differential Equations
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-01-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and the biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constant within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
NASA Astrophysics Data System (ADS)
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-01
In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Markov chain Monte Carlo, allowing orders of magnitude improvement in speed.
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems which are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
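For the uncensored single-mode case, maximum likelihood for the smallest extreme-value (Gumbel-minimum) distribution reduces to a one-dimensional fixed-point iteration for the scale, a hedged sketch of which follows. Negating the data turns it into the standard Gumbel-maximum problem; the true location and scale values are invented for the demonstration, and the competing-mode and stress-dependence aspects of the paper are not modeled.

```python
import math, random

random.seed(4)

# Simulate failure times from a smallest-extreme-value distribution with
# location u and scale b, via inverse-CDF sampling:
# F(x) = 1 - exp(-exp((x - u)/b))  =>  x = u + b*ln(-ln(1 - U))
u_true, b_true = 100.0, 5.0
def rsev(u, b):
    return u + b * math.log(-math.log(1.0 - random.random()))

data = [rsev(u_true, b_true) for _ in range(2000)]

# MLE via the Gumbel-maximum likelihood equations applied to the negated data:
# b solves  b = ybar - sum(y*exp(-y/b)) / sum(exp(-y/b)).
y = [-x for x in data]
ybar = sum(y) / len(y)
b = 1.0                                   # initial guess for the scale
for _ in range(200):                      # fixed-point iteration
    w = [math.exp(-yi / b) for yi in y]
    b = ybar - sum(yi * wi for yi, wi in zip(y, w)) / sum(w)
u_max = -b * math.log(sum(math.exp(-yi / b) for yi in y) / len(y))

# Map the Gumbel-maximum estimates back to the minima distribution.
u_hat, b_hat = -u_max, b
```

With competing failure modes, the likelihood would instead multiply each mode's density by the survivor functions of the other modes, but the per-mode scale equation keeps this same shape.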
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
Estimating the cost of production stoppage
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1979-01-01
An estimation model considers learning curve quantities and the time of a break to forecast losses due to a break in the production schedule. The major parameters capable of predicting costs are the number of units made prior to the break, the length of the production break, and the slope of the learning curve produced prior to the break.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.
Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth
2017-05-26
In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, due to lensing corrections on cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between the emission at the source and the detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to or even based on these spectra. We study in detail the impact of higher order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_{eff}. We find that neglecting higher order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^{-3}). Furthermore, it leads to a shift of the parameter N_{eff} by nearly 2σ at the level of accuracy targeted by future S4 surveys.
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distributions of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments performed with the 1D coupled model GOTM-NORWECOM and Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L. : Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation : application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi :10.1016/j.jmarsys.2011.07.007 [2] Gelman A. : Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
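The reformulation trick can be sketched with a softmax-style (logistic-normal) change of variables, which is one simple variant in the spirit of the transforms described above rather than either of the paper's two parameterizations: the filter updates unconstrained variables, and the map back to the simplex restores positivity and the sum-to-one constraint by construction. The ensemble size, number of prey types, and "analysis" step here are all stand-in assumptions.

```python
import math, random

random.seed(5)

# Zooplankton grazing preferences must be positive and sum to one, which an
# ensemble Kalman update cannot enforce directly. Workaround: update
# unconstrained variables z and map them to the simplex afterwards.
def to_simplex(z):
    # Softmax: positive components summing to exactly one.
    m = max(z)                      # subtract max for numerical stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

# A small "ensemble" of unconstrained parameter vectors (3 prey types).
ensemble = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(50)]

# Stand-in for an EnKF analysis step: nudge every member toward a point that
# favors prey type 0. A real filter would use innovations and a Kalman gain.
target = [1.5, 0.0, -1.0]
analysis = [[zi + 0.5 * (t - zi) for zi, t in zip(member, target)]
            for member in ensemble]

# Every analyzed member maps back to a valid diet composition.
prefs = [to_simplex(member) for member in analysis]
mean_pref = [sum(p[k] for p in prefs) / len(prefs) for k in range(3)]
```

Whatever linear update the filter applies in z-space, the transformed ensemble can never leave the simplex, which is exactly what the raw parameterization cannot guarantee.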
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations.
With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
Bravo, Hector R.; Jiang, Feng; Hunt, Randall J.
2002-01-01
Parameter estimation is a powerful way to calibrate models. While head data alone are often insufficient to estimate unique parameters due to model nonuniqueness, flow‐and‐heat‐transport modeling can constrain estimation and allow simultaneous estimation of boundary fluxes and hydraulic conductivity. In this work, synthetic and field models that did not converge when head data were used did converge when head and temperature were used. Furthermore, frequency domain analyses of head and temperature data allowed selection of appropriate modeling timescales. Inflows in the Wilton, Wisconsin, wetlands could be estimated over periods such as a growing season and over periods of a few days when heads were nearly steady and groundwater temperature varied during the day. While this methodology is computationally more demanding than traditional head calibration, the results gained are unobtainable using the traditional approach. These results suggest that temperature can efficiently supplement head data in systems where accurate flux calibration targets are unavailable.
Estimation of improved resolution soil moisture in vegetated areas using passive AMSR-E data
NASA Astrophysics Data System (ADS)
Moradizadeh, Mina; Saradjian, Mohammad R.
2018-03-01
Microwave remote sensing provides a unique capability for soil parameter retrieval. Therefore, various soil parameter estimation models have been developed using brightness temperature (BT) measured by passive microwave sensors. Due to the low resolution of satellite microwave radiometer data, the main goal of this study is to develop a downscaling approach to improve the spatial resolution of soil moisture estimates with the use of higher resolution visible/infrared sensor data. Accordingly, after the soil parameters have been obtained using the Simultaneous Land Parameters Retrieval Model algorithm, the downscaling method has been applied to the soil moisture estimates, which have been validated against in situ soil moisture data. Advanced Microwave Scanning Radiometer-EOS BT data in the Soil Moisture Experiment 2003 region in the south and north of Oklahoma have been used to this end. Results illustrated that the soil moisture variability is effectively captured at the 5 km spatial scale without a significant degradation of accuracy.
Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia
NASA Astrophysics Data System (ADS)
Hussin, F. N.; Rahman, H. A.; Bahar, A.
2017-09-01
The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the geometric Brownian motion (GBM) model: the historical and the discrete method. The historical method is a statistical method which uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. Meanwhile, the discrete method considers the transition density function of the lognormal diffusion process, from which the estimates are derived by the maximum likelihood method. These two methods are used to find parameter estimates for samples of Malaysian gold share price data, namely Financial Times Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah. Modelling of the gold share price is essential since fluctuations in gold prices affect the worldwide economy, including Malaysia's. It is found that the discrete method gives better parameter estimates than the historical method due to its smaller Root Mean Square Error (RMSE) value.
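The historical method mentioned above amounts to computing the sample mean and variance of the logarithmic returns and annualizing them, which the sketch below demonstrates on a simulated GBM path. The drift, volatility, and time step are illustrative values, not the paper's FTSE Bursa Malaysia estimates.

```python
import math, random

random.seed(6)

# Simulate a daily GBM price path:
#   S_{t+1} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
mu_true, sigma_true, dt = 0.10, 0.25, 1.0 / 252.0
prices = [100.0]
for _ in range(5000):
    z = random.gauss(0.0, 1.0)
    prices.append(prices[-1] * math.exp((mu_true - 0.5 * sigma_true ** 2) * dt
                                        + sigma_true * math.sqrt(dt) * z))

# Historical method: sample statistics of the log returns, annualized.
log_ret = [math.log(b / a) for a, b in zip(prices, prices[1:])]
n = len(log_ret)
rbar = sum(log_ret) / n
s2 = sum((r - rbar) ** 2 for r in log_ret) / (n - 1)

sigma_hat = math.sqrt(s2 / dt)                  # annualized volatility
mu_hat = rbar / dt + 0.5 * sigma_hat ** 2       # annualized drift
```

The volatility estimate is sharp even on moderate samples, while the drift estimate is inherently noisy, one practical reason an alternative such as the transition-density (maximum likelihood) method is worth comparing via RMSE.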
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.
1994-01-01
A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of the patches, however, change the geometry and material properties of the structure and involve unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.
NASA Technical Reports Server (NTRS)
Preisig, Joseph Richard Mark
1988-01-01
A Kalman filter was designed to yield optimal estimates of geophysical parameters from Very Long Baseline Interferometry (VLBI) group delay data. The geophysical parameters are the polar motion components, adjustments to nutation in obliquity and longitude, and a change in the length of day parameter. The VLBI clock (and clock rate) parameters and atmospheric zenith delay parameters are estimated simultaneously. Filter background is explained. The IRIS (International Radio Interferometric Surveying) VLBI data are Kalman filtered. The resulting polar motion estimates are examined. There are polar motion signatures at the times of three large earthquakes occurring in 1984 to 1986: Mexico, 19 September 1985 (magnitude M_s = 8.1); Chile, 3 March 1985 (M_s = 7.8); and Taiwan, 14 November 1986 (M_s = 7.8). Breaks in polar motion occurring about 20 days after the earthquakes appear to correlate well with the onset of increased regional seismic activity and with the subsequent return to more normal seismicity. While the contribution of these three earthquakes to polar motion excitations is small, the cumulative excitation due to earthquakes, or seismic phenomena, over a Chandler wobble damping period may be significant. Mechanisms for polar motion excitation due to solid earth phenomena are examined. Excitation functions are computed, but the data spans are too short to draw conclusions based on these data.
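The sequential estimation machinery behind such results can be illustrated with a minimal scalar Kalman filter tracking a random-walk signal, a toy stand-in for a slowly varying geophysical parameter; all numbers below are invented:

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise.

    State:       x_k = x_{k-1} + w_k,  w_k ~ N(0, q)
    Measurement: z_k = x_k + v_k,      v_k ~ N(0, r)
    """
    x, p, est = x0, p0, []
    for z in zs:
        p = p + q                  # predict: variance grows by the process noise
        g = p / (p + r)            # Kalman gain
        x = x + g * (z - x)        # update with the measurement innovation
        p = (1.0 - g) * p          # posterior variance
        est.append(x)
    return np.array(est)

# Slowly drifting "parameter" (e.g. a polar-motion component, in toy units)
rng = np.random.default_rng(1)
true_state = 5.0 + np.cumsum(rng.normal(0.0, 0.01, 500))
zs = true_state + rng.normal(0.0, 0.5, 500)

est = kalman_1d(zs, q=1e-4, r=0.25, x0=zs[0])
err_filt = np.abs(est[100:] - true_state[100:]).mean()
err_meas = np.abs(zs[100:] - true_state[100:]).mean()
print(err_filt, err_meas)   # filtered error is well below the raw measurement error
```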
Wicke, Jason; Dumas, Genevieve A
2010-02-01
The geometric method combines a volume and a density function to estimate body segment parameters and offers the best opportunity for developing accurate models. In the trunk, there are many different tissues that differ greatly in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to this diversity if accurate inertial estimates are to be possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
The insight into the dark side - I. The pitfalls of the dark halo parameters estimation
NASA Astrophysics Data System (ADS)
Saburova, Anna S.; Kasparova, Anastasia V.; Katkov, Ivan Yu.
2016-12-01
We examined the reliability of estimates of pseudo-isothermal, Burkert and NFW dark halo parameters for methods based on mass-modelling of rotation curves. To do so, we constructed χ2 maps over a grid of dark matter halo parameters for a sample of 14 disc galaxies with high-quality rotation curves from THINGS. We considered two variants of the models, in which (a) the mass-to-light ratios of the disc and bulge were taken as free parameters, and (b) the mass-to-light ratios were fixed in a narrow range according to stellar population models. To reproduce the possible observational features of real galaxies, we made tests showing that the parameters of the three halo types change drastically when kinematic data are lacking in the central or peripheral areas and for different spatial resolutions. We showed that, due to the degeneracy between the central densities and the radial scales of the dark haloes, there are considerable uncertainties in the estimates of their concentrations. For this reason, it is also impossible to draw any firm conclusion about the universality of the dark halo column density based on mass-modelling of even a high-quality rotation curve. The problem is not solved by fixing the density of baryonic matter. In contrast, the estimates of the dark halo mass within the optical radius are much more reliable. We demonstrated that one can successfully evaluate the halo mass using the pure best-fitting method without any restrictions on the mass-to-light ratios.
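A χ2 map over halo parameters of the kind described can be sketched in a few lines. The pseudo-isothermal rotation curve below uses toy units and synthetic data, not the THINGS sample:

```python
import numpy as np

def v_iso(r, v_inf, r0):
    """Pseudo-isothermal halo rotation curve (toy units: km/s, kpc).
    v_inf^2 = 4*pi*G*rho0*r0^2 is the asymptotic circular speed."""
    return v_inf * np.sqrt(1.0 - (r0 / r) * np.arctan(r / r0))

rng = np.random.default_rng(2)
r = np.linspace(0.5, 15.0, 30)                   # synthetic radii
sigma = 5.0                                      # velocity error per point
v_obs = v_iso(r, 180.0, 2.5) + rng.normal(0.0, sigma, r.size)

# chi^2 map over the (v_inf, r0) grid; the central-density/scale degeneracy
# shows up as an elongated valley of low chi^2 in maps like this one
v_grid = np.linspace(120.0, 240.0, 121)
r_grid = np.linspace(0.5, 6.0, 111)
chi2 = np.array([[np.sum((v_obs - v_iso(r, v, s))**2) / sigma**2
                  for s in r_grid] for v in v_grid])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
v_best, r_best = v_grid[i], r_grid[j]
print(v_best, r_best)   # best-fitting parameters
```

Here the grid minimum still lands near the generating parameters because the synthetic curve is well sampled; the abstract's point is that sparse or truncated data stretch that valley dramatically.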
NASA Astrophysics Data System (ADS)
Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan
2014-12-01
Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in the clinical diagnosis of various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
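Wiener estimation itself is the linear minimum-mean-square-error estimator W = C_xy C_yy^{-1}; a generic sketch on a synthetic linear forward model, not the skin-optics model or the sequential weighting scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training set: "tissue parameters" x mapped to "color measurements" y
# through an assumed linear forward model A plus measurement noise.
n_train, n_params, n_bands = 2000, 2, 6
A = rng.normal(size=(n_bands, n_params))
x_train = rng.normal(size=(n_train, n_params))
y_train = x_train @ A.T + 0.05 * rng.normal(size=(n_train, n_bands))

# Wiener estimation: W minimizes E||x - W y||^2, giving W = C_xy C_yy^{-1}
C_xy = x_train.T @ y_train / n_train
C_yy = y_train.T @ y_train / n_train
W = C_xy @ np.linalg.inv(C_yy)

# Apply to new measurements
x_new = rng.normal(size=(200, n_params))
y_new = x_new @ A.T + 0.05 * rng.normal(size=(200, n_bands))
x_hat = y_new @ W.T
rmse = np.sqrt(np.mean((x_hat - x_new) ** 2))
print(rmse)   # small reconstruction error on this linear-Gaussian toy problem
```

The paper's sequential weighted variant reweights the measurement channels per parameter before forming this estimator; the sketch above is only the classical baseline it improves on.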
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
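For scalar parameters, the ESS the authors build on is conventionally computed from the sample autocorrelation function; a minimal sketch, validated against an AR(1) chain whose theoretical ESS is known:

```python
import numpy as np

def ess(x, max_lag=1000):
    """ESS = N / (1 + 2 * sum_k rho_k), truncating the sum at the first
    non-positive sample autocorrelation (a common initial-sequence rule)."""
    x = np.asarray(x, float) - np.mean(x)
    denom = x @ x
    tau = 1.0
    for k in range(1, max_lag):
        rho = (x[:-k] @ x[k:]) / denom
        if rho <= 0:
            break
        tau += 2.0 * rho
    return len(x) / tau

# AR(1) chain with known autocorrelation: theoretical ESS = N*(1-phi)/(1+phi)
rng = np.random.default_rng(4)
phi, n = 0.9, 50_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
ess_hat = ess(x)
ess_theory = n * (1 - phi) / (1 + phi)
print(ess_hat, ess_theory)
```

The paper's contribution is precisely that no analogue of this scalar recipe existed for tree topologies, which have no natural ordering on which to define autocorrelation.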
NASA Astrophysics Data System (ADS)
Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.
2015-11-01
Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere to provide a better understanding of the impact of this parameter's uncertainty on models. Because temperature and pressure conditions are much lower than the laboratory conditions under which the bimolecular diffusion parameters were measured, we apply a Bayesian framework, which is problem-agnostic, to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library, which also performs a propagation of the uncertainties in the calibrated parameters to the temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated with temperature, and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and its associated uncertainty to obtain an estimate, with uncertainty due to bimolecular diffusion, of the methane molar fraction as a function of altitude. The results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km, where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
A mechanistic modeling and data assimilation framework for Mojave Desert ecohydrology
Ng, Gene-Hua Crystal.; Bedford, David; Miller, David
2014-01-01
This study demonstrates and addresses challenges in coupled ecohydrological modeling in deserts, which arise due to unique plant adaptations, marginal growing conditions, slow net primary production rates, and highly variable rainfall. We consider model uncertainty from both structural and parameter errors and present a mechanistic model for the shrub Larrea tridentata (creosote bush) under conditions found in the Mojave National Preserve in southeastern California (USA). Desert-specific plant and soil features are incorporated into the CLM-CN model by Oleson et al. (2010). We then develop a data assimilation framework using the ensemble Kalman filter (EnKF) to estimate model parameters based on soil moisture and leaf-area index observations. A new implementation procedure, the “multisite loop EnKF,” tackles parameter estimation difficulties found to affect desert ecohydrological applications. Specifically, the procedure iterates through data from various observation sites to alleviate adverse filter impacts from non-Gaussianity in small desert vegetation state values. It also readjusts inconsistent parameters and states through a model spin-up step that accounts for longer dynamical time scales due to infrequent rainfall in deserts. Observation error variance inflation may also be needed to help prevent divergence of estimates from true values. Synthetic test results highlight the importance of adequate observations for reducing model uncertainty, which can be achieved through data quality or quantity.
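The EnKF parameter update at the core of such a framework can be sketched on a toy joint state-parameter problem; the soil-moisture decay model and all numbers below are invented, not the CLM-CN configuration:

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_analysis(ens, z, obs_op, r):
    """Stochastic EnKF analysis step on an augmented [state, parameter] ensemble."""
    y = obs_op(ens)                                # predicted observations
    X = ens - ens.mean(axis=0)
    Y = y - y.mean(axis=0)
    n = len(ens)
    cov_xy = X.T @ Y / (n - 1)
    cov_yy = Y.T @ Y / (n - 1) + r * np.eye(y.shape[1])
    K = cov_xy @ np.linalg.inv(cov_yy)             # Kalman gain
    z_pert = z + rng.normal(0.0, np.sqrt(r), size=y.shape)
    return ens + (z_pert - y) @ K.T

# Toy problem: jointly estimate soil-moisture state s and decay rate k,
# where s' = -k*s between rain events and only s is observed.
true_k, dt, n_ens = 0.30, 0.5, 100
s, observations = 1.0, []
for _ in range(15):
    s *= np.exp(-true_k * dt)
    observations.append(s + rng.normal(0.0, 0.01))

ens = np.column_stack([np.full(n_ens, 1.0),               # state s
                       rng.uniform(0.05, 0.8, n_ens)])    # parameter k
for z in observations:
    ens[:, 0] *= np.exp(-ens[:, 1] * dt)                  # forecast each member
    ens = enkf_analysis(ens, np.array([z]), lambda e: e[:, :1], r=1e-4)

k_est = ens[:, 1].mean()
print(k_est)   # estimated decay rate (true value 0.30)
```

The multisite looping, spin-up readjustment, and variance inflation described in the abstract are exactly the fixes needed when this basic update misbehaves on small, non-Gaussian desert vegetation states.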
USDA-ARS?s Scientific Manuscript database
Spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquis...
Unidimensional Interpretations for Multidimensional Test Items
ERIC Educational Resources Information Center
Kahraman, Nilufer
2013-01-01
This article considers potential problems that can arise in estimating a unidimensional item response theory (IRT) model when some test items are multidimensional (i.e., show a complex factorial structure). More specifically, this study examines (1) the consequences of model misfit on IRT item parameter estimates due to unintended minor item-level…
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which provides the foundation for quantifying the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach, the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. The investigation is carried out on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. The development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena.
We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of the target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analyses focus on the calibration of model parameters through literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type and location of the data.
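The first-order Sobol indices mentioned above can be estimated by plain Monte Carlo once a cheap surrogate is available; a pick-freeze sketch on a toy response surface, not the compaction model:

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=6):
    """First-order Sobol indices via the Saltelli/Jansen pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = []
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]               # replace only input x_i
        s.append(np.mean(fB * (f(AB) - fA)) / var)
    return np.array(s)

# Toy response: additive in x1, nonlinear in x2, inputs uniform on [0, 1].
# The analytic indices are S1 = 15/31 and S2 = 16/31.
f = lambda x: x[:, 0] + x[:, 1] ** 2
s_hat = sobol_first_order(f, 2)
print(s_hat)
```

In the paper's workflow, the PCE surrogate makes the many model evaluations this estimator needs affordable (and PCE coefficients even yield the indices analytically); the Monte Carlo version above is only the generic fallback.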
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. It is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, in which the Hessian of the Lagrangian is estimated using the BFGS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
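The basic idea of differencing for parameter sensitivities, re-solving the optimization problem at perturbed parameter values and differencing the optima, can be sketched on a quadratic program whose solution is known in closed form; the problem below is invented for illustration and omits the active-set complications the abstract addresses:

```python
import numpy as np

# Parametric QP: minimize 0.5*x^T Q x - b(p)^T x, with known optimum
# x*(p) = Q^{-1} b(p); analytically x*(p) = [0, 2p] for the data below.
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
b = lambda p: np.array([2.0 * p, 4.0 * p])
solve = lambda p: np.linalg.solve(Q, b(p))

# Central-difference estimate of the parameter sensitivity dx*/dp
p, dp = 1.0, 1e-4
sens = (solve(p + dp) - solve(p - dp)) / (2.0 * dp)
print(sens)   # → close to [0., 2.]
```

When the active constraint set changes between p - dp and p + dp, the two solves land on different faces of the feasible region and this difference quotient becomes meaningless, which is precisely the failure mode the deflection algorithm and directional derivatives are designed to handle.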
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed
2018-03-01
Chaos control and synchronization of chaotic systems is a challenging problem that has attracted considerable attention in recent years due to its numerous applications in science and industry. This paper concentrates on the control and synchronization problem of the three-dimensional (3D) Zhang chaotic system. First, an adaptive control law and a parameter estimation law are derived for controlling the behavior of the Zhang chaotic system. Then, non-identical synchronization of the Zhang chaotic system is achieved with the Lü chaotic system as the follower system. The synchronization and parameter identification are accomplished by introducing an adaptive control law and a parameter estimation law. Stability of the proposed method is proved by the Lyapunov stability theorem. In addition, the convergence of the estimated parameters to their true but unknown values is evaluated. Finally, some numerical simulations are carried out to illustrate and validate the effectiveness of the suggested method.
NASA Astrophysics Data System (ADS)
Paldor, N.; Berman, H.; Lazar, B.
2017-12-01
Uncertainties in quantitative estimates of the thermohaline circulation in any particular basin are large, partly due to large uncertainties in quantifying excess evaporation over precipitation and surface velocities. A single nondimensional parameter, γ = (qx)/(hu), is proposed to characterize the "strength" of the thermohaline circulation by combining the physical parameters of surface velocity (u), evaporation rate (q), mixed layer depth (h) and trajectory length (x). Values of γ can be estimated directly from cross-sections of salinity or seawater isotopic composition (δ18O and δD). Estimates of γ in the Red Sea and the South-West Indian Ocean are 0.1 and 0.02, respectively, which implies that the thermohaline contribution to the circulation in the former is higher than in the latter. Once the value of γ has been determined in a particular basin, either q or u can be estimated from known values of the remaining parameters. In the studied basins such estimates are consistent with previous studies.
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred format for representing reaction rates in differential equation frameworks, due to their simple structure and their capacity for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems.
Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
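A stripped-down version of the approach, RK4 discretization of a Hill-type ODE plus random-walk Metropolis, can be sketched as follows; the pathway, noise level, and priors are all invented, not the LV/MI model of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def rk4_step(x, u, dt, K, n, vmax=1.0, d=0.5):
    """One RK4 step of x' = vmax*u^n/(K^n + u^n) - d*x (Hill activation, linear decay)."""
    f = lambda x: vmax * u**n / (K**n + u**n) - d * x
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def simulate(K, n, u_seq, dt=0.1, x0=0.0):
    xs, x = [], x0
    for u in u_seq:
        x = rk4_step(x, u, dt, K, n)
        xs.append(x)
    return np.array(xs)

# Synthetic time series with true Hill parameters K=2, n=3
u_seq = 4.0 * np.abs(np.sin(0.05 * np.arange(150))) + 0.1
data = simulate(2.0, 3.0, u_seq) + rng.normal(0.0, 0.02, 150)

def log_post(theta):
    K, n = theta
    if not (0.0 < K < 10.0 and 0.0 < n < 8.0):
        return -np.inf                       # flat prior on a plausible box
    resid = data - simulate(K, n, u_seq)
    return -0.5 * np.sum(resid**2) / 0.02**2

# Random-walk Metropolis, initialized near plausible values
theta = np.array([1.5, 2.0])
lp = log_post(theta)
chain = []
for _ in range(3000):
    prop = theta + rng.normal(0.0, 0.03, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
est = np.array(chain[1500:]).mean(axis=0)
print(est)   # posterior means for (K, n); the generating values were (2, 3)
```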
Brinker, Tessa; Bijma, Piter; Visscher, Jeroen; Rodenburg, T Bas; Ellen, Esther D
2014-05-29
Feather pecking is a major welfare issue in the laying hen industry that leads to mortality. Due to a ban on conventional cages in the EU and on beak trimming in some EU countries, feather pecking will become an even bigger problem. Its severity depends both on the victim receiving pecking and on its group mates inflicting pecking (indirect effects), which together determine the plumage condition of the victim. Plumage condition may therefore depend on both the direct genetic effect of an individual itself and the indirect genetic effects of its group mates. Here, we present estimated genetic parameters for direct and indirect effects on the plumage condition of different body regions in two purebred layer lines (W1 and WB), and estimates of genetic correlations between body regions. Feather condition scores (FCS) were recorded at 40 weeks of age for the neck, back, rump and belly, and these four scores were summed into a total FCS. A classical animal model and a direct-indirect effects model were used to estimate genetic parameters for FCS. In addition, a bivariate model with mortality (0/1) was used to account for mortality before recording FCS. Due to mortality during the first 23 weeks of laying, 5363 (for W1) and 5089 (for WB) FCS records were available. Total heritable variance for FCS ranged from 1.5% to 9.8% when estimated with the classical animal model and from 9.8% to 53.6% with the direct-indirect effects model. The direct-indirect effects model had a significantly higher likelihood. In both lines, 70% to 94% of the estimated total heritable variation in FCS was due to indirect effects. Using a bivariate analysis of FCS and mortality did not affect the estimates of genetic parameters. Genetic correlations were high between adjacent regions for FCS on the neck, back, and rump, but moderate to low between the belly and other regions.
Our results show that 70% to 94% of the heritable variation in FCS relates to indirect effects, indicating that methods of genetic selection that include indirect genetic effects offer perspectives to improve plumage condition in laying hens. This, in turn could reduce a major welfare problem.
Comparison of Field Measurements to Methane Emissions Models at a New Landfill
Due to both technical and economic limitations, estimates of methane emissions from landfills rely primarily on models. While models are easy to implement, there is uncertainty due to the use of parameters that are difficult to validate. The objective of this research was to comp...
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to overfitting. We cast the large-scale covariance matrix estimation problem into a Bayesian hierarchical model framework and introduce dependency between the covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
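The motivation for regularizing large covariance estimates can be seen in a simple oracle-shrinkage sketch; this is a stand-in illustration of the overfitting problem, not the paper's hierarchical Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(8)

# "Large p, small n" regime where the sample covariance badly overfits
p, n = 50, 20
true_cov = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)          # sample covariance (rank-deficient here)
target = np.diag(np.diag(S))         # simple structured shrinkage target

def frob_loss(C):
    return np.linalg.norm(C - true_cov, "fro")

# Oracle sweep over shrinkage intensities: some shrinkage beats none
alphas = np.linspace(0.0, 1.0, 21)
losses = [frob_loss((1 - a) * S + a * target) for a in alphas]
loss_sample, loss_shrunk = losses[0], min(losses)
print(loss_sample, loss_shrunk)
```

The sweep uses the true covariance to pick the shrinkage weight, which real estimators cannot do; the Bayesian hierarchical construction in the paper achieves a comparable bias-variance trade-off from the data alone.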
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on parameters that are measurable in real time. The case of inexact parameter measurements is considered, which is close to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including the sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and the uncertainty induced by inexactly measured parameters. The error dynamics and the original system constitute an uncertain system due to the inconsistencies between the real and measured values of the parameters. The robust estimation of the system states and faults is then achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
Kumar, B Shiva; Venkateswarlu, Ch
2014-08-01
The complex nature of biological reactions in biofilm reactors often poses difficulties in analyzing such reactors experimentally. Mathematical models could be very useful for their design and analysis. However, the application of biofilm reactor models to practical problems proves somewhat ineffective due to the lack of accurate kinetic models and uncertainty in model parameters. In this work, we propose an inverse modeling approach based on tabu search (TS) to estimate the parameters of kinetic and film thickness models. TS is used to estimate these parameters by validating the mathematical models of the process against measured data obtained from an experimental fixed-bed anaerobic biofilm reactor treating pharmaceutical industry wastewater. The results, evaluated for different modeling configurations of varying degrees of complexity, illustrate the effectiveness of TS for accurate estimation of the kinetic and film thickness model parameters of the biofilm process. The results show that a two-dimensional mathematical model with Edwards kinetics (with optimum parameters μ_max ρ_s/Y = 24.57, K_s = 1.352 and K_i = 102.36) and a three-parameter film thickness expression (with estimated parameters a = 0.289 × 10⁻⁵, b = 1.55 × 10⁻⁴ and c = 15.2 × 10⁻⁶) best describes the biofilm reactor treating the industry wastewater.
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has received extensive attention due to its high subjective image quality and low bit rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual perception. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives error criteria for the motion parameters. The facial motion model comprises three parts: the global 3-D rigid motion of the head, non-rigid translational motion in the jaw area, and local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth, and the number of feature points is adjusted adaptively. The jaw translational motion is tracked through changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and an error function of the contour transition-turn rate used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
Estimating parameters of hidden Markov models based on marked individuals: use of robust design data
Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun
2012-01-01
Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).
SCoPE: an efficient method of Cosmological Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
The Markov chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching that allows an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One current research interest in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess how well the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis on the one hand help us understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
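The delayed-rejection idea can be illustrated on a toy target. The sketch below uses one extra stage: a rejected bold proposal is followed by a cautious one. Note that the second-stage acceptance ratio here is simplified (the full delayed-rejection correction, which SCoPE would need for exact detailed balance, includes the stage-1 rejection probabilities); the target and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):  # toy posterior: standard 2-D Gaussian
    return -0.5 * np.dot(x, x)

# Metropolis sampler with one stage of delayed rejection: if the first
# (bold) proposal is rejected, a second, smaller step is tried before
# giving up, which raises the per-iteration acceptance rate.
x = np.zeros(2)
chain, accepted, n_iter = [], 0, 5000
for _ in range(n_iter):
    y1 = x + rng.normal(scale=2.0, size=2)          # stage 1: bold move
    if rng.uniform() < min(1.0, np.exp(log_post(y1) - log_post(x))):
        x, accepted = y1, accepted + 1
    else:
        y2 = x + rng.normal(scale=0.5, size=2)      # stage 2: cautious move
        # Simplified second-stage ratio (full DR correction omitted).
        if rng.uniform() < min(1.0, np.exp(log_post(y2) - log_post(x))):
            x, accepted = y2, accepted + 1
    chain.append(x.copy())

chain = np.array(chain)
print(accepted / n_iter > 0.3, abs(chain[1000:].mean()) < 0.3)
```

The second stage roughly doubles the acceptance rate relative to the bold proposal alone, which is the mechanism SCoPE exploits.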
Yoshida, Keiichiro; Nishidate, Izumi; Ishizuka, Tomohiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu
2015-05-01
In order to estimate multispectral images of the absorption and scattering properties in the cerebral cortex of in vivo rat brain, we investigated spectral reflectance images reconstructed by the Wiener estimation method using a digital RGB camera. A Monte Carlo simulation-based multiple regression analysis for the corresponding spectral absorbance images at nine wavelengths (500, 520, 540, 560, 570, 580, 600, 730, and 760 nm) was then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin were estimated as the absorption parameters, whereas the coefficient a and the exponent b of the reduced scattering coefficient spectrum approximated by a power law function were estimated as the scattering parameters. The spectra of absorption and reduced scattering coefficients were reconstructed from the absorption and scattering parameters, and the spectral images of absorption and reduced scattering coefficients were then estimated. In order to confirm the feasibility of this method, we performed in vivo experiments on exposed rat brain. The estimated images of the absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of the reduced scattering coefficients had a broad scattering spectrum, exhibiting a larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. The changes in the estimated absorption and scattering parameters during normoxia, hyperoxia, and anoxia indicate the potential applicability of the method by which to evaluate the pathophysiological conditions of in vivo brain due to the loss of tissue viability.
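The Wiener estimation step maps a 3-channel camera response back to a full spectrum using prior spectral statistics: W = K_r Sᵀ (S K_r Sᵀ)⁻¹, where S holds the camera sensitivities and K_r is the a priori correlation matrix of reflectance spectra. The sketch below uses synthetic sensitivities and smooth random training spectra in place of the paper's measured quantities (noise term omitted).

```python
import numpy as np

rng = np.random.default_rng(2)

# Wiener estimation of spectral reflectance from RGB camera responses.
n_wl = 31                                    # number of wavelength samples
S = rng.uniform(size=(3, n_wl))              # stand-in camera sensitivities
train = np.array([np.convolve(rng.uniform(size=n_wl),
                              np.ones(7) / 7, mode="same")
                  for _ in range(200)])      # smooth random training spectra
K_r = train.T @ train / len(train)           # prior correlation matrix
W = K_r @ S.T @ np.linalg.inv(S @ K_r @ S.T)  # Wiener matrix (31 x 3)

r_true = np.convolve(rng.uniform(size=n_wl), np.ones(7) / 7, mode="same")
rgb = S @ r_true                             # simulated camera response
r_est = W @ rgb                              # estimated reflectance spectrum
print(r_est.shape)
```

In the noise-free case the estimate exactly reproduces the camera response (S r_est = rgb), while the spectral shape between channels is filled in by the prior statistics in K_r.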
NASA Astrophysics Data System (ADS)
Yu, Wenwu; Cao, Jinde
2007-09-01
Parameter identification of dynamical systems from time series has received increasing interest due to its wide applications in secure communication, pattern recognition, neural networks, and so on. Given the driving system, parameters can be estimated from the time series by using an adaptive control algorithm. Recently, it has been reported that for some stable systems the parameters are difficult to identify [Li et al., Phys. Lett. A 333, 269-270 (2004); Remark 5 in Yu and Cao, Physica A 375, 467-482 (2007); and Li et al., Chaos 17, 038101 (2007)]. In this paper, the question of whether parameters can be identified from time series is investigated. Through detailed analyses, we discuss why the parameters of stable systems can hardly be estimated. Some interesting examples are given to verify the proposed analysis.
Spatially-explicit estimation of Wright's neighborhood size in continuous populations
Andrew J. Shirk; Samuel A. Cushman
2014-01-01
Effective population size (Ne) is an important parameter in conservation genetics because it quantifies a population's capacity to resist loss of genetic diversity due to inbreeding and drift. The classical approach to estimate Ne from genetic data involves grouping sampled individuals into discretely defined subpopulations assumed to be panmictic. Importantly,...
NASA Astrophysics Data System (ADS)
Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.
2017-09-01
Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface-subsurface hydrological-thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. 
We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and the benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to the prediction of hydrological-thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content, or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. The joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC there with confidence. However, if the soil porosity is functionally related to the OC and mineral content, as is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth decreases markedly. Our study documents the value of the new surface-subsurface, deterministic-stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and the associated hydrological-thermal dynamics.
A note on variance estimation in random effects meta-regression.
Sidik, Kurex; Jonkman, Jeffrey N
2005-01-01
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
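The working-covariance idea compared in this note can be sketched numerically: fit a weighted meta-regression, then form both the model-based variance (which trusts the weights) and the robust sandwich variance (which treats them as a working covariance only). The data below are synthetic and the weights are simple inverse variances; this illustrates the estimators, not the note's specific analysis or the Knapp-Hartung correction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Weighted meta-regression y_i = x_i' beta + e_i with inverse-variance
# weights, comparing model-based and robust (sandwich) variances.
k = 20
X = np.column_stack([np.ones(k), rng.normal(size=k)])  # intercept + covariate
v = rng.uniform(0.5, 2.0, size=k)          # (estimated) within-study variances
y = X @ np.array([0.3, 0.1]) + rng.normal(scale=np.sqrt(v))
W = np.diag(1.0 / v)

bread = np.linalg.inv(X.T @ W @ X)
beta = bread @ X.T @ W @ y                 # WLS estimate
resid = y - X @ beta
var_model = bread                          # valid only if weights are correct
meat = X.T @ W @ np.diag(resid**2) @ W @ X
var_robust = bread @ meat @ bread          # sandwich: robust to weight error

print(beta.shape, var_robust.shape)
```

When the weights are misspecified, `var_model` can be badly biased while `var_robust` remains consistent, which is the trade-off the simulation study in the note quantifies.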
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.
A seasonal Bartlett-Lewis Rectangular Pulse model
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter
2016-04-01
Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate these series, and it has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. Because certain model parameters vary with season in a midlatitude moderate climate, due to the different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of the parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
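The harmonic parameterisation can be sketched directly: a positive model parameter is written as the exponential of a first-order harmonic in the day of year, so one coefficient triple replaces twelve monthly estimates. The coefficients and the exponential link below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Seasonal variation of a BLRPM parameter described by a first-order
# harmonic, lambda(d) = exp(a0 + a1*cos(2*pi*d/365.25) + b1*sin(...)),
# estimated once for the whole year instead of per season or month.
def seasonal_param(day, a0, a1, b1):
    w = 2.0 * np.pi * day / 365.25
    return np.exp(a0 + a1 * np.cos(w) + b1 * np.sin(w))

days = np.arange(365)
lam = seasonal_param(days, a0=-3.0, a1=0.5, b1=0.2)
print(lam.min() > 0.0)  # True: the exponential link keeps rates positive
```

Higher-order harmonics can be added term by term if an annual cycle alone is too smooth for a given parameter.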
NASA Astrophysics Data System (ADS)
Harvey, Richard Paul, III
Releases of radioactive material have occurred at various Department of Energy (DOE) weapons facilities and facilities associated with the nuclear fuel cycle in the generation of electricity. Many different radionuclides have been released to the environment with resulting exposure of the population to these various sources of radioactivity. Radioiodine has been released from a number of these facilities and is a potential public health concern due to its physical and biological characteristics. Iodine exists as various isotopes, but our focus is on 131I due to its relatively long half-life, its prevalence in atmospheric releases and its contribution to offsite dose. The assumption of physical and chemical form is speculated to have a profound impact on the deposition of radioactive material within the respiratory tract. In the case of iodine, it has been shown that more than one type of physical and chemical form may be released to, or exist in, the environment; iodine can exist as a particle or as a gas. The gaseous species can be further segregated based on chemical form: elemental, inorganic, and organic iodides. Chemical compounds in each class are assumed to behave similarly with respect to biochemistry. Studies at Oak Ridge National Laboratories have demonstrated that 131I is released as a particulate, as well as in elemental, inorganic and organic chemical form. The internal dose estimate from 131I may be very different depending on the effect that chemical form has on fractional deposition, gas uptake, and clearance in the respiratory tract. There are many sources of uncertainty in the estimation of environmental dose including source term, airborne transport of radionuclides, and internal dosimetry. Knowledge of uncertainty in internal dosimetry is essential for estimating dose to members of the public and for determining total uncertainty in dose estimation. 
An important calculational step in any lung model is the regional estimation of deposition fractions and gas uptake of radionuclides in various regions of the lung. Variability in regional radionuclide deposition within lung compartments may contribute significantly to the overall uncertainty of the lung model. The uncertainty of lung deposition and biological clearance depends on physiological and anatomical parameters of individuals as well as on characteristic parameters of the particulate material. These parameters introduce uncertainty into internal dose estimates due to their inherent variability, and the anatomical and physiological input parameters are age- and gender-dependent. This work determines the uncertainty in internal dose estimates and identifies the sensitive parameters involved in modeling particulate deposition and gas uptake of different physical and chemical forms of 131I, with age and gender dependencies.
A history of presatellite investigations of the earth's radiation budget
NASA Technical Reports Server (NTRS)
Hunt, G. E.; Kandel, R.; Mecherikunnel, A. T.
1986-01-01
The history of radiation budget studies from the early twentieth century to the advent of the space age is reviewed. By the beginning of the 1960's, accurate radiative models had been developed capable of estimating the global and zonally averaged components of the radiation budget, though great uncertainty in the derived parameters existed due to inaccuracy of the data describing the physical parameters used in the models, associated with clouds, the solar radiation, and the gaseous atmospheric absorbers. Over the century, estimates of the planetary albedo had been reduced from 89 to 30 percent.
NASA Technical Reports Server (NTRS)
Ocasio, W. C.; Rigney, D. R.; Clark, K. P.; Mark, R. G.; Goldberger, A. L. (Principal Investigator)
1993-01-01
We describe the theory and computer implementation of a newly-derived mathematical model for analyzing the shape of blood pressure waveforms. Input to the program consists of an ECG signal, plus a single continuous channel of peripheral blood pressure, which is often obtained invasively from an indwelling catheter during intensive-care monitoring or non-invasively from a tonometer. Output from the program includes a set of parameter estimates, made for every heart beat. Parameters of the model can be interpreted in terms of the capacitance of large arteries, the capacitance of peripheral arteries, the inertance of blood flow, the peripheral resistance, and arterial pressure due to basal vascular tone. Aortic flow due to contraction of the left ventricle is represented by a forcing function in the form of a descending ramp, the area under which represents the stroke volume. Differential equations describing the model are solved by the method of Laplace transforms, permitting rapid parameter estimation by the Levenberg-Marquardt algorithm. Parameter estimates and their confidence intervals are given in six examples, which are chosen to represent a variety of pressure waveforms that are observed during intensive-care monitoring. The examples demonstrate that some of the parameters may fluctuate markedly from beat to beat. Our program will find application in projects that are intended to correlate the details of the blood pressure waveform with other physiological variables, pathological conditions, and the effects of interventions.
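The diastolic portion of such a waveform model reduces to an exponential decay toward the basal pressure, P(t) = P_b + (P_0 − P_b) exp(−t/RC), whose time constant RC can be recovered by a log-linear fit when P_b is known. This is a simplified noise-free sketch with illustrative values; the paper fits a richer model with the Levenberg-Marquardt algorithm.

```python
import numpy as np

# Diastolic decay of a two-element windkessel: P(t) = Pb + (P0 - Pb)
# * exp(-t / tau), where tau = R*C is the arterial time constant and
# Pb the pressure due to basal vascular tone.
t = np.linspace(0.0, 0.8, 81)                # one diastole, seconds
Pb, P0, tau_true = 30.0, 80.0, 1.2           # mmHg, mmHg, s (illustrative)
P = Pb + (P0 - Pb) * np.exp(-t / tau_true)

# With Pb known, ln(P - Pb) is linear in t with slope -1/tau.
slope, intercept = np.polyfit(t, np.log(P - Pb), 1)
tau_est = -1.0 / slope
print(round(tau_est, 3))  # 1.2
```

With measurement noise or unknown P_b, the linearization breaks down and an iterative least-squares fit per beat, as in the paper, becomes necessary.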
Wave transport in the South Australian Basin
NASA Astrophysics Data System (ADS)
Bye, John A. T.; James, Charles
2018-02-01
The specification of the dynamics of the air-sea boundary layer is of fundamental importance to oceanography. There is a voluminous literature on the subject, however a strong link between the velocity profile due to waves and that due to turbulent processes in the wave boundary layer does not appear to have been established. Here we specify the velocity profile due to the wave field using the Toba spectrum, and the velocity profile due to turbulence at the sea surface by the net effect of slip and wave breaking, in which slip is the dominant process. Under this specification, the inertial coupling of the two fluids for a constant viscosity Ekman layer yields two independent estimates for the frictional parameter (which is a function of the 10 m drag coefficient and the peak wave period) of the coupled system, one of which is due to the surface Ekman current and the other to the peak wave period. We show that the median values of these two estimates, evaluated from a ROMS simulation over the period 2011-2012 at a station on the Southern Shelf in the South Australian Basin, are similar, in strong support of the air-sea boundary layer model. On integrating over the planetary boundary layer we obtain the Ekman transport (w*^2/f) and the wave transport due to a truncated Toba spectrum (w* z_B/κ), where w* is the friction velocity in water, f is the Coriolis parameter, κ is von Karman's constant, and z_B = gT^2/(8π^2) is the depth of wave influence, in which g is the acceleration of gravity and T is the peak wave period. A comparison of daily estimates shows that the wave transports from the truncated Toba spectrum and from the SWAN spectral model are highly correlated (r = 0.82) and that on average the Toba estimates are about 86% of the SWAN estimates due to the omission of the low-frequency tails of the spectra, although for wave transports less than about 0.5 m^2 s^-1 the estimates are almost equal.
In the South Australian Basin the Toba wave transport is on average about 42% of the Ekman transport.
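The two transports compared above are simple closed-form expressions and can be evaluated directly. The input values below (friction velocity, Coriolis parameter, peak period) are illustrative, not values from the ROMS simulation.

```python
import numpy as np

# Planetary-boundary-layer transports from the text: Ekman transport
# w*^2/f and wave transport w* z_B / kappa, with depth of wave
# influence z_B = g T^2 / (8 pi^2).
g, kappa = 9.81, 0.4
f = -8.3e-5          # Coriolis parameter at ~35 deg S (1/s)
w_star = 0.01        # friction velocity in water (m/s)
T = 10.0             # peak wave period (s)

z_B = g * T**2 / (8.0 * np.pi**2)   # depth of wave influence (m)
ekman = w_star**2 / abs(f)          # Ekman transport (m^2/s)
wave = w_star * z_B / kappa         # wave transport (m^2/s)
print(round(z_B, 2), round(ekman, 3), round(wave, 3))
```

For these illustrative inputs the depth of wave influence is about 12 m, of the same order as the transports' ratio reported for the Basin would suggest.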
Jespersen, Sune N.; Bjarkam, Carsten R.; Nyengaard, Jens R.; Chakravarty, M. Mallar; Hansen, Brian; Vosegaard, Thomas; Østergaard, Leif; Yablonskiy, Dmitriy; Nielsen, Niels Chr.; Vestergaard-Poulsen, Peter
2010-01-01
Due to its unique sensitivity to tissue microstructure, diffusion-weighted magnetic resonance imaging (MRI) has found many applications in clinical and fundamental science. With few exceptions, a more precise correspondence between physiological or biophysical properties and the obtained diffusion parameters remain uncertain due to lack of specificity. In this work, we address this problem by comparing diffusion parameters of a recently introduced model for water diffusion in brain matter to light microscopy and quantitative electron microscopy. Specifically, we compare diffusion model predictions of neurite density in rats to optical myelin staining intensity and stereological estimation of neurite volume fraction using electron microscopy. We find that the diffusion model describes data better and that its parameters show stronger correlation with optical and electron microscopy, and thus reflect myelinated neurite density better than the more frequently used diffusion tensor imaging (DTI) and cumulant expansion methods. Furthermore, the estimated neurite orientations capture dendritic architecture more faithfully than DTI diffusion ellipsoids. PMID:19732836
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedures via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
CosmoSIS: Modular cosmological parameter estimation
Zuntz, J.; Paterno, M.; Jennings, E.; ...
2015-06-09
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore, such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with clearly defined inputs and outputs. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance the development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out of the box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. 
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameters in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
[Atmospheric parameter estimation for LAMOST/GUOSHOUJING spectra].
Lu, Yu; Li, Xiang-Ru; Yang, Tan
2014-11-01
Estimating atmospheric parameters from observed stellar spectra is a key task in exploring the nature of stars and the universe. The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), which began its formal sky survey in September 2012, is obtaining a mass of stellar spectra at an unprecedented speed. This brings both new opportunities and challenges for galactic research. Due to the complexity of the observing system, the noise in the spectra is relatively large. At the same time, the preprocessing procedures, such as wavelength calibration and flux calibration, are also not ideal, so the spectra are slightly distorted. Together, these effects make it difficult to estimate the atmospheric parameters of the measured stellar spectra, and doing so for the massive LAMOST spectral data set is an important open issue. The key of this study is how to suppress noise and improve the accuracy and robustness of atmospheric parameter estimation for the measured stellar spectra. We propose a regression model, SVM(lasso), for estimating the atmospheric parameters of LAMOST stellar spectra. The basic idea of this model is as follows. First, we use the Haar wavelet to filter the spectrum, suppressing the adverse effects of spectral noise while retaining the most discriminative information in the spectrum. Second, we use the lasso algorithm for feature selection, extracting the features most strongly correlated with the atmospheric parameters. Finally, the features are input to a support vector regression model to estimate the parameters. Because the model tolerates slight distortion and noise in the spectrum well, measurement accuracy is improved. To evaluate the feasibility of the above scheme, we conducted extensive experiments on 33,963 pilot-survey spectra from LAMOST.
The accuracies of the three atmospheric parameters are log Teff: 0.0068 dex, log g: 0.1551 dex, and [Fe/H]: 0.1040 dex.
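The Haar-wavelet denoising step described above can be sketched with a one-level transform and soft thresholding; the toy spectrum, noise level, and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def haar_denoise(spectrum, threshold):
    """One-level Haar wavelet shrinkage: keep the smooth trend, soft-threshold the detail."""
    n = len(spectrum) // 2 * 2
    x = np.asarray(spectrum[:n], dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass, noise-dominated
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # inverse one-level Haar transform
    y = np.empty(n)
    y[0::2] = (approx + detail) / np.sqrt(2.0)
    y[1::2] = (approx - detail) / np.sqrt(2.0)
    return y

# toy "spectrum": smooth continuum plus pixel noise
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 3.0, 256))
noisy = clean + rng.normal(0.0, 0.05, 256)
denoised = haar_denoise(noisy, threshold=0.1)
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

The thresholded detail band removes roughly half the pixel-noise variance while barely touching the smooth continuum, which is the property the SVM(lasso) pipeline relies on before feature selection.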
Estimation of line dimensions in 3D direct laser writing lithography
NASA Astrophysics Data System (ADS)
Guney, M. G.; Fedder, G. K.
2016-10-01
Two-photon polymerization (TPP) based 3D direct laser writing (3D-DLW) finds application in research areas ranging from photonic and mechanical metamaterials to micro-devices. The most common structures are either single lines or sets of interconnected lines, as in the case of crystals. In order to increase the fidelity of these structures and reach the ultimate resolution, the laser power and scan speed used in the writing process must be chosen carefully. However, optimizing these writing parameters is an iterative and time-consuming process in the absence of a model for estimating line dimensions. To this end, we report a semi-empirical analytic model obtained through simulations and fitting, and demonstrate that it estimates the line dimensions mostly within one standard deviation of the average values over a wide range of laser power and scan speed combinations. The model delimits the onset of micro-explosions in the photoresist due to over-exposure and of low degree of conversion due to under-exposure. The model guides the setting of high-fidelity and robust writing parameters for a photonic crystal structure without iteration and in close agreement with the estimated line dimensions. The proposed methodology is generalizable by adapting the model coefficients to any 3D-DLW setup and corresponding photoresist as a means to estimate line dimensions for tuning the writing parameters.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained while stationary targets are removed. A constant false alarm rate (CFAR) detector then detects the moving targets. The ground-range and cross-range velocities of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the cross-range shift is estimated from the Doppler frequency center (DFC), which is itself estimated using the Wigner-Ville distribution (WVD). Because the range and cross-range positions before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that our algorithms perform well and estimate the moving target parameters accurately.
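The two-look cancellation idea can be illustrated on a toy image pair in which an identical stationary scene cancels exactly and only the displaced mover survives a fixed threshold (the scene, target amplitude, and threshold are invented for illustration; a real CFAR detector would estimate the threshold locally from the clutter):

```python
import numpy as np

# Toy two-look change detection: stationary clutter cancels, the mover remains.
rng = np.random.default_rng(1)
clutter = rng.normal(0.0, 1.0, (64, 64))        # stationary scene, identical in both looks
look1 = clutter.copy()
look2 = clutter.copy()
look1[20, 30] += 8.0                            # moving point target at one position...
look2[20, 34] += 8.0                            # ...shifted cross-range in the second look
diff = look1 - look2                            # stationary background cancels exactly here
detections = np.argwhere(np.abs(diff) > 5.0)    # crude fixed-threshold stand-in for CFAR
```

The pixel offset between the two detections (4 cells here) is the position shift from which the velocity components would be derived.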
The effects of clutter-rejection filtering on estimating weather spectrum parameters
NASA Technical Reports Server (NTRS)
Davis, W. T.
1989-01-01
The effects of clutter-rejection filtering on estimating weather parameters from pulse Doppler radar measurement data are investigated. The pulse-pair method of estimating the spectrum mean and spectrum width of the weather is emphasized. The loss of sensitivity, a measure of the signal power lost due to filtering, is also considered. A flexible software tool developed to investigate these effects is described. It allows for simulated weather radar data, in which the user specifies an underlying truncated Gaussian spectrum, as well as for externally generated data, which may be real or simulated. The filter may be implemented in either the time or the frequency domain. The software tool is validated by comparing unfiltered spectrum mean and width estimates to their true values, and by reproducing previously published results. The effects on the weather parameter estimates using simulated weather-only data are evaluated for five filters: an ideal filter, two infinite impulse response filters, and two finite impulse response filters. Results considering external data, consisting of weather and clutter data, are evaluated on a range-cell-by-range-cell basis. Finally, it is shown theoretically and by computer simulation that a linear phase response is not required for a clutter-rejection filter preceding pulse-pair parameter estimation.
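The pulse-pair spectrum-mean estimator emphasized here follows from the phase of the lag-1 autocorrelation; a minimal sketch on a noise-free single-target return (wavelength, PRT, and velocity are illustrative, and the sign convention varies between radars):

```python
import numpy as np

def pulse_pair_velocity(z, prt, wavelength):
    """Pulse-pair mean-velocity estimate from the lag-1 autocorrelation of the
    complex return z; sign convention varies between systems."""
    r1 = np.mean(np.conj(z[:-1]) * z[1:])        # lag-1 autocorrelation estimate
    return wavelength * np.angle(r1) / (4.0 * np.pi * prt)

# toy Doppler return: single scatterer at +10 m/s, 10 cm wavelength, 1 ms PRT
wavelength, prt, v_true = 0.10, 1e-3, 10.0
f_d = 2.0 * v_true / wavelength                  # Doppler frequency, 200 Hz
n = np.arange(128)
z = np.exp(2j * np.pi * f_d * n * prt)
v_est = pulse_pair_velocity(z, prt, wavelength)
```

The spectrum width would come from the ratio |R(1)|/R(0) in the same framework; a clutter filter alters both quantities, which is the effect the study quantifies.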
Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime
NASA Astrophysics Data System (ADS)
Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay
2017-06-01
We consider the main factors that affect underground water flow, including aquifer supply, collector state, and the passage of seismic waves from distant earthquakes. Under geodynamically stable conditions, changes in underground inflow can significantly distort the hydrogeological response to Earth tides, leading to incorrect estimates of the phase shift between the tidal harmonics of ground displacement and of water-level variations in a wellbore. Besides an original approach to phase-shift estimation that yields one value per day for the semidiurnal M2 wave, we offer an empirical method for excluding periods that are strongly affected by high inflow. In spite of rather strong ground motion during the passage of earthquake waves, we did not observe a corresponding phase-shift change against the background of significant recurrent variations due to fluctuating inflow. Though inflow variations are not the only important factor that must be taken into consideration when performing phase-shift analysis, permeability estimation is not adequate without a correction based on background alterations of aquifer parameters due to natural and anthropogenic causes.
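The phase shift between a tidal harmonic of ground displacement and the water-level response can be read from the cross-spectrum at the M2 frequency; a minimal sketch on synthetic hourly series with an assumed 15° lag (amplitudes, record length, and the lag itself are illustrative):

```python
import numpy as np

# Synthetic M2 forcing and a lagged well response, sampled hourly as in the data.
m2_period_h = 12.4206                      # M2 period in hours
t = np.arange(0.0, 60 * 24, 1.0)           # 60 days of hourly samples
f_m2 = 1.0 / m2_period_h                   # cycles per hour
lag_deg = -15.0                            # assumed phase lag of the water level
tide = np.cos(2 * np.pi * f_m2 * t)
level = 0.5 * np.cos(2 * np.pi * f_m2 * t + np.deg2rad(lag_deg))

# cross-spectral phase at the FFT bin nearest the M2 frequency
spec_t = np.fft.rfft(tide)
spec_l = np.fft.rfft(level)
freqs = np.fft.rfftfreq(t.size, d=1.0)     # cycles per hour
k = int(np.argmin(np.abs(freqs - f_m2)))
phase_deg = float(np.degrees(np.angle(spec_l[k] * np.conj(spec_t[k]))))
```

Because both series carry the same off-bin leakage kernel, the cross-spectral product recovers the lag to a fraction of a degree even though M2 does not align with an FFT bin; the daily one-value-per-day estimate in the paper addresses the same quantity on short windows.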
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the published parameter values are almost unknown. Additionally, the platforms used for parameter identification, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for parameter estimation, as they are dynamic in behaviour by nature and allow repeatable behaviour to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: the maximum uptake rate (k_m) and the half-saturation concentration (K_S). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). By interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters and three cycles for the ethanol parameters. The parameters found performed well in the short term and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence equivalent to that of a weak base (possibly CaCO3).
Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are the heavy computational requirements for multiple cycles and the difficulty of establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
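The Monod uptake term whose parameters (k_m, K_S) the study estimates can be sketched with a simple forward-Euler integration; the substrate, biomass, and parameter values below are illustrative, not the winery-wastewater estimates:

```python
import numpy as np

def simulate_monod(s0, x, k_m, k_s, t_end, dt=0.001):
    """Substrate S consumed by biomass X at the Monod rate k_m * X * S / (K_S + S),
    integrated with forward Euler (biomass held constant for simplicity)."""
    s = s0
    for _ in range(int(t_end / dt)):
        uptake = k_m * x * s / (k_s + s)
        s = max(s - uptake * dt, 0.0)
    return s

# illustrative values: 2 g/L substrate, unit biomass, one day of reaction
s_final = simulate_monod(s0=2.0, x=1.0, k_m=5.0, k_s=0.5, t_end=1.0)
```

Fitting k_m and K_S jointly to a measured concentration series like this one is exactly where the correlated confidence regions reported in the study arise: near-saturating data constrain k_m but leave K_S poorly identified.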
Chen, Siyuan; Epps, Julien
2014-12-01
Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, to date, such methods have required manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, applicable to any infrared-illuminated eye image without a tuning parameter, is proposed for segmenting the pupil from background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy of the proposed methods is higher than that of widely used manually tuned or fixed-parameter methods. Importantly, the approach is convenient and robust for accurate, fast estimation of eye activity in the presence of variations due to different users, task types, loads, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation, which requires no prior threshold calibration. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient real-time adaptive aiding and task management applications in the future.
Cosmological perturbation effects on gravitational-wave luminosity distance estimates
NASA Astrophysics Data System (ADS)
Bertacca, Daniele; Raccanelli, Alvise; Bartolo, Nicola; Matarrese, Sabino
2018-06-01
Waveforms of gravitational waves provide information about a variety of parameters of the merging binary system. However, standard calculations have been performed assuming an FLRW universe with no perturbations. In reality this assumption should be dropped: we show that including cosmological perturbations introduces corrections to the estimates of astrophysical parameters derived for merging binary systems. We compute corrections to the luminosity distance estimate due to velocity, volume, lensing, and gravitational potential effects. Our results show that the amplitude of the corrections will be negligible for current instruments, mildly important for experiments like the planned DECIGO, and very important for future ones such as the Big Bang Observer.
Improvements in aircraft extraction programs
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.; Maine, R. E.
1976-01-01
Flight data from an F-8 Corsair and a Cessna 172 were analyzed to demonstrate specific improvements in the LRC parameter extraction computer program. The Cramer-Rao bounds were shown to provide a satisfactory relative measure of the goodness of parameter estimates. They were not used as an absolute measure due to an inherent uncertainty within a multiplicative factor, traced in turn to the uncertainty in the noise bandwidth in the statistical theory of parameter estimation. The measure was also derived on an entirely nonstatistical basis, thereby also yielding an interpretation of the significance of the off-diagonal terms in the dispersion matrix. The distinction between linear and nonlinear coefficients was shown to be important in its implications for a recommended order of parameter iteration. Techniques for improving convergence in general were developed and tested on flight data. In particular, an easily implemented modification incorporating a gradient search was shown to improve initial estimates and thus remove a common cause of lack of convergence.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data; unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the parameters. An analysis of actuator behavior over time, using the transfer function models to assess the effects of wear and aerodynamic load, is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in the design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
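Fitting transfer function parameters to frequency-response data can be sketched for a second-order actuator model. For simplicity this uses a linear least-squares rearrangement of 1/H on noise-free data rather than the report's quasi-Newton minimization of weighted response error, and the natural frequency and damping values are invented:

```python
import numpy as np

# Second-order actuator model H(s) = wn^2 / (s^2 + 2*z*wn*s + wn^2),
# evaluated on a frequency grid to stand in for measured response data.
wn_true, z_true = 2 * np.pi * 20.0, 0.6         # hypothetical actuator parameters
w = 2 * np.pi * np.linspace(1.0, 50.0, 40)      # rad/s evaluation grid
s = 1j * w
h = wn_true**2 / (s**2 + 2 * z_true * wn_true * s + wn_true**2)

# 1/H - 1 = (2z/wn) s + (1/wn^2) s^2 : linear in a1 = 2z/wn and a2 = 1/wn^2,
# so stacking real and imaginary parts gives an ordinary least-squares problem.
rhs = 1.0 / h - 1.0
basis = np.column_stack([s, s**2])
coef, *_ = np.linalg.lstsq(
    np.vstack([basis.real, basis.imag]),
    np.concatenate([rhs.real, rhs.imag]),
    rcond=None,
)
a1, a2 = coef
wn_est = 1.0 / np.sqrt(a2)
z_est = a1 * wn_est / 2.0
```

With noisy measured responses the weighted nonlinear minimization of the report is more appropriate; this linearization merely shows how the parameters are encoded in the frequency response.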
Applying spectral data analysis techniques to aquifer monitoring data in Belvoir Ranch, Wyoming
NASA Astrophysics Data System (ADS)
Gao, F.; He, S.; Zhang, Y.
2017-12-01
This study uses spectral data analysis techniques to estimate hydraulic parameters from water-level fluctuations due to tidal and barometric effects. All water-level data used in this study were collected in Belvoir Ranch, Wyoming. The tide effect can be observed not only in coastal areas but also in inland confined aquifers: the force caused by the changing positions of the sun and moon affects not only the ocean but also the solid earth. The tide effect applies an oscillatory pumping or injection sequence to the aquifer and can be observed with dense water-level monitoring. The Belvoir Ranch data are collected once per hour and are thus dense enough to capture the tide effect. First, the de-trended data are transformed from the temporal domain to the frequency domain with the Fourier transform. Then, the storage coefficient is estimated using the Bredehoeft-Jacob model. Next, the gain function, which expresses the amplification and attenuation of the output signal, is analyzed to derive the barometric efficiency, and the effective porosity is found from the storage coefficient and barometric efficiency with Jacob's model. Finally, aquifer transmissivity and hydraulic conductivity are estimated using Paul Hsieh's method. The estimated hydraulic parameters are compared with those from traditional pumping-test estimation. This study shows that hydraulic parameters can be estimated by analyzing only water-level data in the frequency domain. The approach has the advantages of low cost and environmental friendliness, and should therefore be considered for future hydraulic parameter estimation.
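The gain-function step, from which barometric efficiency is derived, can be sketched on synthetic hourly data: the spectral gain between barometric pressure and water level at the diurnal bin recovers the assumed efficiency (all amplitudes, noise levels, and the 0.4 efficiency are illustrative):

```python
import numpy as np

# Synthetic co-recorded series: a diurnal barometric wave and a water level
# that responds with an assumed barometric efficiency of 0.4.
rng = np.random.default_rng(2)
t = np.arange(0.0, 30 * 24, 1.0)                       # 720 hourly samples, 30 days
baro = 5.0 * np.sin(2 * np.pi * t / 24.0) + rng.normal(0.0, 0.2, t.size)
be_true = 0.4
level = -be_true * baro + rng.normal(0.0, 0.05, t.size)  # level falls as pressure rises

# gain function evaluated at the 24 h wave: bin 30 of a 720-sample record
spec_b = np.fft.rfft(baro)
spec_l = np.fft.rfft(level)
k = 30                                                 # 720 samples / 24 h period = 30 cycles
gain = float(np.abs(spec_l[k]) / np.abs(spec_b[k]))
```

In the real workflow the gain is examined across a frequency band, and the tidal bins (near M2) are treated separately with the Bredehoeft-Jacob model to obtain the storage coefficient.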
A Note on the Specification of Error Structures in Latent Interaction Models
ERIC Educational Resources Information Center
Mao, Xiulin; Harring, Jeffrey R.; Hancock, Gregory R.
2015-01-01
Latent interaction models have motivated a great deal of methodological research, mainly in the area of estimating such models. Product-indicator methods have been shown to be competitive with other methods of estimation in terms of parameter bias and standard error accuracy, and their continued popularity in empirical studies is due, in part, to…
Robust estimation procedure in panel data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah
2014-06-19
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take the presence of cross-sectional dependence in the panel into consideration, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard for measuring the AIF, it is usually not preferred as it is invasive. An alternative is the simultaneous estimation method (SIME), in which the physiological parameters and the AIF are estimated together using information from different anatomical regions. Due to the large number of parameters in its optimisation, SIME is computationally complex and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously acquired DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from the MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
TRUE MASSES OF RADIAL-VELOCITY EXOPLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Robert A., E-mail: rbrown@stsci.edu
We study the task of estimating the true masses of known radial-velocity (RV) exoplanets by means of direct astrometry on coronagraphic images to measure the apparent separation between exoplanet and host star. Initially, we assume perfect knowledge of the RV orbital parameters and that all errors are due to photon statistics. We construct design reference missions (DRMs) for four missions currently under study at NASA: EXO-S and WFIRST-S, with external star shades for starlight suppression, and EXO-C and WFIRST-C, with internal coronagraphs. These DRMs reveal extreme scheduling constraints due to the combination of solar and anti-solar pointing restrictions, photometric and obscurational completeness, image blurring due to orbital motion, and the "nodal effect," which is the independence of apparent separation and inclination when the planet crosses the plane of the sky through the host star. Next, we address the issue of nonzero uncertainties in RV orbital parameters by investigating their impact on the observations of 21 single-planet systems. Except for two (GJ 676 A b and 16 Cyg B b, which are observable only by the star-shade missions), we find that current uncertainties in orbital parameters generally prevent accurate, unbiased estimation of true planetary mass. For the coronagraphs, WFIRST-C and EXO-C, the most likely number of good estimators of true mass is currently zero. For the star shades, EXO-S and WFIRST-S, the most likely numbers of good estimators are three and four, respectively, including GJ 676 A b and 16 Cyg B b. We expect that uncertain orbital elements currently undermine all potential programs of direct imaging and spectroscopy of RV exoplanets.
Measurement of Scattering and Absorption Cross Sections of Dyed Microspheres
Gaigalas, Adolfas K; Choquette, Steven; Zhang, Yu-Zhong
2013-01-01
Measurements of absorbance and fluorescence emission were carried out on aqueous suspensions of polystyrene (PS) microspheres with a diameter of 2.5 µm using a spectrophotometer with an integrating sphere detector. The apparatus and the principles of measurement were described in our earlier publications. Microspheres with and without green BODIPY® dye were measured. Placing the suspension inside the integrating sphere (IS) detector of the spectrophotometer yielded (after a correction for fluorescence emission) the absorbance (called A in the text) due to absorption by the BODIPY® dye inside the microspheres. An estimate of the absorbance due to scattering alone was obtained by subtracting the corrected BODIPY® dye absorbance (A) from the measured absorbance of a suspension placed outside the IS detector (called A1 in the text). The absorption of the BODIPY® dye inside the microspheres was analyzed using an imaginary index of refraction parameterized with three Gaussian-Lorentzian functions. The Kramers-Kronig relation was used to estimate the contribution of the BODIPY® dye to the real part of the microsphere index of refraction. The complex index of refraction, obtained from the analysis of A, was used to analyze the absorbance due to scattering (A1 - A in the text). In practice, the analyses of the scattering absorbance A1 - A and of the absorbance A were carried out iteratively. It was assumed that A depended primarily on the imaginary part of the microsphere index of refraction, with the other parameters playing a secondary role. Therefore A was first analyzed using values of the other parameters obtained from a fit to the absorbance due to scattering, A1 - A, with the imaginary part neglected. The imaginary part obtained from the analysis of A was then used to reanalyze A1 - A and obtain better estimates of the other parameters.
After a few iterations, consistent estimates were obtained of the scattering and absorption cross sections in the wavelength region 300 nm to 800 nm. PMID:26401422
Wendell R. Haag
2009-01-01
There may be bias associated with mark-recapture experiments used to estimate age and growth of freshwater mussels. Using subsets of a mark-recapture dataset for Quadrula pustulosa, I examined how age and growth parameter estimates are affected by (i) the range and skew of the data and (ii) growth reduction due to handling. I compared predictions...
Multivariate meta-analysis with an increasing number of parameters
Boca, Simina M.; Pfeiffer, Ruth M.; Sampson, Joshua N.
2017-01-01
Summary Meta-analysis can average estimates of multiple parameters, such as a treatment’s effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between-study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing random effects MVMA over fixed-effect MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for Non-Hodgkin Lymphoma. PMID:28195655
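The fixed-effect pooling contrast between MVMA and UVMA can be sketched directly: MVMA weights each study by its full inverse covariance, UVMA uses the diagonal variances only, and the MVMA variance for each component is never larger (the two-study, two-endpoint numbers below are invented):

```python
import numpy as np

# Per-study estimates of p = 2 correlated endpoints with within-study covariances.
y = [np.array([0.9, 1.2]), np.array([1.1, 0.8])]
cov = [np.array([[0.04, 0.03], [0.03, 0.04]]),
       np.array([[0.09, 0.06], [0.06, 0.09]])]

# Fixed-effect MVMA: theta = (sum W_i)^-1 sum W_i y_i with W_i = S_i^-1
w = [np.linalg.inv(c) for c in cov]
theta_mvma = np.linalg.solve(sum(w), sum(wi @ yi for wi, yi in zip(w, y)))
var_mvma = np.diag(np.linalg.inv(sum(w)))

# Fixed-effect UVMA: the same formula per endpoint, ignoring the correlations
w_uv = np.array([1.0 / np.diag(c) for c in cov])       # (studies, p) weights
theta_uvma = (w_uv * np.array(y)).sum(axis=0) / w_uv.sum(axis=0)
var_uvma = 1.0 / w_uv.sum(axis=0)
```

With correlations that are similar across studies, as here, the MVMA gain is small; the abstract's point is how this gain, and the cost of estimating the between-study covariance in the random-effects case, scale with p.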
2012-03-22
…shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included… The basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to its inherent efficiency and error tolerance. Multiple shape dictionaries…
NASA Astrophysics Data System (ADS)
Moeys, J.; Larsbo, M.; Bergström, L.; Brown, C. D.; Coquet, Y.; Jarvis, N. J.
2012-07-01
Estimating pesticide leaching risks at the regional scale requires the ability to completely parameterise a pesticide fate model using only survey data, such as soil and land-use maps. Such parameterisations usually rely on a set of lookup tables and (pedo)transfer functions, relating elementary soil and site properties to model parameters. The aim of this paper is to describe and test a complete set of parameter estimation algorithms developed for the pesticide fate model MACRO, which accounts for preferential flow in soil macropores. We used tracer monitoring data from 16 lysimeter studies, carried out in three European countries, to evaluate the ability of MACRO and this "blind parameterisation" scheme to reproduce measured solute leaching at the base of each lysimeter. We focused on the prediction of early tracer breakthrough due to preferential flow, because this is critical for pesticide leaching. We then calibrated a selected number of parameters in order to assess to what extent the prediction of water and solute leaching could be improved. Our results show that water flow was generally reasonably well predicted (median model efficiency, ME, of 0.42). Although the general pattern of solute leaching was reproduced well by the model, the overall model efficiency was low (median ME = -0.26) due to errors in the timing and magnitude of some peaks. Preferential solute leaching at early pore volumes was also systematically underestimated. Nonetheless, the ranking of soils according to solute loads at early pore volumes was reasonably well estimated (concordance correlation coefficient, CCC, between 0.54 and 0.72). Moreover, we also found that ignoring macropore flow leads to a significant deterioration in the ability of the model to reproduce the observed leaching pattern, and especially the early breakthrough in some soils. 
Finally, the calibration procedure showed that improving the estimation of solute transport parameters is probably more important than the estimation of water flow parameters. Overall, the results are encouraging for the use of this modelling set-up to estimate pesticide leaching risks at the regional-scale, especially where the objective is to identify vulnerable soils and "source" areas of contamination.
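The two goodness-of-fit summaries quoted here, model efficiency (ME, in the Nash-Sutcliffe form) and the concordance correlation coefficient (CCC), can be computed as follows on invented observed/simulated leaching series:

```python
import numpy as np

# Toy observed and simulated solute-leaching series (illustrative values only).
obs = np.array([0.2, 0.5, 1.1, 0.9, 0.4, 0.3])
sim = np.array([0.25, 0.45, 0.95, 1.0, 0.5, 0.2])

# Model efficiency: 1 minus residual variance over observed variance;
# ME = 1 means a perfect fit, ME <= 0 means no better than the observed mean.
me = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Lin's concordance correlation: penalizes poor correlation and location/scale shifts.
cov = np.mean((obs - obs.mean()) * (sim - sim.mean()))
ccc = 2.0 * cov / (obs.var() + sim.var() + (obs.mean() - sim.mean()) ** 2)
```

ME measures pointwise agreement in time (hence the low median ME when peak timing is off), while CCC on ranked loads is what supports the soil-ranking claim.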
Parameter identification of JONSWAP spectrum acquired by airborne LIDAR
NASA Astrophysics Data System (ADS)
Yu, Yang; Pei, Hailong; Xu, Chengzhong
2017-12-01
In this study, we developed the first linearization of the Joint North Sea Wave Project (JONSWAP) spectrum (JS), obtained by transforming the JS expression to the natural logarithmic scale. This transformation is convenient for defining the least-squares objective in terms of the scale and shape parameters. We identified these two wind-dependent parameters to better understand the wind effect on surface waves. Due to its efficiency and high resolution, we employed an airborne Light Detection and Ranging (LIDAR) system for our measurements. In the absence of actual data, we simulated ocean waves in the MATLAB environment, which can easily be translated into an industrial programming language. We utilized the Longuet-Higgins random-phase method to generate time series of wave records and used the fast Fourier transform (FFT) to compute the power spectral density. After validating these procedures, we identified the JS parameters by minimizing the mean-square error between the target spectrum and the spectrum estimated by FFT. We determined that the estimation error depends on the amount of available wave record data. Finally, we found the inverse computation of the wind factors (wind speed and wind fetch length) to be robust and sufficiently precise for wave forecasting.
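Because ln S of the JONSWAP spectrum is linear in ln α and ln γ once the peak frequency is known, the identification step can be sketched with ordinary least squares on a noise-free synthetic target (the peak frequency, grid, and parameter values are illustrative):

```python
import numpy as np

g, wp = 9.81, 0.8                              # gravity, assumed peak frequency (rad/s)

def sigma(w):
    """JONSWAP peak-width parameter: 0.07 below the peak, 0.09 above."""
    return np.where(w <= wp, 0.07, 0.09)

def jonswap(w, alpha, gamma):
    """S(w) = alpha g^2 w^-5 exp(-1.25 (wp/w)^4) gamma^r(w)."""
    r = np.exp(-((w - wp) ** 2) / (2.0 * sigma(w) ** 2 * wp**2))
    return alpha * g**2 / w**5 * np.exp(-1.25 * (wp / w) ** 4) * gamma**r

w = np.linspace(0.4, 2.0, 100)
s_target = jonswap(w, alpha=0.0081, gamma=3.3)  # stand-in for an FFT-estimated spectrum

# ln S = ln(alpha) + r(w) ln(gamma) + known terms -> linear least squares
r = np.exp(-((w - wp) ** 2) / (2.0 * sigma(w) ** 2 * wp**2))
known = 2 * np.log(g) - 5 * np.log(w) - 1.25 * (wp / w) ** 4
design = np.column_stack([np.ones_like(w), r])
coef, *_ = np.linalg.lstsq(design, np.log(s_target) - known, rcond=None)
alpha_est, gamma_est = np.exp(coef)
```

With a real FFT-estimated spectrum the log-scale residuals are noisy and the fit quality depends on the record length, which is the data-dependence the study reports.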
NASA Astrophysics Data System (ADS)
Mahadevan, S.; Manojkumar, R.; Jayakumar, T.; Das, C. R.; Rao, B. P. C.
2016-06-01
17-4 PH (precipitation hardening) stainless steel is a soft martensitic stainless steel strengthened by aging at an appropriate temperature for a sufficient duration. Precipitation of copper particles in the martensitic matrix during aging causes coherency strains, which improve the mechanical properties, namely the hardness and strength, of the matrix. The contributions to X-ray diffraction (XRD) profile broadening from coherency strains caused by precipitation and from crystallite size changes due to aging are separated and quantified using the modified Williamson-Hall approach. The estimated normalized mean square strain and crystallite size are used to explain the observed changes in hardness. Microstructural changes observed in secondary electron images are in qualitative agreement with the crystallite size changes estimated from XRD profile analysis. The precipitation kinetics in the age-hardening and overaged regimes are studied from hardness changes; they follow Avrami kinetics and Wilson's model, respectively. In the overaged condition, the hardness changes are linearly correlated with the tempering parameter (also known as the Larson-Miller parameter). A similar linear variation is observed between the normalized mean square strain (determined from XRD line profile analysis) and the tempering parameter in the incoherent regime, beyond peak-microstrain conditions.
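The size/strain separation rests on a straight-line fit. A minimal sketch of the classical Williamson-Hall step in Python (the modified approach used above additionally rescales each reflection by its dislocation contrast factor, but the fitting step is the same; values are hypothetical):

```python
import numpy as np

# Classical Williamson-Hall: beta*cos(theta) = K*lam/D + 4*eps*sin(theta)
K, lam = 0.9, 1.5406e-10  # Scherrer constant; Cu K-alpha wavelength (m)

def williamson_hall(theta, beta):
    """Fit crystallite size D (m) and microstrain eps from Bragg angles
    theta (rad) and integral breadths beta (rad): the intercept of
    beta*cos(theta) vs 4*sin(theta) gives the size term, the slope the strain."""
    slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
    return K * lam / intercept, slope
```

Size broadening is angle-independent in this representation while strain broadening grows with sin(theta), which is what makes the two separable.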
NASA Astrophysics Data System (ADS)
Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.
Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at the Bure underground research laboratory in Meuse/Haute-Marne (France) to characterize diffusion and retention of radionuclides in the Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. The C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and the borehole wall, and an excavation disturbed zone (EdZ). The relevance of these non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having parameters and geometric characteristics similar to those of the real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer-dependent and vary with time. These sensitivities are useful for identifying which parameters can be estimated with the least uncertainty and for finding the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed, known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and to ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of the synthetic experiments reveals that data noise makes the estimation of clay parameters difficult. Parameters of the clay and the EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce the synthetic data.
Proper interpretation of in situ diffusion experiments requires accounting for the filter, the gap and the EdZ. Estimates of the effective diffusion coefficient (De) and the porosity of the clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and of the porosities of the clay and the EdZ is only possible when the standard deviation of the random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. The normalized sensitivities and the identifiability analysis of synthetic experiments provide additional insight into the inverse estimation of in situ diffusion experiments and will be of great benefit for the interpretation of the real DIR in situ diffusion experiments.
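Normalized dimensionless sensitivities of the kind computed above can be approximated by central finite differences around the nominal parameter values. A minimal sketch in Python, with a toy exponential-decay "model" standing in for the full diffusion code (all names and values hypothetical):

```python
import numpy as np

def normalized_sensitivity(model, params, key, h=1e-5):
    """Dimensionless sensitivity s(t) = (p / C(t)) * dC/dp,
    with dC/dp approximated by a central finite difference."""
    p = params[key]
    up = dict(params, **{key: p * (1 + h)})
    dn = dict(params, **{key: p * (1 - h)})
    dC_dp = (model(up) - model(dn)) / (2 * h * p)
    return p * dC_dp / model(params)

# toy stand-in for the diffusion model: C(t) = c0 * exp(-De * t)
t = np.linspace(0.1, 10.0, 50)
model = lambda q: q["c0"] * np.exp(-q["De"] * t)
s_De = normalized_sensitivity(model, {"c0": 1.0, "De": 0.3}, "De")
# analytically, s_De(t) = -De * t for this toy model
```

Because the sensitivities are scaled by p/C, curves for different parameters and tracers can be compared directly, which is how one reads off when the data start to constrain each parameter.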
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for launch around 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry or the roughness coefficient necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. We find that the discharge error budget is almost completely dominated by unknown bathymetry and roughness: 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water surface, slope, and width observations influence the accuracy of the discharge estimates. There is significant sensitivity to water surface, slope, and width errors because the bathymetry and roughness estimates are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in the errors in bathymetry and roughness.
Increasing the slope error above 1.5 cm/km leads to significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget becomes dominated by the width error. The two experiments above are performed using AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm under SWOT scenarios.
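The parameter-estimation core can be shown in miniature: a random-walk Metropolis sampler for Manning's roughness n given noisy flow observations from a wide rectangular channel. All numbers below are hypothetical, and the actual algorithm jointly samples bathymetry and roughness under mass and momentum constraints rather than a single parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Manning's equation for a wide rectangular channel: Q = (1/n)*W*H^(5/3)*sqrt(S)
W, H, S = 80.0, 3.0, 1e-4          # width (m), depth (m), slope
n_true = 0.03
Q_model = lambda n: (1.0 / n) * W * H ** (5 / 3) * np.sqrt(S)
Q_obs = Q_model(n_true) * (1 + 0.05 * rng.standard_normal(50))  # 5% noise

def log_post(n):
    """Flat prior on (0.01, 0.1) plus Gaussian measurement likelihood."""
    if not 0.01 < n < 0.1:
        return -np.inf
    return -0.5 * np.sum(((Q_obs - Q_model(n)) / (0.05 * Q_model(n))) ** 2)

n, chain = 0.05, []                # start deliberately far from n_true
for _ in range(20000):
    prop = n + 0.001 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(n):
        n = prop
    chain.append(n)
n_hat = np.mean(chain[5000:])      # posterior mean after burn-in
```

The spread of the retained chain, not just its mean, is what feeds the error-budget analysis above.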
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach, and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although this comes at the price of some loss in estimation efficiency. However, the method offers an alternative when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need the initial values of the state variables to be specified and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations, and the methodology is illustrated with an application to an AIDS clinical data set.
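The reason two-stage methods avoid repeated ODE solving is that the trajectory is smoothed first, and the smoothed states and derivatives are then plugged into the ODE to give an ordinary least-squares problem. A minimal single-subject sketch in Python for dx/dt = -theta*x (the random-coefficient setting above repeats this across subjects and adds a mixed-effects stage; all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# true dynamics: dx/dt = -theta * x, with theta = 0.5
theta_true = 0.5
t = np.linspace(0.0, 4.0, 80)
x_obs = np.exp(-theta_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# stage 1: smooth the data (a cubic polynomial on the log scale here)
# to get estimates of the state x(t) and its derivative dx/dt
coef = np.polyfit(t, np.log(x_obs), 3)
x_hat = np.exp(np.polyval(coef, t))
dx_hat = x_hat * np.polyval(np.polyder(coef), t)

# stage 2: plug the smoothed estimates into the ODE and solve by least squares:
# dx/dt = -theta * x  =>  theta = -<dx_hat, x_hat> / <x_hat, x_hat>
theta_hat = -np.dot(dx_hat, x_hat) / np.dot(x_hat, x_hat)
```

No ODE is ever integrated, and no initial condition is needed, which mirrors the two advantages claimed above.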
NASA Astrophysics Data System (ADS)
Shariff, Nurul Sima Mohamad; Ferdaos, Nur Aqilah
2017-08-01
Multicollinearity often leads to inconsistent and unreliable parameter estimates in regression analysis. The situation is more severe in the presence of outliers, which cause fatter tails in the error distribution than under normality. A well-known procedure that is robust to the multicollinearity problem is ridge regression. This method, however, is expected to be affected by the presence of outliers due to some assumptions imposed in the modeling procedure. Thus, a robust version of the existing ridge method, with modifications to the inverse matrix and the estimated response value, is introduced. The performance of the proposed method is discussed and comparisons are made with several existing estimators, namely Ordinary Least Squares (OLS), ridge regression, and robust ridge regression based on GM-estimates. The proposed method is found to produce reliable parameter estimates in the presence of both multicollinearity and outliers in the data.
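The two ingredients, ridge shrinkage against multicollinearity and downweighting of outlying residuals, can be sketched together. The weighting below is a generic Huber-type iteration, not the paper's exact GM-based estimator:

```python
import numpy as np

def ridge(X, y, k):
    """Ordinary ridge estimator: (X'X + kI)^(-1) X'y."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

def robust_ridge(X, y, k, c=1.345, iters=25):
    """Ridge with Huber-type iterative reweighting to downweight outliers.
    (A generic robustification; the paper's GM weights differ in detail.)"""
    b = ridge(X, y, k)
    for _ in range(iters):
        r = y - X @ b
        s = np.median(np.abs(r)) / 0.6745              # robust scale (MAD)
        w = np.minimum(1.0, c * s / (np.abs(r) + 1e-12))
        Xw = X * w[:, None]                            # X' diag(w) X below
        b = np.linalg.solve(Xw.T @ X + k * np.eye(X.shape[1]), Xw.T @ y)
    return b
```

The ridge penalty k stabilises the near-singular X'X produced by multicollinearity, while the weights w shrink the influence of observations with large robustly-scaled residuals.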
Application of physical parameter identification to finite-element models
NASA Technical Reports Server (NTRS)
Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.
1987-01-01
The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electromagnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electromotive force (EMF) effects. Uncertainties in both the estimated physical parameters and the modal behavior variables are given.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer within the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the covariates x_i are equal is strong and may fail to account for overdispersion, i.e., variability of the rate parameter such that the variance exceeds the mean. Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001), but the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We found no major differences between the correction methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
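A quick diagnostic behind the quoted overdispersion parameters is the Pearson dispersion statistic. A minimal sketch in Python, using negative-binomial counts as a stand-in for overdispersed mortality data (this is the simple moment-based check, not the paper's score test):

```python
import numpy as np

def pearson_dispersion(y, mu, n_params):
    """Pearson dispersion phi = sum((y - mu)^2 / mu) / (n - p).
    Under a correctly specified Poisson model phi is near 1; phi >> 1 flags
    overdispersion, and quasi-likelihood inflates standard errors by sqrt(phi)."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    return np.sum((y - mu) ** 2 / mu) / (y.size - n_params)

# toy example: negative-binomial counts (variance > mean) fitted as Poisson
rng = np.random.default_rng(2)
y = rng.negative_binomial(n=2, p=1 / 6, size=500)   # mean 10, variance 60
mu = np.full(y.size, y.mean())                      # intercept-only Poisson fit
phi = pearson_dispersion(y, mu, n_params=1)
```

With variance roughly six times the mean, phi lands near 6, and Poisson standard errors would be understated by a factor of about sqrt(6) unless corrected.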
Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan S.
2017-09-06
Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface–subsurface hydrological–thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon–climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological–thermal processes associated with annual freeze–thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets – including soil liquid water content, temperature and electrical resistivity tomography (ERT) data – to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological–thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface–subsurface hydrological dynamics from the bedrock to the top of canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice–liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of desired model parameters. For hydrological–thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models.
We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to prediction of hydrological–thermal responses. We find that, compared to inversion of single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence. However, if the soil porosity is functionally related to the OC and mineral content, which is often observed in organic-rich Arctic soil, the uncertainty of OC estimate at this depth remarkably decreases. Our study documents the value of the new surface–subsurface, deterministic–stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological–thermal dynamics.
Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, C.; Casentini, J.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. 
S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. 
M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. 
V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vano-Vinuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Brügmann, B.; Campanelli, M.; Chu, T.; Clark, M.; Haas, R.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Pan, Y.; Röver, C.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration
2016-10-01
This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al., Phys. Rev. Lett. 116, 061102 (2016)]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al., Phys. Rev. Lett. 116, 241102 (2016)], and we quote updated component masses of 35^{+5}_{-3} M⊙ and 30^{+3}_{-4} M⊙ (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with a primary spin estimate <0.65 and a secondary spin estimate <0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.
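The quoted "symmetric 90% credible intervals" exclude 5% of the posterior mass in each tail around the median. A minimal sketch in Python with stand-in Gaussian posterior samples (the real posteriors are non-Gaussian):

```python
import numpy as np

def credible_interval(samples, level=0.90):
    """Median and symmetric credible interval: (1 - level)/2 of the
    posterior mass is excluded in each tail."""
    lo, med, hi = np.percentile(samples, [50 * (1 - level), 50.0, 50 + 50 * level])
    return med, med - lo, hi - med   # quote as med_{-minus}^{+plus}

rng = np.random.default_rng(3)
m1 = rng.normal(35.0, 2.5, 200000)   # stand-in posterior samples for one mass
med, minus, plus = credible_interval(m1)
```

For a skewed posterior (as for GW150914's masses) the minus and plus offsets differ, which is why the published values are quoted asymmetrically.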
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a groundwater quality protection scheme. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the class with the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into how parameter uncertainty due to limited observation data propagates into the vulnerability index. To examine the model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.
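Since a DRASTIC index is just a weighted sum of parameter ratings, the deterministic core of the model can be sketched in a few lines. The weights and ratings below are illustrative textbook values, not those calibrated for the Choushui River fan; the paper's probabilistic variant would feed kriged ratings into the same sum.

```python
def drastic_index(ratings, weights):
    """Vulnerability index = sum over parameters of weight * rating."""
    if len(ratings) != len(weights):
        raise ValueError("ratings and weights must have the same length")
    return sum(w * r for w, r in zip(weights, ratings))

# Illustrative standard DRASTIC weights for (Depth to water, net Recharge,
# Aquifer media, Soil media, Topography, Impact of vadose zone, hydraulic
# Conductivity); the paper's probability-based model uses six parameters
# with its own categories and ratings.
WEIGHTS = [5, 4, 3, 2, 1, 5, 3]

# Hypothetical ratings (scale 1-10) at one grid cell:
ratings = [7, 6, 8, 5, 9, 6, 4]
index = drastic_index(ratings, WEIGHTS)  # higher index -> more vulnerable
```

Risk categories are then obtained by thresholding the index over the mapped area.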
Nonlinear Blind Compensation for Array Signal Processing Application
Ma, Hong; Jin, Jiang; Zhang, Hua
2018-01-01
Recently, nonlinear blind compensation techniques have attracted growing attention in array signal processing. However, the nonlinear distortion introduced by an array receiver consisting of multi-channel radio frequency (RF) front-ends makes it difficult to estimate the parameters of array signals accurately. A novel nonlinear blind compensation algorithm is proposed to mitigate the nonlinearity of the array receiver and improve its spurious-free dynamic range (SFDR), enabling more precise estimation of target-signal parameters such as their two-dimensional directions of arrival (2-D DOAs). The suggested method works as follows: the nonlinear model parameters of one channel of the RF front-end are extracted and used to synchronously compensate the nonlinear distortion of the entire receiver. Furthermore, a verification experiment on array signals from a uniform circular array (UCA) is used to test the validity of the approach. The real-world experimental results show that the SFDR of the receiver is enhanced, leading to a significant improvement in 2-D DOA estimation performance for weak target signals. These results demonstrate that the nonlinear blind compensation algorithm is effective for estimating the parameters of weak array signals in the presence of strong jammers. PMID:29690571
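As a toy illustration of post-distortion compensation of this kind, consider a memoryless third-order front-end model y = x + a3*x^3: once the coefficient a3 has been extracted for a reference channel, the modeled distortion can be subtracted from the output. This scalar model and single coefficient are simplifying assumptions for illustration; the paper's receiver model and extraction procedure are more involved.

```python
import numpy as np

def compensate(y, a3):
    """First-order inverse of y = x + a3*x^3: x is approx. y - a3*y^3."""
    return y - a3 * y ** 3

x = np.sin(2 * np.pi * 0.05 * np.arange(1000))  # clean channel input
a3_true = 0.05                                  # hypothetical distortion coeff.
y = x + a3_true * x ** 3                        # distorted front-end output

x_hat = compensate(y, a3_true)
residual = np.max(np.abs(x_hat - x))  # much smaller than the raw distortion
```

The residual is of order a3^2, i.e. the first-order inverse removes most of the spurious content, which is the mechanism behind the SFDR improvement described above.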
Parameter estimation for groundwater models under uncertain irrigation data
Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of the ordinary least-squares (OLS) and IUWLS calibration methods under different levels of irrigation-data uncertainty and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
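The core idea of down-weighting observations according to input uncertainty can be sketched with a one-regressor example. The weight formula below (inverse of observation variance plus input-uncertainty variance) is a simplified stand-in for the paper's iteratively adjusted objective-function weights, and all numbers are synthetic.

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: argmin_b sum_i w_i * (y_i - x_i . b)^2."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])
b_true = np.array([2.0, 0.5])

var_obs = 0.25                                   # observation noise variance
var_in = np.where(np.arange(n) < 100, 4.0, 0.0)  # extra variance: uncertain pumping
y = X @ b_true + rng.normal(0.0, np.sqrt(var_obs + var_in))

b_ols = wls(X, y, np.ones(n))                    # OLS: equal weights
b_iuwls = wls(X, y, 1.0 / (var_obs + var_in))    # input-uncertainty weighting
```

Observations tied to uncertain pumping records contribute less to the fit, which is the mechanism by which IUWLS reduces parameter bias.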
Estimated value of insurance premium due to Citarum River flood by using Bayesian method
NASA Astrophysics Data System (ADS)
Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.
2018-03-01
Citarum river flood in South Bandung, West Java, Indonesia, happens almost every year. It causes property damage, producing economic loss. The risk of loss can be mitigated through a flood insurance program. In this paper, we discuss the estimation of insurance premiums due to Citarum river flood using the Bayesian method. It is assumed that the flood loss data follow a Pareto distribution with a heavy right tail. The distribution model parameters are estimated using the Bayesian method. First, parameter estimation is done under the assumption that the prior comes from the Gamma distribution family, while the observed data follow the Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The result of the analysis shows that the estimated premium values based on the pure premium principle are as follows: for a loss of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss of IDR 574.53 million, a premium of IDR 308.95 million. The premium estimator can be used as a reference for determining a reasonable premium, one that neither burdens the insured nor causes a loss for the insurer.
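For a Pareto likelihood with known scale x_m, a Gamma prior on the shape parameter is conjugate, which makes the Bayesian update a two-line computation. The sketch below uses this standard conjugacy with hypothetical loss figures and prior hyperparameters, together with the pure-premium formula E[X] = alpha * x_m / (alpha - 1); it illustrates the mechanics only and is not the paper's calibrated model.

```python
import math

def posterior_pareto_shape(losses, x_m, a0, b0):
    """Conjugate update: Pareto(alpha, x_m) likelihood with a Gamma(a0, b0)
    prior on alpha gives a Gamma(a0 + n, b0 + sum(log(x_i / x_m))) posterior."""
    n = len(losses)
    s = sum(math.log(x / x_m) for x in losses)
    return a0 + n, b0 + s

def pure_premium(alpha, x_m):
    """Expected loss under Pareto(alpha, x_m); finite only if alpha > 1."""
    if alpha <= 1.0:
        raise ValueError("mean is undefined for alpha <= 1")
    return alpha * x_m / (alpha - 1.0)

# Hypothetical losses and prior hyperparameters (same units as x_m):
losses = [120.0, 95.0, 310.0, 150.0, 80.0]
x_m = 50.0
a_post, b_post = posterior_pareto_shape(losses, x_m, a0=2.0, b0=1.0)
alpha_hat = a_post / b_post             # posterior mean of the shape
premium = pure_premium(alpha_hat, x_m)  # pure-premium principle: E[X]
```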
Evaporation estimates from the Dead Sea and their implications on its water balance
NASA Astrophysics Data System (ADS)
Oroud, Ibrahim M.
2011-12-01
The Dead Sea (DS) is a terminal hypersaline water body situated in the deepest part of the Jordan Valley. There is growing interest in linking the DS to the open seas due to severe water shortages in the area and the serious geological and environmental hazards to its vicinity caused by the rapid level drop of the DS. A key issue in linking the DS with the open seas is an accurate determination of evaporation rates. Large uncertainties exist in evaporation estimates for the DS due to the complex feedback mechanisms between meteorological forcings and the thermophysical properties of hypersaline solutions. Numerous methods have been used to estimate current and historical (pre-1960) evaporation rates, with estimates differing by ~100%. Evaporation from the DS is usually deduced indirectly using energy-balance, water-balance, or pan methods, with uncertainty in many parameters. The accumulated errors resulting from these uncertainties are pooled into the estimates of evaporation rates. In this paper, a physically based method with a minimum of empirical parameters is used to evaluate historical and current evaporation estimates from the DS. The more likely figures for historical and current evaporation rates from the DS were 1,500-1,600 and 1,200-1,250 mm per annum, respectively. The results obtained are congruent with field observations and with more elaborate procedures.
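One reason hypersaline evaporation is hard to pin down can be sketched with a simple bulk-transfer estimate: the saturation vapour pressure over brine is reduced by the water activity of the solution (a value of roughly 0.67 is often quoted for Dead Sea brine). The Penman-type wind function, the activity value, and all sample numbers below are illustrative assumptions, not the paper's calibrated physically based model.

```python
import math

def sat_vapour_pressure_hpa(t_c):
    """Magnus-type saturation vapour pressure over fresh water, in hPa."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def evaporation_mm_per_day(u2, t_water_c, e_air_hpa, activity=0.67):
    """Bulk-transfer evaporation: wind function times the vapour-pressure
    difference, with the surface pressure scaled by the brine's water
    activity. Wind function coefficients are illustrative."""
    wind_fn = 0.26 * (1.0 + 0.54 * u2)  # mm/day per hPa (Penman-type form)
    e_surface = activity * sat_vapour_pressure_hpa(t_water_c)
    return max(wind_fn * (e_surface - e_air_hpa), 0.0)

# Illustrative conditions: warm brine, dry air, moderate wind.
e_brine = evaporation_mm_per_day(4.0, 30.0, 20.0, activity=0.67)
e_fresh = evaporation_mm_per_day(4.0, 30.0, 20.0, activity=1.0)
```

The gap between `e_brine` and `e_fresh` shows how strongly the salinity feedback suppresses evaporation, which is why freshwater formulas overestimate DS rates.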
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
Beda, Alessandro; Güldner, Andreas; Carvalho, Alysson R; Zin, Walter Araujo; Carvalho, Nadja C; Huhle, Robert; Giannella-Neto, Antonio; Koch, Thea; de Abreu, Marcelo Gama
2014-01-01
Measuring esophageal pressure (Pes) using an air-filled balloon catheter (BC) is the common approach to estimate pleural pressure and related parameters. However, Pes is not routinely measured in mechanically ventilated patients, partly due to technical and practical limitations and difficulties. This study aimed at comparing the conventional BC with two alternative methods for Pes measurement, liquid-filled and air-filled catheters without balloon (LFC and AFC), during mechanical ventilation with and without spontaneous breathing activity. Seven female juvenile pigs (32-42 kg) were anesthetized, orotracheally intubated, and a bundle of an AFC, LFC, and BC was inserted in the esophagus. Controlled and assisted mechanical ventilation were applied with positive end-expiratory pressures of 5 and 15 cmH2O, and driving pressures of 10 and 20 cmH2O, in supine and lateral decubitus. Cardiogenic noise in BC tracings was much larger (up to 25% of total power of Pes signal) than in AFC and LFC (<3%). Lung and chest wall elastance, pressure-time product, inspiratory work of breathing, inspiratory change and end-expiratory value of transpulmonary pressure were estimated. The three catheters allowed detecting similar changes in these parameters between different ventilation settings. However, a non-negligible and significant bias between estimates from BC and those from AFC and LFC was observed in several instances. In anesthetized and mechanically ventilated pigs, the three catheters are equivalent when the aim is to detect changes in Pes and related parameters between different conditions, but possibly not when the absolute value of the estimated parameters is of paramount importance. Due to a better signal-to-noise ratio, and considering its practical advantages in terms of easier calibration and simpler acquisition setup, LFC may prove interesting for clinical use.
Estimation of channel parameters and background irradiance for free-space optical link.
Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk
2013-05-10
Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of channel parameters and the scintillation index (SI) depends on complete removal of the background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution. While the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI regimes using both simulation data and experimental measurements. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
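The simplest member of this estimator family can be sketched as follows: treat the background as an additive offset, estimate it, subtract it, and then compute the scintillation index SI = var(I)/mean(I)^2. The minimum-value idea shown here (background approximated by the smallest received sample, assuming deep fades drive the signal near zero) is an illustrative stand-in; the paper's MV, MP, and ML estimators are derived more carefully.

```python
import numpy as np

def scintillation_index(samples):
    """SI = var(I) / mean(I)^2 of received irradiance samples."""
    m = np.mean(samples)
    return np.var(samples) / (m * m)

def remove_background_mv(samples):
    """Minimum-value-style background removal: the sample minimum
    approximates the additive background when deep fades occur."""
    b_hat = np.min(samples)
    return samples - b_hat, b_hat

rng = np.random.default_rng(1)
signal = rng.lognormal(mean=0.0, sigma=0.8, size=5000)  # scintillating signal
background = 5.0                                        # additive background
received = signal + background

corrected, b_hat = remove_background_mv(received)
si_raw = scintillation_index(received)    # biased low by the background
si_corr = scintillation_index(corrected)  # much closer to the true SI
```

The comparison of `si_raw` and `si_corr` shows why imperfect background removal biases the SI estimate.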
NASA Astrophysics Data System (ADS)
Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.
2017-04-01
The impact of assimilating surface observations has been demonstrated in several publications, both for improving weather prediction inside the boundary layer and for the flow aloft. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with comparing point-based observations to grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator and extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g. for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From the simplest to the most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias proportional to the temperature. The target region, characterized by complex terrain, is the western U.S. on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). 
The more complex bias structures in experiments (2) and (3) are more difficult to estimate, but estimation is still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero, and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but the temporal and spatial variability in the parameter estimates is similar to that found when estimating a zero bias in the observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
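The state augmentation described above can be illustrated with a minimal linear Kalman filter: the unknown observation bias b is appended to the state, the forward operator for the biased observation becomes [1, 1], and the filter updates state and bias jointly. The scalar "temperature" state, the second unbiased observation stream (which makes the bias identifiable), and all noise levels are illustrative assumptions, not the WRF/EnKF configuration of the study.

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    m = F @ m                                  # predict mean
    P = F @ P @ F.T + Q                        # predict covariance
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    m = m + K @ (y - H @ m)                    # update mean
    P = (np.eye(len(m)) - K @ H) @ P           # update covariance
    return m, P

rng = np.random.default_rng(2)
true_bias = 1.5
F = np.eye(2)                      # state x: random walk; bias b: constant
Q = np.diag([0.1, 0.0])
H = np.array([[1.0, 0.0],          # unbiased observation: y1 = x
              [1.0, 1.0]])         # biased observation:   y2 = x + b
R = np.diag([0.2, 0.2])

x = 10.0
m, P = np.array([10.0, 0.0]), np.diag([1.0, 4.0])
for _ in range(500):
    x += rng.normal(0.0, np.sqrt(0.1))
    y = np.array([x + rng.normal(0.0, np.sqrt(0.2)),
                  x + true_bias + rng.normal(0.0, np.sqrt(0.2))])
    m, P = kalman_step(m, P, y, F, Q, H, R)

bias_hat = m[1]   # converges toward true_bias as observations accumulate
```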
Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali
Anisotropy analysis is an important step in the processing and interpretation of seismic data. One of the most important parts of anisotropy analysis is anisotropy parameter estimation, which can be carried out using well data, core data, or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis. However, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of a particular layer's anisotropy. In an anisotropic reservoir, analysis of anisotropy parameters also helps us to better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock properties and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study across disciplines is needed to understand the reservoir. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with the seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on the relationships between reservoir properties, such as clay content, porosity, and total organic content, and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background, solid inclusion, or both. The forward modeling results show that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, is equal to zero at any instant while on the ground. Under this premise, the longitudinal, lateral, and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. To validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model, and antiskid model. Three main surfaces were defined in the friction model: dry, wet, and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of the reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how it can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced in the simulation. 
The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. The non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
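The core identity behind such a reconstruction can be sketched from the longitudinal force balance: under quasi-steady equilibrium during ground roll, the tire friction force is whatever balances thrust, drag, and the inertial term, and dividing by the normal load gives an instantaneous friction coefficient. The sign convention and the two-term force breakdown below are simplified assumptions, not the paper's full six-degree-of-freedom model.

```python
def friction_coefficient(mass, a_x, f_thrust, f_drag, normal_load):
    """Longitudinal equilibrium (x positive forward):
         mass * a_x = f_thrust - f_drag - f_friction
       so f_friction follows directly, and mu = f_friction / normal_load.
       a_x is negative during deceleration."""
    if normal_load <= 0:
        raise ValueError("normal load must be positive while on the ground")
    f_friction = f_thrust - f_drag - mass * a_x
    return f_friction / normal_load

# Hypothetical mid-ground-roll numbers (SI units): idle thrust, aerodynamic
# drag, and a normal load reduced by residual lift.
mu = friction_coefficient(mass=60000.0, a_x=-2.5,
                          f_thrust=5000.0, f_drag=20000.0,
                          normal_load=488600.0)
```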
An adaptive state of charge estimation approach for lithium-ion series-connected battery system
NASA Astrophysics Data System (ADS)
Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael
2018-07-01
Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate, or even divergent, using model-based methods such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator, which captures the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively obtain the estimated noise statistics for the AUKF when its prior noise statistics are inaccurate or not exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF with the UKF when the model and measurement noise statistics are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
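The noise-statistics-estimator idea can be illustrated with an innovation-based adaptation rule: the measurement-noise variance is re-estimated from a sliding window of filter innovations as R_hat = var(innovations) - predicted state variance. A scalar Kalman filter with identity dynamics stands in here for the paper's unscented filter and equivalent circuit model; the window length, noise levels, and random-walk "SOC" are illustrative assumptions.

```python
import numpy as np

def adaptive_kf(ys, q, r0, window=200):
    """Scalar Kalman filter whose measurement-noise variance r is
    re-estimated from a sliding window of innovations."""
    m, p, r = ys[0], 1.0, r0
    innovations = []
    for y in ys:
        p = p + q                        # predict (identity dynamics)
        v = y - m                        # innovation
        innovations.append(v)
        if len(innovations) >= window:
            c = np.var(innovations[-window:])
            r = max(c - p, 1e-6)         # adapt the noise estimate
        k = p / (p + r)                  # update
        m = m + k * v
        p = (1.0 - k) * p
    return m, r

rng = np.random.default_rng(5)
n, q, r_true = 3000, 0.01, 1.0
x = np.cumsum(rng.normal(0.0, np.sqrt(q), n))    # random-walk "true state"
ys = x + rng.normal(0.0, np.sqrt(r_true), n)     # noisy measurements
m_final, r_hat = adaptive_kf(ys, q=q, r0=50.0)   # r starts badly wrong
```

Starting from a grossly wrong prior (r0 = 50), the adapted variance moves toward the true value, which is what keeps the filter from diverging.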
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres is proposed in this paper. The method is based on both the deconvolution technique and the simplex algorithm for linear optimization to estimate the model parameters most effectively, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different levels of white Gaussian random noise to demonstrate its capability and reliability. The results show that the parameter values estimated by this method are close to the assumed true parameter values. The validity of the method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to the sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems able to accelerate the convergence of a few chains that start from suboptimal points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models, such as the Community Land Model, where the sampling burden is distributed over multiple chains.
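A minimal multi-chain setup can be sketched as several independent random-walk Metropolis chains from dispersed starting points, with the Gelman-Rubin statistic as the between/within-chain convergence check. The cheap Gaussian target is a stand-in for an expensive forward model such as the Community Land Model, and the sampler and diagnostic details are generic, not the authors' implementation.

```python
import numpy as np

def log_post(theta):
    """Cheap stand-in target: standard normal log-density (up to a const)."""
    return -0.5 * np.sum(theta ** 2)

def metropolis_chains(n_chains, n_steps, dim, step, rng):
    """Independent random-walk Metropolis chains from dispersed starts."""
    chains = np.empty((n_chains, n_steps, dim))
    theta = rng.normal(0.0, 3.0, size=(n_chains, dim))
    lp = np.array([log_post(t) for t in theta])
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step, size=theta.shape)
        lp_prop = np.array([log_post(t) for t in prop])
        accept = np.log(rng.uniform(size=n_chains)) < lp_prop - lp
        theta[accept] = prop[accept]
        lp[accept] = lp_prop[accept]
        chains[:, i] = theta
    return chains

def gelman_rubin(chains):
    """Gelman-Rubin R-hat per dimension, on the second half of each chain."""
    half = chains[:, chains.shape[1] // 2:]
    n = half.shape[1]
    means = half.mean(axis=1)                   # (chains, dim)
    B = n * means.var(axis=0, ddof=1)           # between-chain variance
    W = half.var(axis=1, ddof=1).mean(axis=0)   # within-chain variance
    return np.sqrt(((n - 1) / n * W + B / n) / W)

rng = np.random.default_rng(3)
chains = metropolis_chains(n_chains=4, n_steps=4000, dim=3, step=0.8, rng=rng)
rhat = gelman_rubin(chains)   # values near 1 indicate converged chains
```

With an expensive forward model, each `log_post` call dominates the cost, so the chains are what gets distributed across compute resources.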
Adaptive control of bivalirudin in the cardiac intensive care unit.
Zhao, Qi; Edrich, Thomas; Paschalidis, Ioannis Ch
2015-02-01
Bivalirudin is a direct thrombin inhibitor used in the cardiac intensive care unit when heparin is contraindicated due to heparin-induced thrombocytopenia. Since it is not a commonly used drug, clinical experience with its dosing is sparse. In earlier work [1], we developed a dynamic system model that accurately predicts the effect of bivalirudin given dosage over time and patient physiological characteristics. This paper develops adaptive dosage controllers that regulate its effect to desired levels. To that end, and in the case that bivalirudin model parameters are available, we develop a Model Reference Control law. In the case that model parameters are unknown, an indirect Model Reference Adaptive Control scheme is applied to estimate model parameters first and then adapt the controller. Alternatively, direct Model Reference Adaptive Control is applied to adapt the controller directly without estimating model parameters first. Our algorithms are validated using actual patient data from a large hospital in the Boston area.
COMPARISON OF ORGAN DOSES IN HUMAN PHANTOMS: VARIATIONS DUE TO BODY SIZE AND POSTURE.
Feng, Xu; Xiang-Hong, Jia; Qian, Liu; Xue-Jun, Yu; Zhan-Chun, Pan; Chun-Xin, Yang
2017-04-20
Organ dose calculations performed using human phantoms can provide estimates of astronauts' health risks due to cosmic radiation. However, the characteristics of such phantoms strongly affect the estimation precision. To investigate organ dose variations with body size and posture in human phantoms, a non-uniform rational B-spline boundary surface model was constructed based on cryosection images. This model was used to establish four phantoms with different body size and posture parameters, whose organ parameters were changed simultaneously and which were voxelised at 4 × 4 × 4 mm3 resolution. Then, using a Monte Carlo transport code, the organ doses caused by isotropic incident protons of ≤500 MeV were calculated. The dose variations due to body size differences within a certain range were negligible, and the doses received in crouching and standing-up postures were similar. Therefore, a standard Chinese phantom could be established, and posture changes cannot effectively protect astronauts during solar particle events. © The Author 2016. Published by Oxford University Press.
Estimation of Skidding Offered by Ackermann Mechanism
NASA Astrophysics Data System (ADS)
Rao, Are Padma; Venkatachalam, Rapur
2016-04-01
Steering for a four-wheeler is provided by the Ackermann mechanism. Though it cannot always provide correct steering conditions, it is very popular because of its simplicity. Correct steering avoids skidding of the tires and thereby extends their lives, since tire wear is reduced. In this paper the Ackermann mechanism is analyzed for its performance. A method of estimating the skidding due to improper steering is proposed. Two parameters are identified from which the length of skidding can be estimated.
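The correct-steering condition underlying such an analysis is standard: for a vehicle of track w and wheelbase l turning without slip, the front-wheel angles must satisfy cot(delta_outer) - cot(delta_inner) = w / l. The sketch below checks a linkage's output against this condition; the residual is a simple proxy for the skid an imperfect linkage imposes, while the paper's actual two-parameter skid-length estimate is more detailed. The vehicle dimensions are illustrative.

```python
import math

def steering_error(delta_inner, delta_outer, track, wheelbase):
    """Deviation from the correct-steering (Ackermann) condition, in
    cotangent units; zero means no kinematic skidding."""
    cot = lambda a: 1.0 / math.tan(a)
    return cot(delta_outer) - cot(delta_inner) - track / wheelbase

def correct_outer_angle(delta_inner, track, wheelbase):
    """Outer-wheel angle that exactly satisfies the condition."""
    return math.atan(1.0 / (1.0 / math.tan(delta_inner) + track / wheelbase))

track, wheelbase = 1.4, 2.6          # metres, illustrative
d_in = math.radians(30.0)
d_out_ideal = correct_outer_angle(d_in, track, wheelbase)
# A real Ackermann linkage produces an outer angle close to, but not equal
# to, d_out_ideal; the nonzero steering_error(...) drives tire skidding.
```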
Estimation of distances to stars with stellar parameters from LAMOST
Carlin, Jeffrey L.; Liu, Chao; Newberg, Heidi Jo; ...
2015-06-05
Here, we present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. We tailor this technique specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ~20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ~40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
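The distance step itself is the inverted distance modulus: each candidate absolute magnitude M maps to a distance d = 10^((m - M + 5) / 5) parsec, so a posterior over M becomes a posterior over distance. The toy Gaussian posterior below stands in for the isochrone-grid posterior the paper derives; the apparent magnitude and grid are illustrative numbers.

```python
import numpy as np

def distance_pc(m_app, M_abs):
    """Distance modulus inverted: d = 10 ** ((m - M + 5) / 5) parsec."""
    return 10.0 ** ((m_app - M_abs + 5.0) / 5.0)

# Toy Gaussian posterior over absolute magnitude (centre 4.5, width 0.4),
# standing in for the isochrone-based posterior.
M_grid = np.linspace(-1.0, 8.0, 901)
post_M = np.exp(-0.5 * ((M_grid - 4.5) / 0.4) ** 2)
post_M /= post_M.sum()                       # normalise grid weights

d_grid = distance_pc(m_app=14.0, M_abs=M_grid)
d_mean = float((post_M * d_grid).sum())      # posterior mean distance (pc)
```

Because the M-to-d mapping is convex, the posterior mean distance sits slightly above the distance of the most probable M, one reason summarizing the full posterior matters.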
Multivariate meta-analysis with an increasing number of parameters.
Boca, Simina M; Pfeiffer, Ruth M; Sampson, Joshua N
2017-05-01
Meta-analysis can average estimates of multiple parameters, such as a treatment's effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect (FE) meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random-effects (RE) meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between-study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing RE MVMA over FE MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for non-Hodgkin lymphoma. © Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
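As a concrete sketch of the fixed-effect MVMA estimator discussed above (generalised inverse-variance weighting; the function name and input layout are illustrative assumptions):

```python
import numpy as np

def fe_mvma(estimates, covariances):
    """Fixed-effect multivariate meta-analysis: pooled estimate
    beta = (sum_i S_i^-1)^-1 * (sum_i S_i^-1 y_i), covariance (sum_i S_i^-1)^-1."""
    p = len(estimates[0])
    precision = np.zeros((p, p))
    weighted = np.zeros(p)
    for y, S in zip(estimates, covariances):
        Sinv = np.linalg.inv(S)
        precision += Sinv          # accumulate study precisions
        weighted += Sinv @ y       # precision-weighted study estimates
    V = np.linalg.inv(precision)
    return V @ weighted, V
```

UVMA is the special case where each within-study covariance S_i is replaced by its diagonal, discarding the cross-parameter correlations that MVMA exploits.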
Estimating the k2 Tidal Gravity Love Number of Mars
NASA Technical Reports Server (NTRS)
Smith, David E.; Zuber, Maria; Torrence, Mark; Dunn, Peter
2003-01-01
Analysis of the orbits of spacecraft can be used to infer global tidal parameters. For Mars, the Mars Global Surveyor (MGS) spacecraft has been used by several authors to estimate the second-degree Love number, k2, from DSN tracking Doppler and range data. Unfortunately, neither of the spacecraft presently in orbit is ideally suited to tidal recovery because they are in sun-synchronous orbits that vary only slightly in local time; further, the sub-solar location varies by only about 25 degrees in latitude. Nevertheless, respectable estimates of the k2 tide have been made by several authors. We present an updated solution of the degree-2 zonal Love number, compare it with previous values, and analyze the sensitivity of the solution to orbital parameters, spacecraft maneuvers, and solution methodology.
Phase diagram and universality of the Lennard-Jones gas-liquid system.
Watanabe, Hiroshi; Ito, Nobuyasu; Hu, Chin-Kun
2012-05-28
The gas-liquid phase transition of the three-dimensional Lennard-Jones particle system is studied by molecular dynamics simulations. The gas and liquid densities in the coexisting state are determined with high accuracy. The critical point is determined by the block density analysis of the Binder parameter with the aid of the law of rectilinear diameter. From the critical behavior of the gas-liquid coexisting density, the critical exponent of the order parameter is estimated to be β = 0.3285(7). Surface tension is estimated from interface broadening behavior due to capillary waves. From the critical behavior of the surface tension, the critical exponent of the correlation length is estimated to be ν = 0.63(4). The obtained values of β and ν are consistent with those of the Ising universality class.
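The order-parameter exponent can be extracted from coexistence densities by a log-log fit of ρ_l − ρ_g against T_c − T. This is a hedged sketch of that one step only; the block-density Binder-parameter machinery is omitted and T_c is taken as given.

```python
import numpy as np

def fit_beta(T, rho_liquid, rho_gas, Tc):
    """Order-parameter exponent from rho_l - rho_g ~ B (Tc - T)^beta,
    estimated by a linear fit in log-log space."""
    x = np.log(Tc - T)
    y = np.log(rho_liquid - rho_gas)
    beta, _log_B = np.polyfit(x, y, 1)   # slope is the exponent
    return beta
```

Synthetic coexistence data generated with β = 0.3285 are recovered to numerical precision; on real simulation data the statistical scatter of the measured densities dominates the error bar.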
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
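A toy version of Snake's core idea, best-first expansion of grid cells in order of decreasing likelihood with a stopping threshold, can be written with a priority queue. This sketch is 2-D and assumes the above-threshold region is connected; the real code is N-dimensional and fully parallelized.

```python
import heapq
import numpy as np

def explore_grid(loglike, start, threshold):
    """Map a likelihood grid cell-by-cell in order of decreasing likelihood,
    discarding cells more than `threshold` log-units below the starting peak.
    `loglike` is a 2-D array; returns {cell: log-likelihood} for kept cells."""
    peak = loglike[start]
    heap = [(-peak, start)]            # max-heap via negated values
    visited = {start}
    accepted = {}
    while heap:
        negL, (i, j) = heapq.heappop(heap)
        L = -negL
        if L < peak - threshold:       # everything left on the heap is lower
            continue
        accepted[(i, j)] = L
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < loglike.shape[0] and 0 <= nj < loglike.shape[1]
                    and (ni, nj) not in visited):
                visited.add((ni, nj))
                heapq.heappush(heap, (-loglike[ni, nj], (ni, nj)))
    return accepted
```

On a Gaussian likelihood the exploration visits only the cells inside the chosen contour, which is exactly how the method sidesteps the curse of dimensionality relative to a full grid sweep.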
NASA Astrophysics Data System (ADS)
Tamang, Sagar Kumar; Song, Wenjun; Fang, Xing; Vasconcelos, Jose; Anderson, J. Brian
2018-06-01
Estimating sediment deposition in a stream, a standard procedure for dealing with aggradation problems, is complicated in an ungauged catchment by the absence of the necessary flow data. A serious aggradation problem within an ungauged catchment in Alabama, USA, blocked the conveyance of a bridge, reducing the clearance under the bridge from several feet to a couple of inches. A study of historical aerial imagery showed significant deforestation in the catchment over a period consistent with the first identification of the problem. To further diagnose the aggradation problem, given the lack of any gauging stations, local rainfall, flow, and sediment measurements were attempted. However, due to the difficulty of installing an area-velocity sensor in an actively aggrading stream, a parameter transfer process for a hydrologic model was adopted to understand and estimate streamflow. Simulated discharge combined with erosion parameters of MUSLE (modified universal soil loss equation) helped in the estimation of the sediment yield of the catchment, which showed a significant increase in recent years. A two-dimensional hydraulic model was developed at the bridge site to examine potential engineering strategies to wash sediments off and mitigate further aggradation. This study quantifies the increase of sediment yield in an ungauged catchment due to land cover changes and other contributing factors, and develops strategies and recommendations for preventing future aggradation in the vicinity of the bridge.
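For reference, the MUSLE relation used above is commonly written in Williams' form, Y = 11.8 (Q · q_p)^0.56 · K · LS · C · P, with runoff volume Q in m³ and peak flow q_p in m³/s giving an event yield Y in metric tons. A direct transcription (the paper's calibrated factor values are not reproduced here):

```python
def musle_sediment_yield(runoff_m3, peak_flow_m3s, K, LS, C, P):
    """Event sediment yield in metric tons from MUSLE (Williams' form):
    Y = 11.8 * (Q * q_p)**0.56 * K * LS * C * P, with runoff volume Q (m^3),
    peak flow rate q_p (m^3/s), and the USLE soil-erodibility (K),
    topographic (LS), cover-management (C) and support-practice (P) factors."""
    return 11.8 * (runoff_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P
```

Because the runoff term enters with exponent 0.56, a change in land cover that raises both event runoff volume and peak flow compounds quickly into a larger sediment yield.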
Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo
2011-10-11
We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. 
These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions. With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within systems biology.
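A minimal sketch of the best-performing method above, differential evolution (rand/1/bin), applied to ODE parameter estimation on a toy logistic model; the DE settings, the Euler integrator, and the logistic stand-in for the endocytosis model are all illustrative assumptions.

```python
import numpy as np

def simulate(params, x0=0.1, dt=0.1, steps=100):
    """Forward-Euler integration of logistic growth dx/dt = r x (1 - x/K)."""
    r, K = params
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        x[t] = x[t - 1] + dt * r * x[t - 1] * (1 - x[t - 1] / K)
    return x

def differential_evolution(cost, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin with box constraints and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    fit = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(len(bounds)) < CR          # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f < fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()
```

Fitting noise-free data generated with r = 0.8, K = 2.0, using a sum-of-squares cost and bounds (0.1–2, 0.5–5), recovers the parameters closely, mirroring the paper's finding that DE reconstructs the system output well.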
Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.
2016-01-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebrospinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
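The profile-likelihood idea, fix the parameter of interest and re-optimise the nuisance parameters at each grid value, can be illustrated on a toy exponential-decay model (not the PDE model of the paper; here the nuisance amplitude has a closed-form optimum because the model is linear in it):

```python
import numpy as np

def profile_loglike_b(b_grid, x, y, sigma):
    """Profile log-likelihood for the decay rate b in y = a*exp(-b*x) + noise.
    For each fixed b, the nuisance amplitude a is profiled out analytically."""
    prof = np.empty_like(b_grid)
    for k, b in enumerate(b_grid):
        f = np.exp(-b * x)
        a_hat = (f @ y) / (f @ f)            # least-squares optimum for a
        prof[k] = -0.5 * np.sum(((y - a_hat * f) / sigma) ** 2)
    return prof
```

An approximate 95% confidence interval for b is the set of grid values whose profile stays within 1.92 log-units (half the chi-square 95% quantile with one degree of freedom) of the maximum; an interval that runs into the edge of the admissible range signals practical non-identifiability.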
Hey, Jody; Nielsen, Rasmus
2004-01-01
The genetic study of diverging, closely related populations is required for basic questions on demography and speciation, as well as for biodiversity and conservation research. However, it is often unclear whether divergence is due simply to separation or whether populations have also experienced gene flow. These questions can be addressed with a full model of population separation with gene flow, by applying a Markov chain Monte Carlo method for estimating the posterior probability distribution of model parameters. We have generalized this method and made it applicable to data from multiple unlinked loci. These loci can vary in their modes of inheritance, and inheritance scalars can be implemented either as constants or as parameters to be estimated. By treating inheritance scalars as parameters it is also possible to address variation among loci in the impact via linkage of recurrent selective sweeps or background selection. These methods are applied to a large multilocus data set from Drosophila pseudoobscura and D. persimilis. The species are estimated to have diverged approximately 500,000 years ago. Several loci have nonzero estimates of gene flow since the initial separation of the species, with considerable variation in gene flow estimates among loci, in both directions between the species. PMID:15238526
Enhancing PTFs with remotely sensed data for multi-scale soil water retention estimation
NASA Astrophysics Data System (ADS)
Jana, Raghavendra B.; Mohanty, Binayak P.
2011-03-01
Use of remotely sensed data products in the earth science and water resources fields is growing due to the increasingly easy availability of the data. Traditionally, pedotransfer functions (PTFs) employed for soil hydraulic parameter estimation from other easily available data have used basic soil texture and structure information as inputs. Inclusion of surrogate/supplementary data such as topography and vegetation information has shown some improvement in the PTF's ability to estimate more accurate soil hydraulic parameters. Artificial neural networks (ANNs) are a popular tool for PTF development, and are usually applied across matching spatial scales of inputs and outputs. However, different hydrologic, hydro-climatic, and contaminant transport models require input data at different scales, all of which may not be easily available from existing databases. In such a scenario, it becomes necessary to scale the soil hydraulic parameter values estimated by PTFs to suit the model requirements. Also, uncertainties in the predictions need to be quantified to enable users to gauge the suitability of a particular dataset in their applications. Bayesian Neural Networks (BNNs) inherently provide uncertainty estimates for their outputs due to their utilization of Markov Chain Monte Carlo (MCMC) techniques. In this paper, we present a PTF methodology to estimate soil water retention characteristics built on a Bayesian framework for training of neural networks and utilizing several in situ and remotely sensed datasets jointly. The BNN is also applied across spatial scales to provide fine scale outputs when trained with coarse scale data. Our training data inputs include ground/remotely sensed soil texture, bulk density, elevation, and Leaf Area Index (LAI) at 1 km resolutions, while similar properties measured at a point scale are used as fine scale inputs. The methodology was tested at two different hydro-climatic regions.
We also tested the effect of varying the support scale of the training data for the BNNs by sequentially aggregating finer-resolution training data to coarser resolutions, and the applicability of the technique to upscaling problems. The BNN outputs are corrected for bias using a non-linear CDF-matching technique. The final results show that this Bayesian neural network approach is well suited to soil hydraulic parameter estimation across spatial scales using ground-, air-, or space-based remotely sensed geophysical parameters. Inclusion of remotely sensed data such as elevation and LAI, in addition to in situ soil physical properties, improved the estimation capabilities of the BNN-based PTF in certain conditions.
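The non-linear CDF-matching bias correction mentioned above is essentially quantile mapping; a compact sketch, under the simplifying assumption of equal-length reference samples:

```python
import numpy as np

def cdf_match(values, model_ref, obs_ref):
    """Non-linear CDF-matching bias correction (quantile mapping):
    each value is passed through the empirical CDF of `model_ref`
    and then through the inverse empirical CDF of `obs_ref`."""
    m = np.sort(model_ref)
    o = np.sort(obs_ref)
    q = np.linspace(0.0, 1.0, m.size)
    # value -> quantile under the model distribution
    quantiles = np.interp(values, m, q)
    # quantile -> value under the observed distribution
    return np.interp(quantiles, np.linspace(0.0, 1.0, o.size), o)
```

Unlike a constant offset, this mapping corrects the whole distribution, so a bias that varies with soil wetness (for example) is removed quantile by quantile.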
Systematic effects in LOD from SLR observations
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Gerstl, Michael; Hugentobler, Urs; Angermann, Detlef; Müller, Horst
2014-09-01
Besides the estimation of station coordinates and the Earth's gravity field, laser ranging observations to near-Earth satellites can be used to determine the rotation of the Earth. One parameter of this rotation is ΔLOD (excess Length Of Day), which describes the excess revolution time of the Earth w.r.t. 86,400 s. Due to correlations among the different parameter groups, it is difficult to obtain reliable estimates for all parameters. In the official ΔLOD products of the International Earth Rotation and Reference Systems Service (IERS), the ΔLOD information determined from laser ranging observations is excluded from the processing. In this paper, we study in detail the existing correlations between ΔLOD, the orbital node Ω, the even zonal gravity field coefficients, cross-track empirical accelerations, and the relativistic accelerations caused by the Lense-Thirring and de Sitter effects, using first-order Gaussian perturbation equations. We found discrepancies due to different a priori gravity field models of up to 1.0 ms for polar orbits at an altitude of 500 km, and of up to 40.0 ms if the gravity field coefficients are estimated using only observations to LAGEOS 1. If observations to LAGEOS 2 are included, reliable ΔLOD estimates can be achieved. Nevertheless, an impact of the a priori gravity field even on the multi-satellite ΔLOD estimates can be clearly identified. Furthermore, we investigate the effect of empirical cross-track accelerations and of relativistic accelerations of near-Earth satellites on ΔLOD. A total effect of 0.0088 ms is caused by unmodeled Lense-Thirring and de Sitter terms. The partial derivatives of these accelerations w.r.t. the position and velocity of the satellite cause very small variations (0.1 μs) in ΔLOD.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu
2014-03-01
We investigate a method to estimate the spectral images of reduced scattering coefficients and absorption coefficients of in vivo exposed brain tissue in the range from visible to near-infrared wavelengths (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, the multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. A Monte Carlo simulation-based multiple regression analysis for the absorbance spectra is then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression μs′(λ) = aλ^(−b) are estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters, and finally, the spectral images of the absorption and reduced scattering coefficients are estimated. The estimated images of absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of reduced scattering coefficients showed a broad scattering spectrum, exhibiting larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. In vivo experiments with the exposed brain of rats during cortical spreading depolarization (CSD) confirmed the ability of the method to evaluate both hemodynamics and changes in tissue morphology due to electrical depolarization.
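The Wiener estimation step, reconstructing a multi-band reflectance spectrum from an RGB triple, reduces to a single matrix. This is the generic textbook form, with the camera sensitivity S, spectral autocorrelation C_s, and noise covariance C_n supplied by the user; it is not the authors' calibrated system.

```python
import numpy as np

def wiener_matrix(S, C_s, C_n):
    """Wiener estimation matrix W = C_s S^T (S C_s S^T + C_n)^-1 mapping an
    RGB triple back to an N-band reflectance spectrum.  S is the 3xN camera
    spectral sensitivity, C_s the NxN autocorrelation of training spectra,
    and C_n the 3x3 sensor-noise covariance."""
    return C_s @ S.T @ np.linalg.inv(S @ C_s @ S.T + C_n)
```

The reconstruction is then s_hat = W @ rgb; in the paper the equivalent of C_s comes from a training set of measured reflectance spectra.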
Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model
NASA Technical Reports Server (NTRS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Camp, J. B.;
2016-01-01
This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al., Phys. Rev. Lett. 116, 061102 (2016)]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al., Phys. Rev. Lett. 116, 241102 (2016)], and we quote updated component masses of 35(+5/−3) and 30(+3/−4) solar masses (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with the primary spin estimated to be less than 0.65 and the secondary less than 0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.
Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime
2018-02-01
Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which yields reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model, with scripts to perform parameter estimation, will be released to the community.
Inflation in the closed FLRW model and the CMB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonga, Béatrice; Gupt, Brajesh; Yokomizo, Nelson, E-mail: bpb165@psu.edu, E-mail: bgupt@gravity.psu.edu, E-mail: yokomizo@gravity.psu.edu
2016-10-01
Recent cosmic microwave background (CMB) observations put strong constraints on the spatial curvature via estimation of the parameter Ω_k, assuming an almost scale-invariant primordial power spectrum. We study the evolution of the background geometry and gauge-invariant scalar perturbations in an inflationary closed FLRW model and calculate the primordial power spectrum. We find that the inflationary dynamics is modified due to the presence of spatial curvature, leading to corrections to the nearly scale-invariant power spectrum at the end of inflation. When evolved to the surface of last scattering, the resulting temperature anisotropy spectrum (C^TT_ℓ) shows a deficit of power at low multipoles (ℓ < 20). By comparing our results with the recent Planck data we discuss the role of spatial curvature in accounting for CMB anomalies and in the estimation of the parameter Ω_k. Since the curvature effects are limited to low multipoles, the Planck estimation of cosmological parameters remains robust under inclusion of positive spatial curvature.
NASA Astrophysics Data System (ADS)
Dondurur, Derman; Sarı, Coşkun
2004-07-01
A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. the depth, dip and surface projection point of the target. Iteration progress is controlled by a maximum relative error value, and iteration continues until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of squared errors. To form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimised are calculated by numerical differentiation using first-order forward finite differences. One theoretical and two field anomalies are used to test the accuracy and applicability of the inversion program. Inversion of the field data indicated that the depth and surface projection point of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor in the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, an assumption that may cause incorrect dip estimates in the case of wide conductors.
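The damped least-squares (Marquardt) iteration with a forward-finite-difference Jacobian can be sketched in a few lines. The Gaussian forward model below is a hypothetical stand-in for the Slingram anomaly expression, not the paper's FORTRAN 77 code; the damping-parameter update follows the sum-of-squared-errors rule described above.

```python
import numpy as np

def forward(p, x):
    # Hypothetical forward model standing in for the Slingram anomaly expression:
    # a Gaussian anomaly centred at p[0] with width p[1] and amplitude p[2].
    return p[2] * np.exp(-(((x - p[0]) / p[1]) ** 2))

def jacobian(p, x, h=1e-6):
    # First-order forward finite differences, as in the paper
    f0 = forward(p, x)
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        pj = p.copy()
        pj[j] += h
        J[:, j] = (forward(pj, x) - f0) / h
    return J

def marquardt(p, x, d, lam=1e-2, iters=100, tol=1e-10):
    # Damped least-squares: the sum of squared errors controls the damping
    sse = np.sum((d - forward(p, x)) ** 2)
    for _ in range(iters):
        J = jacobian(p, x)
        res = d - forward(p, x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ res)
        new_sse = np.sum((d - forward(p + step, x)) ** 2)
        if new_sse < sse:
            p, sse, lam = p + step, new_sse, lam / 10.0   # accept, relax damping
        else:
            lam *= 10.0                                    # reject, increase damping
        if sse < tol:
            break
    return p

x = np.linspace(-5.0, 5.0, 101)
true_p = np.array([0.5, 1.5, 2.0])        # centre, width, amplitude
p_est = marquardt(np.array([0.0, 1.0, 1.0]), x, forward(true_p, x))
print(p_est)
```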
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
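A minimal sketch of the CRN idea on a toy birth-death network (not one of the paper's example systems): driving the nominal and perturbed SSA runs with the same random stream makes the two paths positively correlated, so the finite-difference sensitivity estimator has much smaller variance than with independent streams.

```python
import numpy as np

def ssa(k, g, x0, T, rng):
    # Gillespie SSA for the birth-death network: 0 -> X at rate k, X -> 0 at rate g*x
    x, t = x0, 0.0
    while True:
        a_birth, a_death = k, g * x
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)
        if t > T:
            return x
        x += 1 if rng.random() < a_birth / a0 else -1

def fd_sensitivity(k, g, h, n_runs, crn):
    # Finite-difference estimate of d E[X(T)] / dk at T = 5
    diffs = []
    for i in range(n_runs):
        seed = i if crn else None            # CRN: identical stream for both runs
        r_nom = np.random.default_rng(seed)
        r_pert = np.random.default_rng(seed)
        diffs.append((ssa(k + h, g, 0, 5.0, r_pert) - ssa(k, g, 0, 5.0, r_nom)) / h)
    return float(np.mean(diffs)), float(np.std(diffs))

m_crn, s_crn = fd_sensitivity(10.0, 1.0, 0.5, 400, crn=True)
m_irn, s_irn = fd_sensitivity(10.0, 1.0, 0.5, 400, crn=False)
print(f"CRN estimate {m_crn:.2f} (sd {s_crn:.2f}); independent-stream sd {s_irn:.2f}")
```

The CRP method described in the paper couples the runs more tightly still, by sharing the Poisson streams of the random time change representation per reaction channel rather than one global stream.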
Hong Su An; David W. MacFarlane; Christopher W. Woodall
2012-01-01
Standing dead trees are an important component of forest ecosystems. However, reliable estimates of standing dead tree population parameters can be difficult to obtain due to their low abundance and spatial and temporal variation. After 1999, the Forest Inventory and Analysis (FIA) Program began collecting data for standing dead trees at the Phase 2 stage of sampling....
Reboussin, Beth A.; Ialongo, Nicholas S.
2011-01-01
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood, with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates, and simplifying assumptions about the correlation structure at each stage reduce the computational complexity. PMID:21461139
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (−1.96 dB) when compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with an unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
NASA Astrophysics Data System (ADS)
Hu, Shun; Shi, Liangsheng; Zha, Yuanyuan; Williams, Mathew; Lin, Lin
2017-12-01
Improvements to agricultural water and crop management require detailed information on crop and soil states and their evolution. Data assimilation provides an attractive way of obtaining this information by integrating measurements with a model in a sequential manner. However, data assimilation for the soil-water-atmosphere-plant (SWAP) system still lacks comprehensive exploration due to the large number of variables and parameters in the system. In this study, simultaneous state-parameter estimation using an ensemble Kalman filter (EnKF) was employed to evaluate data assimilation performance and provide advice on measurement design for the SWAP system. The results demonstrated that a proper selection of the state vector is critical to effective data assimilation. In particular, updating the development stage was able to avoid the negative effect of "phenological shift", which was caused by contrasting phenological stages in different ensemble members. The simultaneous state-parameter estimation (SSPE) assimilation strategy outperformed the updating-state-only (USO) strategy because of its ability to alleviate the inconsistency between model variables and parameters. However, the performance of the SSPE strategy could deteriorate with an increasing number of uncertain parameters as a result of soil stratification and limited knowledge of crop parameters. In addition to the most easily available surface soil moisture (SSM) and leaf area index (LAI) measurements, deep soil moisture, grain yield or other auxiliary data were required to provide sufficient constraints on parameter estimation and to assure data assimilation performance. This study provides insight into the response of soil moisture and grain yield to data assimilation in the SWAP system and is helpful for soil moisture movement and crop growth modeling and for measurement design in practice.
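The simultaneous state-parameter idea can be sketched with a toy scalar model: the uncertain parameter is appended to the state vector, so the EnKF analysis step corrects both from observations of the state alone, via their ensemble cross-covariance. The linear model below is a hypothetical stand-in for SWAP, not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 200, 40
a_true, q, r = 0.9, 0.02, 0.1          # true parameter, process and obs variance

# Augmented ensemble: row 0 holds the state, row 1 the uncertain parameter a
ens = np.stack([rng.normal(0.5, 0.2, N), rng.normal(0.5, 0.2, N)])

x_true = 1.0
for _ in range(steps):
    # Truth and a synthetic observation of the state only
    x_true = a_true * x_true + 1.0 + rng.normal(0.0, np.sqrt(q))
    y = x_true + rng.normal(0.0, np.sqrt(r))

    # Forecast: each member evolves with its own parameter; a itself persists
    ens[0] = ens[1] * ens[0] + 1.0 + rng.normal(0.0, np.sqrt(q), N)

    # Analysis: scalar-observation Kalman update of the augmented vector
    C = np.cov(ens)                                 # 2x2 ensemble covariance
    K = C[:, 0] / (C[0, 0] + r)                     # gain for obs operator H = [1, 0]
    innov = y + rng.normal(0.0, np.sqrt(r), N) - ens[0]   # perturbed observations
    ens += np.outer(K, innov)

print(f"estimated a = {ens[1].mean():.3f} (true {a_true})")
```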
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, C; Jiang, R; Chow, J
2015-06-15
Purpose: We developed a method to predict the change of the DVH for the PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of the PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs up to 1 cm in 10 increments in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of the PTV in each replan was then fitted by the GEF to determine parameters describing the shape of the curve. Information on these parameters, varying with the DVH change due to prostate motion for different prostate sizes, was analyzed and stored in a database by a program written in MATLAB. Results: To predict a new DVH for the PTV due to prostate interfraction motion, the prostate size and the shift distance and direction were input to the program. Parameters modelling the DVH for the PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of the DVH for the PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computation is fast because a CT rescan and replan are not required. This quick DVH estimation can help radiation staff determine whether the changed PTV coverage due to a prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along the three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
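A sketch of the GEF fit on synthetic DVH points (illustrative numbers, not the authors' MATLAB program): the cumulative DVH is modelled as 50·erfc((D − D50)/(σ√2)), which stays near 100% at low dose and falls through 50% at D50, and the two shape parameters are recovered by least squares, here via a simple grid search.

```python
import math
import numpy as np

def gef_dvh(d, d50, sigma):
    # Cumulative DVH modelled by the Gaussian error function (GEF)
    erfc = np.vectorize(math.erfc)
    return 50.0 * erfc((d - d50) / (math.sqrt(2.0) * sigma))

# Synthetic "planned" PTV DVH samples (hypothetical values, not clinical data)
dose = np.linspace(60.0, 80.0, 41)                      # Gy
volume = gef_dvh(dose, 71.0, 1.8)                       # % volume

# Least-squares fit of (D50, sigma) by grid search, standing in for whatever
# optimiser the authors used in MATLAB
grid_d50 = np.arange(69.0, 73.0, 0.05)
grid_sig = np.arange(1.0, 3.0, 0.05)
sse = [(np.sum((gef_dvh(dose, a, b) - volume) ** 2), a, b)
       for a in grid_d50 for b in grid_sig]
best = min(sse)
print(f"fitted D50 = {best[1]:.2f} Gy, sigma = {best[2]:.2f} Gy")
```

Storing (D50, σ) per prostate size and shift is what makes the later prediction step a table lookup rather than a replan.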
Estimation of the health and economic burden of neurocysticercosis in India.
Singh, B B; Khatkar, M S; Gill, J P S; Dhand, N K
2017-01-01
Taenia solium is an endemic parasite in India which occurs in two forms in humans: cysticercosis (infection of soft tissues) and taeniosis (intestinal infection). Neurocysticercosis (NCC) is the most severe form of cysticercosis, in which cysts develop in the central nervous system. This study was conducted to estimate the health and economic impact of human NCC-associated active epilepsy in India. Input data were sourced from published research literature, census data and other official records. Economic losses due to NCC-associated active epilepsy were estimated based on the cost of treatment, hospitalisation and severe injury, as well as loss of income. The disability-adjusted life years (DALYs) due to NCC were estimated by combining the years of life lost due to early death and the number of years compromised due to disability, taking the disease incidence into account. DALYs were estimated for five age groups, two genders and four regions, and then combined. To account for uncertainty, probability distributions were used for disease incidence data and other input parameters. In addition, sensitivity analyses were conducted to determine the impact of certain input parameters on the health and economic estimates. It was estimated that in 2011, human NCC-associated active epilepsy caused an annual median loss of Rupees 12.03 billion (95% uncertainty interval [UI] Rs. 9.16-15.57 billion; US $185.14 million), with losses of Rs. 9.78 billion (95% UI Rs. 7.24-13.0 billion; US $150.56 million) from the North and Rs. 2.22 billion (95% UI Rs. 1.58-3.06 billion; US $34.14 million) from the South. The disease resulted in a total of 2.10 million (95% UI 0.99-4.10 million) DALYs per annum without age weighting and time discounting, with 1.81 million (95% UI 0.84-3.57 million) DALYs from the North and 0.28 million (95% UI 0.13-0.55 million) from the South. The health burden per thousand persons per year was 1.73 DALYs (95% UI 0.82-3.39).
The results indicate that human NCC causes significant health and economic impact in India. Programs for controlling the disease should be initiated to reduce the socio-economic impact of the disease in India. Copyright © 2016 Elsevier B.V. All rights reserved.
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access (TD-SCDMA) systems, increasing system capacity by inserting the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator. This article presents a novel channel estimation method with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on a truncated singular value decomposition (SVD) algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results for the normalised mean square error showed the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
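A sketch of the reduced-rank idea using truncated SVD (toy dimensions, not TD-SCDMA structure): discarding the near-zero singular values of an ill-conditioned system matrix trades a small bias for a large reduction in noise amplification in the least-squares estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned system matrix standing in for the joint channel matrix H:
# effective rank 3 embedded in a 20x8 system (hypothetical sizes)
U, _, Vt = np.linalg.svd(rng.normal(size=(20, 8)), full_matrices=False)
A = U @ np.diag([10.0, 5.0, 2.0, 1e-4, 1e-4, 1e-5, 1e-5, 1e-6]) @ Vt

h_true = rng.normal(size=8)
y = A @ h_true + 0.01 * rng.normal(size=20)

def tsvd_solve(A, y, rank):
    # Reduced-rank least-squares estimate via truncated SVD
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:rank].T @ ((U[:, :rank].T @ y) / s[:rank])

h_full = np.linalg.lstsq(A, y, rcond=None)[0]     # full-rank LS amplifies noise
h_red = tsvd_solve(A, y, rank=3)

err = lambda h: np.linalg.norm(h - h_true)
print(f"full-rank error {err(h_full):.2e}, rank-3 error {err(h_red):.2e}")
```

The truncation rank plays the role of the reduced rank order of H in the paper; in practice it is chosen from the decay of the singular value spectrum.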
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. 
Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D
2015-01-01
DPOP (ΔPOP or Delta-POP) is a noninvasive parameter which measures the strength of respiratory modulations present in the pulse oximeter waveform. It has been proposed as a noninvasive alternative to pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. We considered a number of simple techniques for better determining the underlying relationship between the two parameters. It was shown numerically that baseline-induced signal errors were asymmetric in nature, which corresponded to observation, and we proposed a method which combines a least-median-of-squares estimator with the requirement that the relationship passes through the origin (the LMSO method). We further developed a method of normalization of the parameters through rescaling DPOP using the inverse gradient of the linear fitted relationship. We propose that this normalization method (LMSO-N) is applicable to the matching of a wide range of clinical parameters. It is also generally applicable to the self-normalizing of parameters whose behaviour may change slightly due to algorithmic improvements. PMID:25691912
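A minimal sketch of the LMSO method on synthetic data (hypothetical values, not the study's clinical recordings): the slope through the origin is chosen to minimise the median squared residual, which keeps a minority of asymmetric baseline-induced outliers from biasing the fit, and DPOP is then normalised by the inverse gradient (LMSO-N).

```python
import numpy as np

def lmso(x, y):
    # Least-median-of-squares fit constrained through the origin (LMSO):
    # pick the slope minimising the median squared residual. Candidate
    # slopes are the point-wise ratios y_i / x_i.
    slopes = y / x
    meds = [np.median((y - m * x) ** 2) for m in slopes]
    return slopes[int(np.argmin(meds))]

rng = np.random.default_rng(3)
ppv = rng.uniform(5.0, 25.0, 60)              # PPV-like values (synthetic, %)
dpop = 1.4 * ppv + rng.normal(0, 0.8, 60)     # DPOP with measurement noise
dpop[:6] += 15.0                              # asymmetric baseline-induced outliers

m = lmso(ppv, dpop)
dpop_norm = dpop / m        # LMSO-N: rescale DPOP by the inverse gradient
print(f"LMSO gradient = {m:.2f}")   # typically close to the true 1.4
```

An ordinary least-squares slope on the same data would be pulled upward by the six contaminated points; the median criterion effectively ignores them.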
SPIPS: Spectro-Photo-Interferometry of Pulsating Stars
NASA Astrophysics Data System (ADS)
Mérand, Antoine
2017-10-01
SPIPS (Spectro-Photo-Interferometry of Pulsating Stars) combines radial velocimetry, interferometry, and photometry to estimate physical parameters of pulsating stars, including the presence of infrared excess, color excess, Teff, and the distance/p-factor ratio. The global model-based parallax-of-pulsation method is implemented in Python. The derived parameters have a high level of confidence: statistical precision is improved (compared to other methods) due to the large number of data taken into account, accuracy is improved by using consistent physical modeling, and the reliability of the derived parameters is strengthened by redundancy in the data.
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance ΔV due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this ΔV using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the ΔV savings from the feasible solution to the optimal solution.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three approaches: the covariance matrix, the jackknife method, and direct evaluation of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves resulting from parameter sets within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP.
It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
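A likelihood-landscape confidence interval of the kind favoured here can be illustrated on synthetic binomial dose-response data. The logistic curve and the numbers below are illustrative stand-ins, not the paper's critical-volume model; the 95% interval keeps all values within 1.92 negative log-likelihood units of the profile minimum (half the χ²₁ 95% quantile).

```python
import numpy as np

# Synthetic binomial dose-response data: n subjects per dose level, r responders
dose = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
n = np.full(5, 20)
r = np.array([1, 4, 10, 16, 19])

def ntcp(d, d50, k):
    # Logistic dose-response curve standing in for an NTCP model
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

def nll(d50, k):
    # Binomial negative log-likelihood (constant terms dropped)
    p = np.clip(ntcp(dose, d50, k), 1e-9, 1.0 - 1e-9)
    return -np.sum(r * np.log(p) + (n - r) * np.log(1.0 - p))

# Profile likelihood for D50: minimise over the nuisance parameter k on a grid,
# then keep all D50 values within 1.92 units of the minimum (95% CI)
d50s = np.linspace(50.0, 70.0, 201)
prof = np.array([min(nll(a, k) for k in np.linspace(0.01, 0.5, 100)) for a in d50s])
inside = d50s[prof <= prof.min() + 1.92]
print(f"D50 = {d50s[prof.argmin()]:.1f} Gy, 95% CI [{inside[0]:.1f}, {inside[-1]:.1f}] Gy")
```

Unlike a covariance-matrix (Wald) interval, this interval follows any asymmetry of the likelihood surface, which is the point of the paper's conclusion.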
qPIPSA: Relating enzymatic kinetic parameters and interaction fields
Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C
2007-01-01
Background: The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available, and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results: We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion: Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes.
Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
Bhalla, Kavi; Harrison, James E
2016-04-01
Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex; however, as commonly implemented, they involve complex modelling and estimation. Our aim was to provide a simple, open-source software tool that allows estimation of incidence-based DALYs due to injury, given data on the incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel; all calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data, organised by age, sex and external cause of injury, into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALY estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
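The basic incidence-based DALY arithmetic such a calculator performs can be sketched as follows. Numbers are illustrative only; the real tool stratifies by age, sex and external cause and uses published parameter sets.

```python
# Incidence-based DALYs without age weighting or time discounting:
# DALY = YLL + YLD
def dalys(deaths, life_expectancy_at_death,
          incident_cases, disability_weight, duration_years):
    yll = deaths * life_expectancy_at_death                    # years of life lost
    yld = incident_cases * disability_weight * duration_years  # years lived with disability
    return yll + yld

# Example: one hypothetical age/sex/cause stratum
total = dalys(deaths=120, life_expectancy_at_death=35.0,
              incident_cases=5000, disability_weight=0.2, duration_years=0.5)
print(total)   # 120*35 + 5000*0.2*0.5 = 4700.0
```

Summing this quantity over all strata in the pasted table gives the tool's headline estimate.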
Daneshmand, Saeed; Jahromi, Ali Jafarnia; Broumandan, Ali; Lachapelle, Gérard
2015-01-01
The use of Space-Time Processing (STP) in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its effectiveness for both narrowband and wideband interference suppression. However, the resulting distortion and bias on the cross correlation functions due to space-time filtering is a major limitation of this technique. Employing the steering vector of the GNSS signals in the filter structure can significantly reduce the distortion on cross correlation functions and lead to more accurate pseudorange measurements. This paper proposes a two-stage interference mitigation approach in which the first stage estimates an interference-free subspace before the acquisition and tracking phases and projects all received signals into this subspace. The next stage estimates array attitude parameters based on detecting and employing GNSS signals that are less distorted due to the projection process. Attitude parameters enable the receiver to estimate the steering vector of each satellite signal and use it in the novel distortionless STP filter to significantly reduce distortion and maximize Signal-to-Noise Ratio (SNR). GPS signals were collected using a six-element antenna array under open sky conditions to first calibrate the antenna array. Simulated interfering signals were then added to the digitized samples in software to verify the applicability of the proposed receiver structure and assess its performance for several interference scenarios. PMID:26016909
CTER-rapid estimation of CTF parameters with error assessment.
Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T
2014-05-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance both for the initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new-generation cameras, it is also important that CTF estimation can be done rapidly and with minimal user intervention. Finally, to minimize the need for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of the estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal-space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
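The bootstrap error assessment can be sketched generically: resample a set of per-region parameter estimates with replacement and report the spread of the resampled statistic. The defocus values below are hypothetical, not CTER outputs.

```python
import numpy as np

def bootstrap_sd(estimates, n_boot=2000, seed=0):
    # Bootstrap standard deviation of a fitted parameter: resample the
    # per-region estimates with replacement, track the spread of the mean
    rng = np.random.default_rng(seed)
    estimates = np.asarray(estimates)
    idx = rng.integers(0, estimates.size, size=(n_boot, estimates.size))
    return float(np.std(estimates[idx].mean(axis=1)))

# Hypothetical per-region defocus estimates from one micrograph (micrometres)
defocus = np.array([1.91, 1.87, 1.95, 1.90, 1.88, 1.93, 1.89, 1.92])
print(f"defocus = {defocus.mean():.3f} +/- {bootstrap_sd(defocus):.3f} um")
```

A micrograph whose bootstrap standard deviation is unusually large can then be flagged automatically instead of inspected by eye.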
Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda
2018-01-01
Migration studies of chemicals from contact materials have been widely conducted due to their importance in determining the safety and shelf life of a packaged food product. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. Migration experiments are therefore theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and to determine the mass transfer partition and diffusion coefficients governing the migration of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess whether a number of mass transfer parameters could be estimated simultaneously. An optimal experimental design approach was performed to show the importance of properly designing a migration experiment. Additional parameters also provide better insights into the migration of the antioxidants. For example, the partition coefficients could be better estimated using data from the early part of the experiment instead of from the end. Experiments could therefore be conducted for shorter periods, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 and 19×10⁻¹⁴ m²/s at ~40°C. The parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
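The spacing guideline above (traps ideally no more than twice the spatial scale parameter apart) can be made concrete with the half-normal detection function commonly used in spatial capture-recapture models; the baseline detection probability and sigma below are illustrative values, not estimates from the simulations:

```python
import math

def detection_prob(d, p0=0.3, sigma=600.0):
    """Half-normal detection function common in SCR models: encounter
    probability decays with distance d (m) between a trap and an
    individual's activity center; sigma is the spatial scale parameter."""
    return p0 * math.exp(-d ** 2 / (2 * sigma ** 2))

# At the recommended maximum trap spacing of 2*sigma, detection has
# already fallen to a small fraction of its value at the activity center.
p_center = detection_prob(0.0)
p_2sigma = detection_prob(2 * 600.0)
```

Spacing traps much farther than `2 * sigma` leaves individuals whose activity centers fall between traps nearly undetectable, which degrades estimates of both density and sigma.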
NASA Technical Reports Server (NTRS)
Wang, Shugong; Liang, Xu
2013-01-01
A new approach is presented in this paper to effectively obtain parameter estimates for the Multiscale Kalman Smoother (MKS) algorithm. This new approach shows promising potential for deriving better data products from data of different spatial scales and precisions. It employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter), rather than the conventional maximum likelihood scheme (called the ML scheme), to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not built simply on strict statistical assumptions about prediction errors and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions when searching for the best MKS parameter estimates through optimization. In the MO scheme, objective functions are defined to facilitate consistency between the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS-fused precipitation performs better under the MO scheme than under the ML scheme. In particular, improvements over the ML scheme are significant for the fused precipitation at fine spatial resolutions. This is mainly due to the MO scheme involving more criteria and constraints than the ML scheme. The weakness of the original ML scheme, which blindly puts more weight on the data associated with finer resolutions, is overcome in our new approach.
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
The aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimate is more accurate, thus greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
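The mixture idea can be sketched as a weighted sum of two-parameter Weibull reliability functions, one per failure mode. The failure-mode parameters and weights below are hypothetical, and the sketch omits the dynamic weight coefficient and the correlation-coefficient optimization described above:

```python
import math

def weibull_reliability(t, shape, scale):
    """Two-parameter Weibull reliability R(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def mixed_weibull_reliability(t, components):
    """Mixture over failure modes: R(t) = sum_i w_i * R_i(t),
    with weight coefficients w_i summing to 1."""
    return sum(w * weibull_reliability(t, k, lam) for w, k, lam in components)

# Hypothetical failure modes: early failures (shape < 1) and wear-out (shape > 1)
modes = [(0.3, 0.8, 500.0), (0.7, 2.5, 2000.0)]
R_1000 = mixed_weibull_reliability(1000.0, modes)   # reliability at 1000 h
```

Because each component has its own shape parameter, the mixture can reproduce the bathtub-like behavior that no single Weibull model can fit.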
The Effect of Clustering on Estimations of the UV Ionizing Background from the Proximity Effect
NASA Astrophysics Data System (ADS)
Pascarelle, S. M.; Lanzetta, K. M.; Chen, H. W.
1999-09-01
There have been several determinations of the ionizing background using the proximity effect observed in the distribution of Lyman-alpha absorption lines in the spectra of QSOs at high redshift. It is usually assumed that the distribution of lines should be the same at very small impact parameters to the QSO as at large impact parameters, and that any decrease in line density at small impact parameters is due to ionizing radiation from the QSO. However, if these Lyman-alpha absorption lines arise in galaxies (Lanzetta et al. 1995, Chen et al. 1998), then the strength of the proximity effect may have been underestimated in previous work, since galaxies are known to cluster around QSOs. Previous estimates of the UV background have therefore likely been overestimated by the same factor.
Rhodes, Samhita S; Camara, Amadou KS; Ropella, Kristina M; Audi, Said H; Riess, Matthias L; Pagel, Paul S; Stowe, David F
2006-01-01
Background The phase-space relationship between simultaneously measured myoplasmic [Ca2+] and isovolumetric left ventricular pressure (LVP) in guinea pig intact hearts is altered by ischemic and inotropic interventions. Our objective was to mathematically model this phase-space relationship between [Ca2+] and LVP with a focus on the changes in cross-bridge kinetics and myofilament Ca2+ sensitivity responsible for alterations in Ca2+-contraction coupling due to inotropic drugs in the presence and absence of ischemia reperfusion (IR) injury. Methods We used a four-state computational model to predict LVP using experimentally measured, averaged myoplasmic [Ca2+] transients from unpaced, isolated guinea pig hearts as the model input. Values of model parameters were estimated by minimizing the error between experimentally measured LVP and model-predicted LVP. Results We found that IR injury resulted in reduced myofilament Ca2+ sensitivity and decreased cross-bridge association and dissociation rates. Dopamine (8 μM) reduced myofilament Ca2+ sensitivity before, but enhanced it after, ischemia while improving cross-bridge kinetics before and after IR injury. Dobutamine (4 μM) reduced myofilament Ca2+ sensitivity while improving cross-bridge kinetics before and after ischemia. Digoxin (1 μM) increased myofilament Ca2+ sensitivity and cross-bridge kinetics after but not before ischemia. Levosimendan (1 μM) enhanced myofilament Ca2+ affinity and cross-bridge kinetics only after ischemia. Conclusion Estimated model parameters reveal mechanistic changes in Ca2+-contraction coupling due to IR injury, specifically the inefficient utilization of Ca2+ for contractile function with diastolic contracture (an increase in resting diastolic LVP). The model parameters also reveal drug-induced improvements in Ca2+-contraction coupling before and after IR injury. PMID:16512898
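The fitting step, estimating parameters by minimizing the error between measured and model-predicted LVP, can be sketched with a toy one-parameter model. The Hill-type pressure function and the grid search below are stand-ins for illustration, not the paper's four-state model or optimizer:

```python
import numpy as np

def predicted_lvp(ca, sensitivity):
    """Toy stand-in for the four-state model: pressure rises as a
    Hill-type function of [Ca2+]; 'sensitivity' is the half-activation
    calcium level (lower value = higher myofilament Ca2+ sensitivity)."""
    return 100.0 * ca ** 2 / (ca ** 2 + sensitivity ** 2)

def fit_sensitivity(ca, lvp_measured, grid):
    """Grid-search least squares: pick the parameter value minimizing
    the sum of squared errors between measured and predicted LVP."""
    sse = [np.sum((lvp_measured - predicted_lvp(ca, s)) ** 2) for s in grid]
    return grid[int(np.argmin(sse))]

ca = np.linspace(0.1, 1.0, 50)            # synthetic Ca2+ transient (uM)
lvp = predicted_lvp(ca, 0.4)              # "measured" LVP from true s = 0.4
s_hat = fit_sensitivity(ca, lvp, np.linspace(0.1, 1.0, 91))
```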
Dosimetric variations due to interfraction organ deformation in cervical cancer brachytherapy.
Kobayashi, Kazuma; Murakami, Naoya; Wakita, Akihisa; Nakamura, Satoshi; Okamoto, Hiroyuki; Umezawa, Rei; Takahashi, Kana; Inaba, Koji; Igaki, Hiroshi; Ito, Yoshinori; Shigematsu, Naoyuki; Itami, Jun
2015-12-01
We quantitatively estimated dosimetric variations due to interfraction organ deformation in multi-fractionated high-dose-rate brachytherapy (HDRBT) for cervical cancer using a novel surface-based non-rigid deformable registration. As the number of consecutive HDRBT fractions increased, simple addition of dose-volume histogram parameters significantly overestimated the dose, compared with distribution-based dose addition. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.
2018-06-01
Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessment of the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation is obtained as Qβ(f) = (152.9 ± 7)f^(0.82±0.005) by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral (low-frequency spectral level and corner frequency) and source (static stress drop, seismic moment, apparent stress, and radiated energy) parameters are obtained assuming an ω⁻² source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop, and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focused earthquakes with low stress drop. The estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by the partial stress drop and low effective stress models. The presence of subsurface fluid at seismogenic depth certainly modulates the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking as much precaution as possible in considering the effects of finite bandwidth, attenuation, and site corrections. The scaling can be improved further with the integration of a large dataset of microearthquakes and the use of a stable and robust approach.
Diffusion Weighted Image Denoising Using Overcomplete Local PCA
Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2013-01-01
Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
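The core idea of shrinking less significant principal components can be sketched on a single local patch matrix. The eigenvalue threshold, patch layout, and synthetic rank-1 signal below are simplified assumptions, not the published overcomplete filter:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_shrink(patch_matrix, noise_sigma):
    """Denoise an (n_patches x n_components) matrix by zeroing
    principal components whose eigenvalues fall below a
    noise-driven threshold (a heuristic stand-in)."""
    mean = patch_matrix.mean(axis=0)
    X = patch_matrix - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = s ** 2 / (X.shape[0] - 1)
    keep = eigvals > (2.3 * noise_sigma) ** 2     # illustrative threshold
    s_shrunk = np.where(keep, s, 0.0)
    return (U * s_shrunk) @ Vt + mean

# Low-rank "signal" (64 voxels x 16 gradient directions) plus Gaussian noise
signal = np.outer(rng.normal(size=64), np.ones(16)) * 5.0
noisy = signal + rng.normal(scale=1.0, size=signal.shape)
denoised = pca_shrink(noisy, noise_sigma=1.0)
err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

Because the diffusion signal across directions is locally low-rank while the noise spreads over all components, discarding the small-eigenvalue components removes mostly noise.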
Estimation of cardiac conductivities in ventricular tissue by a variational approach
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Veneziani, Alessandro
2015-11-01
The bidomain model is the current standard model to simulate cardiac potential propagation. The numerical solution of this system of partial differential equations strongly depends on the model parameters, and in particular on the cardiac conductivities. Unfortunately, it is quite problematic to measure these parameters in vivo, and even more so in clinical practice, resulting in no common agreement in the literature. In this paper we consider a variational data assimilation approach to estimating those parameters. We consider the parameters as control variables to minimize the mismatch between the computed and the measured potentials under the constraint of the bidomain system. The existence of a minimizer of the misfit function is proved with the phenomenological Rogers-McCulloch ionic model, which completes the bidomain system. We significantly improve on the numerical approaches in the literature by resorting to a derivative-based optimization method, resolving some challenges due to discontinuity. The improvement in computational efficiency is confirmed by a 2D test as a direct comparison with approaches in the literature. The core of our numerical results is in 3D, on both idealized and real geometries, with the minimal ionic model. We demonstrate the reliability and the stability of the conductivity estimation approach in the presence of noise and with an imperfect knowledge of other model parameters.
Estimating tag loss of the Atlantic Horseshoe crab, Limulus polyphemus, using a multi-state model
Butler, Catherine Alyssa; McGowan, Conor P.; Grand, James B.; Smith, David
2012-01-01
The Atlantic Horseshoe crab, Limulus polyphemus, is a valuable resource along the Mid-Atlantic coast which has, in recent years, experienced new management paradigms due to increased concern about this species' role in the environment. While current management actions are underway, many acknowledge the need for improved and updated parameter estimates to reduce the uncertainty within the management models. Specifically, updated and improved estimates of demographic parameters such as adult crab survival in the regional population of interest, Delaware Bay, could greatly enhance these models and improve management decisions. There is, however, some concern that difficulties in tag resighting or complete loss of tags could be occurring. As is apparent from the assumptions of a Jolly-Seber model, loss of tags can bias the estimate and lead to an underestimate of the survival rate. Given that uncertainty, as a first step toward obtaining an unbiased estimate of adult survival, we estimated the rate of tag loss. Using data from a double-tag mark-resight study conducted in Delaware Bay and Program MARK, we designed a multi-state model to allow for the estimation of the mortality of each tag separately and simultaneously.
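A much simpler closed-form relative of the multi-state approach shows how double-tagging data inform tag retention, under the (strong) assumption that both tags are lost independently at equal rates; the resight counts below are hypothetical:

```python
def tag_retention_estimate(n_two_tags, n_one_tag):
    """Closed-form retention estimate from a double-tagging study,
    assuming independent, equal loss for both tags: among resighted
    animals retaining at least one tag, P(both) = r^2 and
    P(exactly one) = 2 r (1 - r), giving r = 2*n2 / (2*n2 + n1)."""
    return 2 * n_two_tags / (2 * n_two_tags + n_one_tag)

# Hypothetical resight counts from a double-tag study
r_hat = tag_retention_estimate(n_two_tags=180, n_one_tag=40)
```

The multi-state model in the abstract generalizes this by letting the two tag types have separate, time-varying loss rates rather than a single shared `r`.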
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via the calculation of a likelihood function (the degree of goodness-of-fit between simulated and measured data).
In our research, different likelihood function formulations were used in order to examine the effect of different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, a sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found to be responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
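The RMSE-based likelihood formulations described above might be written as follows. The exact weighting used in the study is not specified here, so these functional forms (with `sigma` as the measurement-uncertainty weight) are illustrative assumptions:

```python
import math

def rmse(sim, obs):
    """Root mean squared error between simulated and measured data."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def likelihood_exponential(sim, obs, sigma):
    return math.exp(-rmse(sim, obs) / sigma)

def likelihood_linear(sim, obs, sigma):
    return max(0.0, 1.0 - rmse(sim, obs) / sigma)

def likelihood_quadratic(sim, obs, sigma):
    return max(0.0, 1.0 - (rmse(sim, obs) / sigma) ** 2)

obs = [1.0, 2.0, 3.0]
good = [1.1, 2.0, 2.9]      # close to the measurements
bad = [2.0, 3.0, 4.0]       # systematically offset
L_good = likelihood_exponential(good, obs, sigma=1.0)
L_bad = likelihood_exponential(bad, obs, sigma=1.0)
```

The exponential form never reaches zero, so even poor parameter sets retain a small but nonzero likelihood, which is one plausible reason for its robustness in the calibration.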
Phase History Decomposition for efficient Scatterer Classification in SAR Imagery
2011-09-15
Spitzer Instrument Pointing Frame (IPF) Kalman Filter Algorithm
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.
2004-01-01
This paper discusses the Spitzer Instrument Pointing Frame (IPF) Kalman Filter algorithm. The IPF Kalman filter is a high-order square-root iterated linearized Kalman filter, which is parametrized for calibrating the Spitzer Space Telescope focal plane and aligning the science instrument arrays with respect to the telescope boresight. The most stringent calibration requirement specifies knowledge of certain instrument pointing frames to an accuracy of 0.1 arcseconds, per-axis, 1-sigma relative to the Telescope Pointing Frame. In order to achieve this level of accuracy, the filter carries 37 states to estimate desired parameters while also correcting for expected systematic errors due to: (1) optical distortions, (2) scanning mirror scale-factor and misalignment, (3) frame alignment variations due to thermomechanical distortion, and (4) gyro bias and bias-drift in all axes. The resulting estimated pointing frames and calibration parameters are essential for supporting on-board precision pointing capability, in addition to end-to-end 'pixels on the sky' ground pointing reconstruction efforts.
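A far simpler relative of the 37-state IPF filter, a scalar Kalman filter estimating a single constant parameter such as a gyro bias, illustrates the measurement-update mechanics; the bias value and noise level below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_constant(measurements, r, p0=1.0):
    """Scalar Kalman filter for a constant state (e.g., a gyro bias):
    the prediction step is trivial, so each iteration is just the
    measurement update with gain k = p / (p + r)."""
    x, p = 0.0, p0
    for z in measurements:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # state update toward the innovation
        p = (1.0 - k) * p        # covariance shrinks as data accumulate
    return x, p

true_bias = 0.25
z = true_bias + rng.normal(scale=0.1, size=500)   # noisy observations
x_hat, p_final = kalman_constant(z, r=0.1 ** 2)
```

The IPF filter extends this pattern to 37 coupled states with square-root factorization and iterated relinearization for numerical robustness, but the gain/update/covariance cycle is the same.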
NASA Astrophysics Data System (ADS)
Lin, Alexander; Johnson, Lindsay C.; Shokouhi, Sepideh; Peterson, Todd E.; Kupinski, Matthew A.
2015-03-01
In synthetic-collimator SPECT imaging, two detectors are placed at different distances behind a multi-pinhole aperture. This configuration allows for image detection at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. Image multiplexing, the undesired overlapping between images due to photon-origin uncertainty, may occur in both detector planes and is often present in the second detector plane due to its greater magnification. However, artifact-free image reconstruction is possible by combining data from both the front detector (little to no multiplexing) and the back detector (noticeable multiplexing). When the two detectors are used in tandem, spatial resolution is increased, allowing for a higher sensitivity-to-detector-area ratio. Due to the variability in detector distances and pinhole spacings found in synthetic-collimator SPECT systems, a large parameter space must be examined to determine optimal imaging configurations. We chose to assess image quality based on the task of estimating activity in various regions of a mouse brain. Phantom objects were simulated using mouse brain data from the Magnetic Resonance Microimaging Neurological Atlas (MRM NeAt) and projected at different angles through models of a synthetic-collimator SPECT system developed by collaborators at Vanderbilt University. Uptake in the different brain regions was modeled as being normally distributed about predetermined means and variances. We computed the performance of the Wiener estimator for the task of estimating activity in different regions of the mouse brain. Our results demonstrate the utility of the method for optimizing synthetic-collimator system design.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
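The Latin hypercube sampling step used to build the 1000-member perturbed parameter ensemble can be sketched on the unit hypercube; mapping each column onto an actual parameter's range is omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n_samples, n_params, rng=rng):
    """Latin hypercube sample on the unit hypercube: each parameter's
    range is split into n_samples equal strata with exactly one draw
    per stratum, and strata are shuffled independently per parameter."""
    strata = (rng.random((n_samples, n_params))
              + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

samples = latin_hypercube(1000, 5)   # e.g., a 1000-member, 5-parameter ensemble
```

Compared with plain random sampling, this guarantees that every marginal range of every parameter is covered, which matters when each ensemble member is an expensive DGVM run.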
Parameter Estimation of a Spiking Silicon Neuron
Russell, Alexander; Mazurek, Kevin; Mihalaş, Stefan; Niebur, Ernst; Etienne-Cummings, Ralph
2012-01-01
Spiking neuron models are used in a multitude of tasks ranging from understanding neural behavior at its most basic level to neuroprosthetics. Parameter estimation of a single neuron model, such that the model's output matches that of a biological neuron, is an extremely important task. Hand tuning of parameters to obtain such behaviors is a difficult and time-consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models), as fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron's parameters. First, we show how a Maximum Likelihood method can be applied to a leaky integrate-and-fire silicon neuron with spike-induced currents to fit the neuron's output to desired spike times. We then show how a distance-based method which approximates the negative log likelihood of the lognormal distribution can also be used to tune the neuron's parameters. We conclude that the distance-based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed. PMID:23852978
Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.
Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi
2018-05-28
Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate temporal gait parameter estimation method based on sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy on gait phase detection and was further validated by comparison to a foot pressure system in 10 healthy subjects. The two sets of results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The between-day repeatability of the proposed method was assessed using intraclass correlation coefficients (ICC) and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.
Lord, Dominique; Park, Peter Young-Jin
2008-07-01
Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. With recent trends that illustrated the dispersion parameter to be dependent upon the covariates of NB models, especially for traffic flow-only models, as well as varying as a function of different time-periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. 
The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
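The EB combination of the two information sources, with the NB dispersion parameter setting the weight factor, can be sketched as follows; the intersection numbers are hypothetical:

```python
def eb_estimate(mu, observed, phi):
    """Empirical Bayes estimate under an NB (Poisson-gamma) model:
    mu       = crashes predicted by the crash prediction model,
    observed = crashes observed at the site,
    phi      = inverse dispersion parameter of the NB model.
    The weight w = 1 / (1 + mu/phi) shifts trust toward the observed
    count as over-dispersion (mu/phi) grows."""
    w = 1.0 / (1.0 + mu / phi)
    return w * mu + (1.0 - w) * observed

# Hypothetical intersection: model predicts 2 crashes/yr, 5 observed
est = eb_estimate(mu=2.0, observed=5.0, phi=1.5)
```

Because `phi` enters the weight directly, a time-varying or covariate-dependent dispersion parameter changes `w` site by site, which is exactly why the abstract finds functional-form choice altering hotspot rankings.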
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
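The least power estimation (LPE) idea, minimizing the sum of p-th power absolute residuals, can be sketched for the scalar case via iteratively reweighted averaging; the choice of p, the data, and the iteration scheme are illustrative assumptions:

```python
def lpe_estimate(values, p=1.5, iters=200):
    """Least power estimation for a scalar location parameter: find m
    minimizing sum_i |x_i - m|^p by iteratively reweighted averaging
    (p = 2 recovers the mean; p -> 1 approaches the median)."""
    m = sum(values) / len(values)          # start from the arithmetic mean
    for _ in range(iters):
        w = [abs(x - m) ** (p - 2) if x != m else 1e12 for x in values]
        m = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
    return m

data = [1.0, 1.2, 0.9, 1.1, 5.0]    # one outlying measurement
m_mean = sum(data) / len(data)      # pulled toward the outlier
m_lpe = lpe_estimate(data, p=1.2)   # much less sensitive to it
```

With p below 2 the estimator downweights large residuals, which is the robustness property the abstract attributes to the p-normal-based upscaling methods on disorganized raw measurements.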
Phadnis, Milind A; Wetmore, James B; Shireman, Theresa I; Ellerbeck, Edward F; Mahnken, Jonathan D
2016-01-01
Time-dependent covariates can be modeled within the Cox regression framework and can allow both proportional and nonproportional hazards for the risk factor of research interest. However, in many areas of health services research, interest centers on being able to estimate residual longevity after the occurrence of a particular event such as stroke. The survival trajectory of patients experiencing a stroke can be potentially influenced by stroke type (hemorrhagic or ischemic), time of the stroke (relative to time zero), time since the stroke occurred, or a combination of these factors. In such situations, researchers are more interested in estimating lifetime lost due to stroke rather than merely estimating the relative hazard due to stroke. To achieve this, we propose an ensemble approach using the generalized gamma distribution by means of a semi-Markov type model with an additive hazards extension. Our modeling framework allows stroke as a time-dependent covariate to affect all three parameters (location, scale, and shape) of the generalized gamma distribution. Using the concept of relative times, we answer the research question by estimating residual life lost due to ischemic and hemorrhagic stroke in the chronic dialysis population. PMID:26403934
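For intuition on "residual longevity": a mean residual life E[T - t | T > t] can be computed numerically from a Stacy generalized gamma density. This sketch assumes a particular (a, c, scale) parametrization and plain trapezoidal integration; it is a generic illustration, not the paper's semi-Markov model with time-dependent covariates:

```python
import math

def gg_pdf(t, a, c, scale):
    """Generalized gamma density (Stacy form): shape a, power c, scale."""
    z = t / scale
    return (c / scale) * z ** (a * c - 1) * math.exp(-z ** c) / math.gamma(a)

def residual_life(t0, a, c, scale, upper=200.0, n=20000):
    """E[T - t0 | T > t0] = (integral of S from t0 to inf) / S(t0),
    approximated by trapezoids; `upper` must make the tail negligible."""
    h = (upper - t0) / n
    ts = [t0 + i * h for i in range(n + 1)]
    pdf = [gg_pdf(t, a, c, scale) for t in ts]
    # survival S(t) accumulated from the right
    S = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        S[i] = S[i + 1] + 0.5 * h * (pdf[i] + pdf[i + 1])
    area = sum(0.5 * h * (S[i] + S[i + 1]) for i in range(n))
    return area / S[0]
```

With a = 1 and c = 1 the density is exponential, so the residual life is constant (memorylessness), a useful sanity check for the integration.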
3-D transient hydraulic tomography in unconfined aquifers with fast drainage response
NASA Astrophysics Data System (ADS)
Cardiff, M.; Barrash, W.
2011-12-01
We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.
Petrini, J; Iung, L H S; Rodriguez, M A P; Salvian, M; Pértille, F; Rovadoscki, G A; Cassoli, L D; Coutinho, L L; Machado, P F; Wiggans, G R; Mourão, G B
2016-10-01
Information about genetic parameters is essential for selection decisions and genetic evaluation. These estimates are population specific; however, there are few studies with dairy cattle populations reared under tropical and sub-tropical conditions. Thus, the aim was to obtain estimates of heritability and genetic correlations for milk yield and quality traits using pedigree and genomic information from a Holstein population maintained in a tropical environment. Phenotypic records (n = 36 457) of 4203 cows as well as the genotypes for 57 368 single nucleotide polymorphisms from 755 of these cows were used. Covariance components were estimated using the restricted maximum likelihood method under a mixed animal model, considering a pedigree-based relationship matrix or a combined pedigree-genomic matrix. High heritabilities (around 0.30) were estimated for lactose and protein content in milk, whereas moderate values (between 0.19 and 0.26) were obtained for percentages of fat, saturated fatty acids and palmitic acid in milk. Genetic correlations ranging from -0.38 to -0.13 were determined between milk yield and composition traits. The smaller estimates compared with similar studies may be due to poor environmental conditions, which can reduce genetic variability. These results highlight the importance of using genetic parameters estimated in the population under evaluation for selection decisions. © 2016 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. 
This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
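A hedged sketch of the autoregressive building block used for the data errors: for an AR(1) process the coefficient can be fit by least squares and the residuals whitened by differencing. The hierarchical treatment described above samples such coefficients as unknowns rather than fixing them, so this (with hypothetical function names) is only the deterministic core:

```python
def fit_ar1(residuals):
    """Least-squares estimate of the AR(1) coefficient phi in
    r_k = phi * r_{k-1} + e_k."""
    num = sum(residuals[k] * residuals[k - 1] for k in range(1, len(residuals)))
    den = sum(r * r for r in residuals[:-1])
    return num / den

def ar1_whiten(residuals, phi):
    """Remove serial correlation: return the innovations e_k = r_k - phi * r_{k-1}.
    The first element is kept as-is (no predecessor)."""
    return [residuals[0]] + [residuals[k] - phi * residuals[k - 1]
                             for k in range(1, len(residuals))]
```

Note the computational point from the abstract: whitening this way never forms or inverts a data-error covariance matrix.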
Parametric study of a pin-plane probe in moderately magnetized plasma
NASA Astrophysics Data System (ADS)
Binwal, S.; Gandhi, S.; Kabariya, H.; Karkari, S. K.
2015-12-01
The application of a planar Langmuir probe in magnetized plasma is found to be problematic due to significant perturbation of the plasma along the magnetic field lines intercepting the probe surface. This causes the current-voltage characteristic I_e(U) of the probe to deviate from its usual exponential law; in addition, the electron saturation current I_es is significantly reduced. Moreover, estimating the electron temperature T_e by considering the entire semi-log plot of I_e(U) gives ambiguous values of T_e. To address this problem, Pitts and Stangeby developed a formula for the reduction factor for I_es. This formula depends on a number of uncertain parameters, namely the ion temperature T_+, the electron cross-field diffusion coefficient D_⊥,e, and the local potential hill V_h estimated by applying a floating pin probe in the vicinity of the planar probe. Due to the implicit dependence of these parameters on T_e, the resulting analysis is not straightforward. This paper presents a parametric study of the different parameters that influence the characteristics of a planar probe in magnetized plasma. For this purpose a pin-plane probe is constructed and applied in a magnetized plasma column. A comprehensive discussion highlights the practical methodology of using this technique for extracting useful information on plasma parameters in magnetized plasmas.
Treatment of Missing Data in Workforce Education Research
ERIC Educational Resources Information Center
Gemici, Sinan; Rojewski, Jay W.; Lee, In Heok
2012-01-01
Most quantitative analyses in workforce education are affected by missing data. Traditional approaches to remedy missing data problems often result in reduced statistical power and biased parameter estimates due to systematic differences between missing and observed values. This article examines the treatment of missing data in pertinent…
Carvalho, Alysson R.; Zin, Walter Araujo; Carvalho, Nadja C.; Huhle, Robert; Giannella-Neto, Antonio; Koch, Thea; de Abreu, Marcelo Gama
2014-01-01
Background: Measuring esophageal pressure (Pes) using an air-filled balloon catheter (BC) is the common approach to estimate pleural pressure and related parameters. However, Pes is not routinely measured in mechanically ventilated patients, partly due to technical and practical limitations and difficulties. This study aimed at comparing the conventional BC with two alternative methods for Pes measurement, liquid-filled and air-filled catheters without balloon (LFC and AFC), during mechanical ventilation with and without spontaneous breathing activity. Seven female juvenile pigs (32–42 kg) were anesthetized, orotracheally intubated, and a bundle of an AFC, LFC, and BC was inserted in the esophagus. Controlled and assisted mechanical ventilation were applied with positive end-expiratory pressures of 5 and 15 cmH2O, and driving pressures of 10 and 20 cmH2O, in supine and lateral decubitus. Main Results: Cardiogenic noise in BC tracings was much larger (up to 25% of the total power of the Pes signal) than in AFC and LFC (<3%). Lung and chest wall elastance, pressure-time product, inspiratory work of breathing, and the inspiratory change and end-expiratory value of transpulmonary pressure were estimated. The three catheters allowed detecting similar changes in these parameters between different ventilation settings. However, a non-negligible and significant bias between estimates from BC and those from AFC and LFC was observed in several instances. Conclusions: In anesthetized and mechanically ventilated pigs, the three catheters are equivalent when the aim is to detect changes in Pes and related parameters between different conditions, but possibly not when the absolute value of the estimated parameters is of paramount importance. Due to a better signal-to-noise ratio, and considering its practical advantages in terms of easier calibration and a simpler acquisition setup, LFC may prove interesting for clinical use. PMID:25247308
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In Experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that auditory-filter estimates are narrower for forward masking than for simultaneous masking due to peripheral suppression, a result replicated in Experiment III using fewer than 200 qAF trials. PMID:25324086
NASA Astrophysics Data System (ADS)
Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana
2017-11-01
Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires segmentation of the vasculature, a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
A biodynamic feedthrough model based on neuromuscular principles.
Venrooij, Joost; Abbink, David A; Mulder, Mark; van Paassen, Marinus M; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H
2014-07-01
A biodynamic feedthrough (BDFT) model is proposed that describes how vehicle accelerations feed through the human body, causing involuntary limb motions and thus involuntary control inputs. BDFT dynamics strongly depend on limb dynamics, which can vary between persons (between-subject variability), but also within one person over time, e.g., due to the control task performed (within-subject variability). The proposed BDFT model is based on physical neuromuscular principles and is derived from an established admittance model describing limb dynamics, which was extended to include control device dynamics and account for acceleration effects. The resulting BDFT model serves primarily the purpose of increasing the understanding of the relationship between neuromuscular admittance and biodynamic feedthrough. An added advantage of the proposed model is that its parameters can be estimated using a two-stage approach, making the parameter estimation more robust, as the procedure is largely based on the well-documented procedure required for the admittance model. To estimate the parameter values of the BDFT model, data are used from an experiment in which both neuromuscular admittance and biodynamic feedthrough were measured. The quality of the BDFT model is evaluated in the frequency and time domain. Results provide strong evidence that the BDFT model and the method of parameter estimation put forward in this paper allow for accurate BDFT modeling across different subjects (accounting for between-subject variability) and across control tasks (accounting for within-subject variability).
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and stagnation in the uptake of electric vehicles by new consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles.
The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, and not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value; the maximum magnitude m_max at which the Gutenberg-Richter law applies; and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire procedure, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has a functional form analogous to the frequency-magnitude law, described by a parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion a_max (analogous to m_max). Originally the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs.
With regard to the parameter m_max, there are numerous methods of estimation, none of which is accepted as standard, and much controversy surrounds this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude m_min above which the catalogue is complete becomes important. Thus, the parameter m_min is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used, and for that matter which ones are not, is in order.
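For the frequency-magnitude slope, the commonly used Aki/Utsu maximum-likelihood b-value estimator (a standard result in the field, not specific to this abstract; the dm/2 term is the Utsu binning correction) can be sketched as:

```python
import math

def b_value(magnitudes, m_min, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value.

    magnitudes : catalogue magnitudes (entries below m_min are discarded,
                 since the catalogue is assumed complete only above m_min)
    dm         : magnitude bin width; dm/2 corrects for binning (Utsu)
    """
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2))
```

The estimate is sensitive to the choice of m_min, which is exactly why the completeness magnitude discussed above must itself be estimated with care.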
Real-time moving horizon estimation for a vibrating active cantilever
NASA Astrophysics Data System (ADS)
Abdollahpouri, Mohammad; Takács, Gergely; Rohaľ-Ilkiv, Boris
2017-03-01
Vibrating structures may be subject to changes throughout their operating lifetime due to a range of environmental and technical factors. These variations can be considered as parameter changes in the dynamic model of the structure, while their online estimates can be utilized in adaptive control strategies or in structural health monitoring. This paper implements the moving horizon estimation (MHE) algorithm on a low-cost embedded computing device that jointly observes the dynamic states and parameter variations of an active cantilever beam in real time. The practical behavior of this algorithm has been investigated in various experimental scenarios. It has been found that, for the given field of application, moving horizon estimation converges faster than the extended Kalman filter; moreover, it reliably handles atypical measurement noise, sensor errors and other extreme changes. Despite its improved performance, the experiments demonstrate that the disadvantage of solving the nonlinear optimization problem in MHE is that it naturally leads to an increase in computational effort.
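As a caricature of why a moving horizon tracks parameter changes faster than a filter whose gain has settled: refitting only the last few samples at each step makes the estimate forget old data after roughly one horizon length. The sketch below (hypothetical names; a grid-search least-squares stand-in for the constrained nonlinear optimization of real MHE) estimates a scalar AR(1) coefficient over a sliding window:

```python
def mhe_parameter(y, horizon=10, a_grid=None):
    """Re-estimate the coefficient a of y[k+1] = a * y[k] over a sliding
    window of `horizon` measurements, returning one estimate per step."""
    if a_grid is None:
        a_grid = [i / 100 for i in range(-99, 100)]
    estimates = []
    for k in range(horizon, len(y) + 1):
        window = y[k - horizon:k]

        def cost(a):
            # sum of squared one-step prediction errors inside the window
            return sum((window[i + 1] - a * window[i]) ** 2
                       for i in range(len(window) - 1))

        estimates.append(min(a_grid, key=cost))
    return estimates
```

Real MHE would also penalize deviation from an arrival-cost prior and handle state constraints, which is where the extra computational effort mentioned above comes from.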
NASA Technical Reports Server (NTRS)
Gong, Gavin; Entekhabi, Dara; Salvucci, Guido D.
1994-01-01
Simulated climates using numerical atmospheric general circulation models (GCMs) have been shown to be highly sensitive to the fraction of GCM grid area assumed to be wetted during rain events. The model hydrologic cycle and land-surface water and energy balance are influenced by the parameter bar-kappa, which is the dimensionless fractional wetted area for GCM grids. Hourly precipitation records for over 1700 precipitation stations within the contiguous United States are used to obtain observation-based estimates of fractional wetting that exhibit regional and seasonal variations. The spatial parameter bar-kappa is estimated from the temporal raingauge data using conditional probability relations. Monthly bar-kappa values are estimated for rectangular grid areas over the contiguous United States as defined by the Goddard Institute for Space Studies 4 deg x 5 deg GCM. A bias in the estimates is evident due to the unavoidably sparse raingauge network density, which causes some storms to go undetected by the network. This bias is corrected by deriving the probability of a storm escaping detection by the network. A Monte Carlo simulation study is also conducted that consists of synthetically generated storm arrivals over an artificial grid area. It is used to confirm the bar-kappa estimation procedure and to test the nature of the bias and its correction. These monthly fractional wetting estimates, based on the analysis of station precipitation data, provide an observational basis for assigning the influential parameter bar-kappa in GCM land-surface hydrology parameterizations.
NASA Astrophysics Data System (ADS)
Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei
2017-04-01
The stock price fluctuation is studied in this paper from an intrinsic time perspective. Events, directional changes (DC) and overshoots, are taken as the time scale of the price series. The statistical properties and parameter estimation of this directional change law are tested in the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit due to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with the DC-based trading strategy.
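A minimal sketch of directional-change event detection (the threshold value and return format are illustrative; the paper's exact event definitions may differ): a DC event is confirmed when the price reverses by a fixed fraction theta from the running extreme, and the move beyond the confirmation point is the overshoot:

```python
def directional_changes(prices, theta=0.018):
    """Return confirmed DC events as (index, direction) pairs.

    While tracking an uptrend, a 'down' DC fires when the price drops
    theta below the running maximum; symmetrically for downtrends.
    """
    events = []          # (index, 'up' or 'down')
    mode = 'up'          # trend currently being tracked
    ext = prices[0]      # running extreme (max in uptrend, min in downtrend)
    for i, p in enumerate(prices):
        if mode == 'up':
            ext = max(ext, p)
            if p <= ext * (1 - theta):
                events.append((i, 'down'))
                mode, ext = 'down', p
        else:
            ext = min(ext, p)
            if p >= ext * (1 + theta):
                events.append((i, 'up'))
                mode, ext = 'up', p
    return events
```

The intrinsic-time view replaces the calendar clock with this event clock: statistics such as average overshoot size are then computed per DC event rather than per day.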
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the TCAT model are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run time constraints. The optimized parameters are then used for forward simulations and compared with the set of experimental data not used for the parameter estimation.
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (C_μ, C_ε2, C_ε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.
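As a toy version of the surrogate-based MCMC calibration step (a one-parameter linear surrogate with a Gaussian likelihood and flat prior; names and settings are illustrative and far simpler than the treed-linear-model setup the abstract describes):

```python
import math
import random

def calibrate(xs, ys, sigma=0.1, n_steps=5000, step=0.05, seed=1):
    """Random-walk Metropolis on a single surrogate parameter c.

    Surrogate: y = c * x (stand-in for an expensive simulator).
    Returns the chain of samples; its histogram approximates the
    posterior PDF of c given the (xs, ys) calibration data.
    """
    rng = random.Random(seed)

    def log_like(c):
        return -sum((y - c * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

    c, ll = 1.0, log_like(1.0)
    samples = []
    for _ in range(n_steps):
        cand = c + rng.gauss(0, step)
        ll_cand = log_like(cand)
        # Metropolis accept/reject on the log scale
        if ll_cand - ll > math.log(rng.random()):
            c, ll = cand, ll_cand
        samples.append(c)
    return samples
```

In the study the cheap `log_like` evaluation is what the surrogate buys: each MCMC step queries the surrogate instead of a full RANS run.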
The Magnus problem in Rodrigues-Hamilton parameters
NASA Astrophysics Data System (ADS)
Koshliakov, V. N.
1984-04-01
The formalism of Rodrigues-Hamilton parameters is applied to the Magnus problem related to the systematic drift of a gimbal-mounted astatic gyroscope due to the nutational vibration of the main axis of the rotor. It is shown that the use of the above formalism makes it possible to limit the analysis to a consideration of a linear system of differential equations written in perturbed values of Rodrigues-Hamilton parameters. A refined formula for the drift of the main axis of the gyroscope rotor is obtained, and an estimation is made of the effect of the truncation of higher-order terms.
NASA Astrophysics Data System (ADS)
Cardiff, Michael; Barrash, Warren; Thoma, Michael; Malama, Bwalya
2011-06-01
SummaryA recently developed unified model for partially-penetrating slug tests in unconfined aquifers ( Malama et al., in press) provides a semi-analytical solution for aquifer response at the wellbore in the presence of inertial effects and wellbore skin, and is able to model the full range of responses from overdamped/monotonic to underdamped/oscillatory. While the model provides a unifying framework for realistically analyzing slug tests in aquifers (with the ultimate goal of determining aquifer properties such as hydraulic conductivity K and specific storage Ss), it is currently unclear whether parameters of this model can be well-identified without significant prior information and, thus, what degree of information content can be expected from such slug tests. In this paper, we examine the information content of slug tests in realistic field scenarios with respect to estimating aquifer properties, through analysis of both numerical experiments and field datasets. First, through numerical experiments using Markov Chain Monte Carlo methods for gauging parameter uncertainty and identifiability, we find that: (1) as noted by previous researchers, estimation of aquifer storage parameters using slug test data is highly unreliable and subject to significant uncertainty; (2) joint estimation of aquifer and skin parameters contributes to significant uncertainty in both unless prior knowledge is available; and (3) similarly, without prior information joint estimation of both aquifer radial and vertical conductivity may be unreliable. These results have significant implications for the types of information that must be collected prior to slug test analysis in order to obtain reliable aquifer parameter estimates. For example, plausible estimates of aquifer anisotropy ratios and bounds on wellbore skin K should be obtained, if possible, a priori. 
Second, through analysis of field data consisting of over 2500 records from partially-penetrating slug tests in a heterogeneous, highly conductive aquifer, we present some general findings that have applicability to slug testing. In particular, we find that aquifer hydraulic conductivity estimates obtained from larger slug heights tend to be lower on average (presumably due to non-linear wellbore losses) and tend to be less variable (presumably due to averaging over larger support volumes), supporting the notion that using the smallest slug heights that still produce measurable water level changes is an important strategy when mapping aquifer heterogeneity. Finally, we present results specific to characterization of the aquifer at the Boise Hydrogeophysical Research Site. Specifically, we note that (1) K estimates obtained using a range of different slug heights give similar results, generally within ±20%; (2) correlations between estimated K profiles with depth at closely-spaced wells suggest that K values obtained from slug tests are representative of actual aquifer heterogeneity and not overly affected by near-well media disturbance (i.e., "skin"); (3) geostatistical analysis of K values obtained indicates reasonable correlation lengths for sediments of this type; and (4) overall, K values obtained do not appear to correlate well with porosity data from previous studies.
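The MCMC machinery used here for gauging parameter uncertainty can be illustrated in miniature. The sketch below applies random-walk Metropolis to a hypothetical overdamped response h(t) = exp(-κt), a stand-in for the full semi-analytical slug-test model; the decay parameter, noise level, and chain settings are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy overdamped slug-test response (a stand-in for the full
# semi-analytical model): normalized head h(t) = exp(-kappa * t).
def model(kappa, t):
    return np.exp(-kappa * t)

t = np.linspace(0.0, 10.0, 50)
kappa_true, sigma = 0.5, 0.01
data = model(kappa_true, t) + rng.normal(0.0, sigma, t.size)

def log_post(kappa):
    if kappa <= 0.0:           # flat prior restricted to kappa > 0
        return -np.inf
    r = data - model(kappa, t)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis sampler
kappa, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20000):
    prop = kappa + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        kappa, lp = prop, lp_prop
    samples.append(kappa)
post = np.array(samples[5000:])                # discard burn-in
```

The spread of `post` is the (toy) analogue of the parameter-identifiability diagnostics discussed above; joint estimation of several correlated parameters would widen it considerably.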
NASA Astrophysics Data System (ADS)
Fuchs, Christian; Poulenard, Sylvain; Perlot, Nicolas; Riedi, Jerome; Perdigues, Josep
2017-02-01
Optical satellite communications play an increasingly important role in a number of space applications. However, if the system concept includes optical links to the surface of the Earth, the limited availability due to clouds and other atmospheric impacts needs to be considered to give a reliable estimate of the system performance. An optical ground station (OGS) network is required for increasing the availability to acceptable figures. In order to realistically estimate the performance and achievable throughput in various scenarios, a simulation tool has been developed under ESA contract. The tool is based on a database of 5 years of cloud data with global coverage and can thus easily simulate different optical ground station network topologies for LEO- and GEO-to-ground links. Further parameters, such as limited availability due to Sun blinding and atmospheric turbulence, are considered as well. This paper gives an overview of the simulation tool, the cloud database, and the modelling behind the simulation scheme. Several scenarios have been investigated: LEO-to-ground links, GEO feeder links, and GEO relay links. The key results of the optical ground station network optimization and throughput estimations will be presented. The implications of key technical parameters, such as memory size aboard the satellite, will be discussed. Finally, potential system designs for LEO- and GEO-systems will be presented.
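The availability gain from an OGS network can be sketched with a back-of-envelope calculation. The function below assumes statistically independent cloud cover per station, which the actual tool avoids by using a correlated multi-year cloud database; the station cloud fractions are illustrative:

```python
# Back-of-envelope availability of an optical ground station (OGS)
# network: a link succeeds if at least one station is cloud-free.
# Assumes statistically independent cloud cover per site (real tools,
# like the one described above, use correlated cloud databases).

def network_availability(cloud_probabilities):
    blocked = 1.0
    for p in cloud_probabilities:
        blocked *= p           # all stations clouded simultaneously
    return 1.0 - blocked

# Three hypothetical stations with 60%, 50%, and 40% cloud fraction:
avail = network_availability([0.6, 0.5, 0.4])   # -> 0.88
```

Spatially correlated clouds make the simultaneous-blockage probability larger than this product, which is why geographically diverse station placement matters.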
Liinamo, A E; Karjalainen, L; Ojala, M; Vilva, V
1997-03-01
Data from field trials of Finnish Hounds between 1988 and 1992 in Finland were used to estimate genetic parameters and environmental effects for measures of hunting performance using REML procedures and an animal model. The original data set included 28,791 field trial records from 5,666 dogs. Males and females had equal hunting performance, whereas experience acquired by age improved trial results compared with results for young dogs (P < .001). Results were mostly better on snow than on bare ground (P < .001), and testing areas, years, months, and their interactions affected results (P < .001). Estimates of heritabilities and repeatabilities were low for most of the 28 measures, mainly due to large residual variances. The highest heritabilities were for frequency of tonguing (h2 = .15), pursuit score (h2 = .13), tongue score (h2 = .13), ghost trailing score (h2 = .12), and merit and final score (both h2 = .11). Estimates of phenotypic and genetic correlations were positive and moderate or high for search scores, pursuit scores, and final scores but lower for other studied measures. The results suggest that, due to low heritabilities, evaluation of breeding values for Finnish Hounds with respect to their hunting ability should be based on animal model BLUP methods instead of mere performance testing. The evaluation system of field trials should also be revised for more reliability.
Sepehrinezhad, Alireza; Toufigh, Vahab
2018-05-25
Ultrasonic wave attenuation is an effective descriptor of distributed damage in inhomogeneous materials. Methods developed to measure wave attenuation have the potential to provide an in-situ evaluation of existing concrete structures insofar as they are accurate and time-efficient. In this study, material classification and distributed damage evaluation were investigated based on the sinusoidal modeling of the response from through-transmission ultrasonic tests on polymer concrete specimens. The response signal was modeled as a single damped sinusoid or a sum of damped sinusoids. Due to the inhomogeneous nature of concrete materials, model parameters may vary from one specimen to another. Therefore, these parameters are not known in advance and should be estimated while the response signal is being received. The modeling procedure used in this study involves a data-adaptive algorithm to estimate the parameters online. Data-adaptive algorithms are used due to a lack of prior knowledge of the model parameters. The damping factor was estimated as a descriptor of the distributed damage. The results were compared in two different cases: (1) constant excitation frequency with varying concrete mixtures and (2) constant mixture with varying excitation frequencies. The specimens were also loaded up to their ultimate compressive strength to investigate the effect of distributed damage on the response signal. The results of the estimation indicated that the damping was highly sensitive to changes in material inhomogeneity, even in comparable mixtures. In addition to the proposed method, three methods were employed to compare the results based on their accuracy in the classification of materials and the evaluation of the distributed damage. It is shown that the estimated damping factor is not only sensitive to damage in the final stages of loading, but is also applicable in evaluating micro damage in the earlier stages, providing a reliable descriptor of damage.
In addition, the modified amplitude ratio method is introduced as an improvement of the classical method. The proposed methods were validated as effective descriptors of distributed damage, and the presented models were in good agreement with the experimental data.
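The damping-factor estimation at the heart of the method can be sketched in batch form (the paper's algorithm is data-adaptive and online, which this toy is not). A damped sinusoid's positive peaks track its decaying envelope, so the damping factor falls out of a straight-line fit in log space; all signal parameters are illustrative:

```python
import numpy as np

# Synthetic through-transmission response: a single damped sinusoid
# y(t) = A * exp(-d * t) * sin(2*pi*f*t) (amplitude, damping, and
# frequency values are illustrative).
t = np.linspace(0.0, 1.0, 2000)
A, d_true, f = 1.0, 5.0, 50.0
y = A * np.exp(-d_true * t) * np.sin(2.0 * np.pi * f * t)

# Locate positive local maxima; their heights track the decaying
# envelope, so log(peak height) vs. time is (nearly) a straight line
# of slope -d.
i = np.arange(1, t.size - 1)
peaks = i[(y[i] > y[i - 1]) & (y[i] > y[i + 1]) & (y[i] > 0)]
slope, _ = np.polyfit(t[peaks], np.log(y[peaks]), 1)
d_est = -slope
```

A recursive least-squares or Kalman-type update over incoming samples would turn this batch fit into the online, data-adaptive estimate the study describes.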
NASA Astrophysics Data System (ADS)
Soltani, M.; Kunstmann, H.; Laux, P.; Mauder, M.
2016-12-01
In mountainous and prealpine regions, ecohydrological processes exhibit rapid changes within short distances due to the complex orography and strong elevation gradients. Water and energy fluxes between the land surface and the atmosphere are crucial drivers for nearly all ecosystem processes. The aim of this research is to analyze the variability of surface water and energy fluxes through both comprehensive observational hydrometeorological data analysis and process-based high-resolution hydrological modeling for a mountainous and prealpine region in Germany. We particularly focus on the closure of the observed energy balance and on the added value of energy flux observations for parameter estimation in our hydrological model (GEOtop) by inverse modeling using PEST. Our study area is the catchment of the river Rott (55 km2), part of the TERENO prealpine observatory in Southern Germany, and we focus particularly on the observations during the summer episode May to July 2013. We present the coupling of GEOtop and the parameter estimation tool PEST, which is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. Analysis of the surface energy partitioning revealed that the latent heat flux was the main consumer of available energy. The relative imbalance was largest during nocturnal periods. An energy imbalance was observed at the eddy-covariance site Fendt, due to either underestimated turbulent fluxes or overestimated available energy. The calculation of the simulated energy and water balances for the entire catchment indicated that 78% of net radiation leaves the catchment as latent heat flux, 17% as sensible heat, and 5% enters the soil as soil heat flux; 45% of the catchment-aggregated precipitation leaves the catchment as discharge and 55% as evaporation.
Using the developed GEOtop-PEST interface, the hydrological model is calibrated by comparing simulated and observed discharge, soil moisture and temperature, and sensible, latent, and soil heat fluxes. A reasonable quality of fit could be achieved. Uncertainty and covariance analyses are performed, allowing the derivation of confidence intervals for all estimated parameters.
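The Gauss-Marquardt-Levenberg iteration that PEST applies can be sketched on a toy two-parameter model y = a·exp(-b·t) (not GEOtop); the damping update below is one common variant of the scheme, with all values illustrative:

```python
import numpy as np

# Toy "model" with two parameters (a, b): y(t) = a * exp(-b * t).
def model(p, t):
    a, b = p
    return a * np.exp(-b * t)

def jacobian(p, t):
    a, b = p
    e = np.exp(-b * t)
    return np.column_stack([e, -a * t * e])   # d(model)/d(a, b)

def gauss_marquardt_levenberg(t, y, p0, lam=1e-2, iters=50):
    p = np.asarray(p0, float)
    cost = np.sum((y - model(p, t))**2)
    for _ in range(iters):
        r = y - model(p, t)
        J = jacobian(p, t)
        H = J.T @ J
        # Marquardt scaling of the damping term
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        p_new = p + step
        cost_new = np.sum((y - model(p_new, t))**2)
        if cost_new < cost:   # accept; behave more like Gauss-Newton
            p, cost, lam = p_new, cost_new, lam / 10.0
        else:                 # reject; behave more like gradient descent
            lam *= 10.0
    return p

t = np.linspace(0.0, 5.0, 40)
p_hat = gauss_marquardt_levenberg(t, model((2.0, 0.3), t), p0=(1.0, 1.0))
```

In a GEOtop-PEST setup the "model" call is a full hydrological simulation and the Jacobian comes from finite-difference perturbation runs, but the damped normal-equations step is the same.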
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
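The benchmarking idea can be sketched with a toy system: an empirical benchmark trained on the same forcing bounds the information available in the inputs, and model error in excess of the benchmark's error is attributed to structure. The linear "truth" and the deficient "physical model" below are hypothetical, not any NLDAS-2 component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical truth: the target responds linearly to a forcing variable.
forcing = rng.uniform(0.0, 10.0, 500)
truth = 2.0 * forcing + 1.0 + rng.normal(0.0, 0.5, forcing.size)

# A structurally deficient "physical model" (wrong slope, no offset).
physical_model = 1.5 * forcing

# Empirical benchmark: least-squares fit on the same forcing. Its error
# approximates the irreducible part; whatever the physical model loses
# beyond it is attributed to model structure, not to the inputs.
X = np.column_stack([forcing, np.ones_like(forcing)])
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)
benchmark = X @ coef

mse_model = np.mean((truth - physical_model)**2)
mse_bench = np.mean((truth - benchmark)**2)
structural_gap = mse_model - mse_bench   # uncertainty due to structure
```

Repeating the comparison with perturbed forcing or perturbed parameters separates the remaining error sources, which is the large-sample extension described above.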
Quantifying Adventitious Error in a Covariance Structure as a Random Effect
Wu, Hao; Browne, Michael W.
2017-01-01
We present an approach to quantifying errors in covariance structures in which adventitious error, identified as the process underlying the discrepancy between the population and the structured model, is explicitly modeled as a random effect with a distribution, and the estimated dispersion parameter of this distribution gives a measure of misspecification. Analytical properties of the resulting procedure are investigated, and the measure of misspecification is found to be related to the RMSEA. An algorithm is developed for numerical implementation of the procedure. The consistency and asymptotic sampling distributions of the estimators are established under a new asymptotic paradigm and an assumption weaker than the standard Pitman drift assumption. Simulations validate the asymptotic sampling distributions and demonstrate the importance of accounting for the variations in the parameter estimates due to adventitious error. Two examples are also given as illustrations. PMID:25813463
Standing Genetic Variation and the Evolution of Drug Resistance in HIV
Pennings, Pleuni Simone
2012-01-01
Drug resistance remains a major problem for the treatment of HIV. Resistance can occur due to mutations that were present before treatment starts or due to mutations that occur during treatment. The relative importance of these two sources is unknown. Resistance can also be transmitted between patients, but this process is not considered in the current study. We study three different situations in which HIV drug resistance may evolve: starting triple-drug therapy, treatment with a single dose of nevirapine, and interruption of treatment. For each of these three cases good data are available from the literature, which allows us to estimate the probability that resistance evolves from standing genetic variation; the estimated probability differs depending on the treatment. For patients who start triple-drug combination therapy, we find that drug resistance evolves from standing genetic variation in approximately 6% of the patients. We use a population-dynamic and population-genetic model to understand the observations and to estimate important evolutionary parameters under the assumption that treatment failure is caused by the fixation of a single drug resistance mutation. We find that both the effective population size of the virus before treatment, and the fitness of the resistant mutant during treatment, are key parameters that determine the probability that resistance evolves from standing genetic variation. Importantly, clinical data indicate that both of these parameters can be manipulated by the kind of treatment that is used. PMID:22685388
USDA-ARS?s Scientific Manuscript database
Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the l...
NASA Astrophysics Data System (ADS)
Ma, H.
2016-12-01
Land surface parameters from remote sensing observations are critical for monitoring and modeling global climate change and biogeochemical cycles. Current methods for estimating land surface parameters are generally parameter-specific algorithms based on instantaneous physical models, which results in spatial, temporal, and physical inconsistencies among current global products. Moreover, optical and Thermal Infrared (TIR) remote sensing observations are usually used separately, based on different models, and Middle InfraRed (MIR) observations have received little attention due to the complexity of the radiometric signal, which mixes both reflected and emitted fluxes. In this paper, we propose a unified algorithm for simultaneously retrieving a total of seven land surface parameters, including Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Temperature (LST), surface emissivity, and downward and upward longwave radiation, by exploiting remote sensing observations from the visible to the TIR domain based on a common physical Radiative Transfer (RT) model and a data assimilation framework. The coupled PROSPECT-VISIR and 4SAIL RT models were used for canopy reflectance modeling. First, LAI was estimated using a data assimilation method that combines MODIS daily reflectance observations and a phenology model. The estimated LAI values were then input into the RT model to simulate surface spectral emissivity and surface albedo. In addition, the background albedo, the transmittance of solar radiation, and the canopy albedo were calculated to produce FAPAR. Once the spectral emissivities of seven MODIS MIR to TIR bands were retrieved, LST was estimated from the atmospherically corrected surface radiance using an optimization method.
Finally, the upward longwave radiation was estimated using the retrieved LST, broadband emissivity (converted from spectral emissivity), and the downward longwave radiation (modeled by MODTRAN). These seven parameters were validated over several representative sites with different biome types and compared with MODIS and GLASS products. Results showed that this unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters with high accuracy.
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and the scene. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image, first to estimate the degradation parameters and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of accuracy of image restoration given by an objective criterion.
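A 1-D analogue of this approach can be sketched: a box (linear-motion) point-spread function imprints regular nulls on the blurred spectrum, which reveal the blur length, after which a Wiener-style linear filter restores the signal. The 2-D image case additionally needs a blur-angle estimate; all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D analogue of linear motion blur: circular convolution with a
# length-L box point-spread function (PSF).
N, L = 256, 8
signal = rng.normal(size=N)
psf = np.zeros(N)
psf[:L] = 1.0 / L
blurred = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)).real

# The box PSF's transfer function has exact nulls at multiples of N/L,
# so the first (near-)zero of the blurred spectrum reveals L.
B = np.fft.fft(blurred)
mags = np.abs(B[1 : N // 2])
k_null = 1 + np.argmax(mags < 1e-8 * mags.max())
L_est = N // k_null

# Restore with a Wiener-style linear filter built from the estimated PSF.
psf_est = np.zeros(N)
psf_est[:L_est] = 1.0 / L_est
H = np.fft.fft(psf_est)
restored = np.fft.ifft(B * np.conj(H) / (np.abs(H)**2 + 1e-3)).real
```

The regularization constant (here 1e-3) trades noise amplification against residual blur; frequencies at the exact nulls are unrecoverable by any linear filter.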
The quality estimation of exterior wall’s and window filling’s construction design
NASA Astrophysics Data System (ADS)
Saltykov, Ivan; Bovsunovskaya, Maria
2017-10-01
The article introduces the term "artificial envelope" in residential construction. The authors propose a complex multifactorial approach to assessing the design quality of external enclosing structures, based on the impact of several parameters: functional, operational, cost, and environmental. A design quality index Qк is introduced as a composite characteristic of these parameters, and its mathematical dependence on them serves as the objective function for design quality estimation. As an example, the article presents the search for an optimal wall and window design for small, medium, and large residential rooms in economy-class buildings. Plots of the individual terms of the objective function are given for the three room sizes. The example yields window opening dimensions for which the wall and window constructions properly satisfy the stated complex requirements. The authors compare the window area recommended by building standards with the area obtained from the optimal value of the design quality index. The multifactorial approach to design optimization described here can be applied to various structural elements of residential buildings, accounting for the climatic, social, and economic features of the construction area.
Coral reef fish populations can persist without immigration
Salles, Océane C.; Maynard, Jeffrey A.; Joannides, Marc; Barbu, Corentin M.; Saenz-Agudelo, Pablo; Almany, Glenn R.; Berumen, Michael L.; Thorrold, Simon R.; Jones, Geoffrey P.; Planes, Serge
2015-01-01
Determining the conditions under which populations may persist requires accurate estimates of demographic parameters, including immigration, local reproductive success, and mortality rates. In marine populations, empirical estimates of these parameters are rare, due at least in part to the pelagic dispersal stage common to most marine organisms. Here, we evaluate population persistence and turnover for a population of orange clownfish, Amphiprion percula, at Kimbe Island in Papua New Guinea. All fish in the population were sampled and genotyped on five occasions at 2-year intervals spanning eight years. The genetic data enabled estimates of reproductive success retained in the same population (reproductive success to self-recruitment), reproductive success exported to other subpopulations (reproductive success to local connectivity), and immigration and mortality rates of sub-adults and adults. Approximately 50% of the recruits were assigned to parents from the Kimbe Island population, and this was stable through the sampling period. Stability in the proportion of local and immigrant settlers is likely due to low annual mortality rates, stable egg production rates, and the short larval stages and sensory capacities of reef fish larvae. Biannual mortality rates ranged from 0.09 to 0.55 and varied significantly spatially. We used these data to parametrize a model that estimates the probability of the Kimbe Island population persisting in the absence of immigration. The Kimbe Island population was found to persist without significant immigration. Model results suggest the island population persists because the largest of the subpopulations are maintained by low mortality and high self-recruitment rates. Our results enable managers to appropriately target and scale actions to maximize persistence likelihood as disturbance frequencies increase. PMID:26582017
Bayesian calibration for electrochemical thermal model of lithium-ion cells
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang
2016-07-01
Pseudo-two-dimensional electrochemical thermal (P2D-ECT) models contain many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to the computational cost and the transient nature of the model. Due to a lack of complete physical understanding, this issue is aggravated at extreme conditions such as low-temperature (LT) operation. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before their incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of the new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low-temperature lithium-ion cell behavior.
Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle; ...
2016-08-03
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single-column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and using parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h forecast errors of wind speed over the Charnock-Fairall z0 ensembles by 4%-22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.
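Parameter estimation by ensemble Kalman filtering can be sketched with an augmented-state toy (this is not WRF-SCM/DART): a scalar AR(1) model with an unknown coefficient standing in for a quantity like z0, estimated jointly with the state. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamics with an unknown parameter a (truth: a = 0.9):
#   x[t+1] = a * x[t] + process noise.
# The EnKF estimates a by augmenting the state vector with it.
a_true, q_sd, r_sd = 0.9, 0.5, 0.1
n_ens, n_steps = 100, 500

x_true = 1.0
x_ens = rng.normal(0.0, 1.0, n_ens)
a_ens = rng.normal(0.5, 0.2, n_ens)       # deliberately poor prior on a

for _ in range(n_steps):
    # Truth run and a noisy observation of it
    x_true = a_true * x_true + rng.normal(0.0, q_sd)
    y = x_true + rng.normal(0.0, r_sd)

    # Forecast: each member propagates with its own parameter value;
    # a small jitter keeps the parameter spread from collapsing.
    x_ens = a_ens * x_ens + rng.normal(0.0, q_sd, n_ens)
    a_ens = a_ens + rng.normal(0.0, 0.01, n_ens)

    # Analysis: Kalman update of the augmented state [x, a] from
    # ensemble covariances, with perturbed observations.
    y_pert = y + rng.normal(0.0, r_sd, n_ens)
    var_x = np.var(x_ens)
    cov_ax = np.mean((a_ens - a_ens.mean()) * (x_ens - x_ens.mean()))
    gain_x = var_x / (var_x + r_sd**2)
    gain_a = cov_ax / (var_x + r_sd**2)
    innov = y_pert - x_ens
    x_ens = x_ens + gain_x * innov
    a_ens = a_ens + gain_a * innov

a_est = a_ens.mean()
```

The parameter is corrected only through its ensemble covariance with the observed state, which is also why, as the study finds, improving one forecast quantity need not improve another.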
Boonkum, Wuttigrai; Duangjinda, Monchai
2015-03-01
Heat stress in tropical regions is a major factor that strongly and negatively affects milk production in dairy cattle. Genetic selection for heat tolerance is a powerful technique to improve genetic performance. Therefore, the current study aimed to estimate genetic parameters and investigate the threshold point of heat stress for milk yield. Data included 52 701 test-day milk yield records for the first parity from 6247 Thai Holstein dairy cattle, covering the period 1990 to 2007. A random regression test-day model with EM-REML was used to estimate variance components, genetic parameters, and milk production loss. A decline in milk production was found when the temperature-humidity index (THI) exceeded a threshold of 74, and the decline was associated with a high percentage of Holstein genetics. All variance component estimates increased with THI. The estimate of heritability of test-day milk yield was 0.231. Dominance variance as a proportion of additive variance (0.035) indicated that non-additive effects might not be of concern for milk genetics studies in Thai Holstein cattle. Correlations between genetic and permanent environmental effects, for regular conditions and due to heat stress, were -0.223 and -0.521, respectively. The heritability and genetic correlations from this study show that simultaneous selection for milk production and heat tolerance is possible.
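The threshold behavior described, with production flat below a THI of 74 and declining above it, can be sketched as a broken-stick fit with a grid search over candidate thresholds. The data below are noiseless and synthetic, so the search recovers the threshold exactly; the yield level and slope are illustrative, not the study's estimates:

```python
import numpy as np

# Synthetic test-day records: milk yield is flat until a THI threshold
# of 74, then declines linearly (base yield and slope illustrative).
thi = np.arange(60, 86, dtype=float)
milk = 20.0 - 0.3 * np.maximum(0.0, thi - 74.0)

def sse_for_threshold(tau):
    # Fit milk ~ intercept + slope * max(0, THI - tau) by least squares
    # and return the residual sum of squares.
    X = np.column_stack([np.ones_like(thi), np.maximum(0.0, thi - tau)])
    beta, *_ = np.linalg.lstsq(X, milk, rcond=None)
    return np.sum((milk - X @ beta)**2)

candidates = np.arange(68, 81, dtype=float)
tau_hat = candidates[np.argmin([sse_for_threshold(t) for t in candidates])]
```

In the actual analysis the hinge term enters a random regression animal model rather than a plain least-squares fit, but the threshold search logic is the same.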
Moderation analysis using a two-level regression model.
Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott
2014-10-01
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
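The MMR side of the comparison can be sketched directly (the paper's two-level NML estimator is not reproduced here): least squares with a product term, on noiseless synthetic data so the coefficients are recovered exactly. All coefficient values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Moderated multiple regression (MMR): the slope of y on x depends on
# a moderator z, which LS captures through the product term x*z.
n = 1000
x = rng.normal(size=n)
z = rng.normal(size=n)
# Hypothetical truth: y = 1 + 2*x + 0.5*z + 0.8*x*z (no error term,
# so least squares recovers the coefficients exactly).
y = 1.0 + 2.0 * x + 0.5 * z + 0.8 * x * z

X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# A nonzero product-term coefficient (beta[3]) is the evidence of
# moderation in the MMR framework.
```

The two-level formulation instead regresses the coefficient of x on z directly, which is what allows the variance of each coefficient attributable to moderators to be estimated and which handles the resulting heteroscedasticity that plain LS ignores.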
PHOTOMETRIC ORBITS OF EXTRASOLAR PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Robert A.
We define and analyze the photometric orbit (PhO) of an extrasolar planet observed in reflected light. In our definition, the PhO is a Keplerian entity with six parameters: semimajor axis, eccentricity, mean anomaly at some particular time, argument of periastron, inclination angle, and effective radius, which is the square root of the geometric albedo times the planetary radius. Preliminarily, we assume a Lambertian phase function. We study in detail the case of short-period giant planets (SPGPs) and observational parameters relevant to the Kepler mission: 20 ppm photometry with normal errors, 6.5 hr cadence, and three-year duration. We define a relevant 'planetary population of interest' in terms of probability distributions of the PhO parameters. We perform Monte Carlo experiments to estimate the ability to detect planets and to recover PhO parameters from light curves. We calibrate the completeness of a periodogram search technique, and find structure caused by degeneracy. We recover full orbital solutions from synthetic Kepler data sets and estimate the median errors in recovered PhO parameters. We treat in depth a case of a Jupiter body-double. For the stated assumptions, we find that Kepler should obtain orbital solutions for many of the 100-760 SPGPs that Jenkins and Doyle estimate Kepler will discover. Because most or all of these discoveries will be followed up by ground-based radial velocity observations, the estimates of inclination angle from the PhO may enable the calculation of true companion masses: Kepler photometry may break the 'm sin i' degeneracy. PhO observations may be difficult. There is uncertainty about how low the albedos of SPGPs actually are, about their phase functions, and about a possible noise floor due to systematic errors from instrumental and stellar sources. Nevertheless, simple detection of SPGPs in reflected light should be robust in the regime of Kepler photometry, and estimates of all six orbital parameters may be feasible in at least a subset of cases.
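The Lambertian phase function assumed above has a standard closed form, Φ(α) = [sin α + (π − α) cos α]/π. The following Python sketch evaluates the planet-to-star flux ratio for illustrative assumed values only (a circular orbit at 0.05 AU and an effective radius of one Jupiter radius, with the geometric albedo folded into the effective radius), showing the contrast lands at the tens-of-ppm level probed by 20 ppm photometry.

```python
import numpy as np

def lambert_phase(alpha):
    """Lambert-sphere phase function: Phi(0) = 1, Phi(pi) = 0."""
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

def reflected_flux_ratio(a_au, r_eff_rjup, alpha):
    """Planet/star flux ratio (R_eff / r)^2 * Phi(alpha), circular orbit."""
    r_eff_au = r_eff_rjup * 71492e3 / 1.495978707e11   # Jupiter radius in AU
    return (r_eff_au / a_au) ** 2 * lambert_phase(alpha)

# Full-phase contrast for the assumed toy values (~1e-4):
c = reflected_flux_ratio(a_au=0.05, r_eff_rjup=1.0, alpha=0.0)
```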
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to WMAP publicly available low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters evaluated by the full WMAP likelihood public package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The credible central values of the cosmological parameters change by less than the 1σ level with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index nS.
Tornøe, Christoffer W; Overgaard, Rune V; Agersø, Henrik; Nielsen, Henrik A; Madsen, Henrik; Jonsson, E Niclas
2005-08-01
The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise accounts for model misspecifications, the SDEs provide a diagnostic tool for model appropriateness. The focus of the article is on the implementation of the Extended Kalman Filter (EKF) in NONMEM for parameter estimation in SDE models. Various applications of SDEs in population PK/PD modeling are illustrated through a systematic model development example using clinical PK data of the gonadotropin releasing hormone (GnRH) antagonist degarelix. The dynamic noise estimates were used to track variations in model parameters and systematically build an absorption model for subcutaneously administered degarelix. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained by tracking unexplained variations in parameters.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and the background-to-target ratio. As both the count density and background-to-target ratio of a brain structure can change during a dynamic scan, the local image resolution may also vary concurrently. When PVC is applied post-reconstruction, the kinetic parameter estimates may therefore be biased if the frame-dependent resolution is neglected. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using only the last-frame reconstructed image for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF-based OSEM produced kinetic parameter estimates with the lowest-magnitude bias in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last-frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods.
The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
Realistic uncertainties on Hapke model parameters from photometric measurement
NASA Astrophysics Data System (ADS)
Schmidt, Frédéric; Fernando, Jennifer
2015-11-01
The single particle phase function describes the manner in which an average element of a granular material diffuses light in the angular space, usually with two parameters: the asymmetry parameter b describing the width of the scattering lobe and the backscattering fraction c describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We develop an innovative Bayesian inversion method in order to study in detail the uncertainties of retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) measurements, which are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test whether the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk-resolved image, EPF-like and BRDF-like geometric sampling conditions.
The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide new constraints for future observation campaigns and instrumentations.
Massive data compression for parameter-dependent covariance matrices
NASA Astrophysics Data System (ADS)
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³ and making an otherwise intractable analysis feasible.
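For a single parameter, MOPED compresses the data vector to one number through the weight vector b = C⁻¹μ,θ / √(μ,θᵀ C⁻¹ μ,θ). A minimal sketch with a synthetic diagonal covariance and a random mean-derivative (all values assumed, not from the paper) verifies the defining property that the compressed statistic preserves the Fisher information on θ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                  # length of the data vector
C = np.diag(rng.uniform(0.5, 2.0, n))   # assumed (fixed) noise covariance
dmu = rng.normal(size=n)                # d mu / d theta at the fiducial point

# MOPED weight vector for one parameter; y = b @ x is then a single
# number carrying all the Fisher information on theta.
Cinv_dmu = np.linalg.solve(C, dmu)
F_full = dmu @ Cinv_dmu                 # full-data Fisher information
b = Cinv_dmu / np.sqrt(F_full)
F_compressed = (b @ dmu) ** 2           # Fisher info of the compressed datum
```

Because covariances of the compressed summaries live in a space of dimension equal to the number of parameters rather than the number of data, far fewer simulations are needed to estimate them, which is the source of the savings quoted above.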
Parameter identification of material constants in a composite shell structure
NASA Technical Reports Server (NTRS)
Martinez, David R.; Carne, Thomas G.
1988-01-01
One of the basic requirements in engineering analysis is the development of a mathematical model describing the system. Frequently, comparisons with test data are used as a measurement of the adequacy of the model, and an attempt is typically made to update or improve the model to provide a test-verified analysis tool. System identification provides a systematic procedure for accomplishing this task. The terms system identification, parameter estimation, and model correlation all refer to techniques that use test information to update or verify mathematical models. The goal of system identification is to improve the correlation of model predictions with measured test data and produce accurate, predictive models. For nonmetallic structures the modeling task is often difficult due to uncertainties in the elastic constants. A finite element model of the composite shell was created, which included uncertain orthotropic elastic constants. A modal survey test was then performed on the shell. The resulting modal data, along with the finite element model of the shell, were used in a Bayes estimation algorithm. This permitted the use of covariance matrices to weight the confidence in the initial parameter values as well as confidence in the measured test data. The estimation procedure also employed the concept of successive linearization to obtain an approximate solution to the original nonlinear estimation problem.
Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.
Pai, Manjunath P; Hong, Joseph; Krop, Lynne
2017-04-01
Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.
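As background to why a peak measurement adds information, a toy one-compartment, first-order-elimination sketch (not the study's Bayesian models; the concentrations and times below are hypothetical) shows how a peak/trough pair fixes the elimination rate constant and, with it, the exact AUC of the monoexponential segment between the two samples:

```python
import math

def auc_from_peak_trough(c_peak, t_peak, c_trough, t_trough):
    """Elimination rate and segment AUC for first-order decay.

    With C(t) = c_peak * exp(-ke * (t - t_peak)), the integral from
    t_peak to t_trough is exactly (c_peak - c_trough) / ke.
    """
    ke = math.log(c_peak / c_trough) / (t_trough - t_peak)   # 1/h
    auc_segment = (c_peak - c_trough) / ke                   # mg*h/L
    return ke, auc_segment

# Hypothetical samples: 40 mg/L at 1 h, 10 mg/L at 12 h.
ke, auc = auc_from_peak_trough(40.0, 1.0, 10.0, 12.0)
```

A trough alone leaves ke and the intercept confounded, which is one intuition for the degraded precision reported for trough-only estimates.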
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
An Economic Analysis of the Demand for Scientific Journals
ERIC Educational Resources Information Center
Berg, Sanford V.
1972-01-01
The purpose of this study is to demonstrate that economic analysis can be useful in modeling the scientific journal market. Of particular interest is the efficiency of pricing and page policies. To calculate losses due to inefficiencies, demand parameters are statistically estimated and used in a discussion of market efficiency. (3 references)
Generating multi-scale albedo look-up maps using MODIS BRDF/Albedo products and landsat imagery
USDA-ARS?s Scientific Manuscript database
Surface albedo determines radiative forcing and is a key parameter for driving Earth’s climate. Better characterization of surface albedo for individual land cover types can reduce the uncertainty in estimating changes to Earth’s radiation balance due to land cover change. This paper presents a mult...
Brinker, T; Raymond, B; Bijma, P; Vereijken, A; Ellen, E D
2017-02-01
Mortality of laying hens due to cannibalism is a major problem in the egg-laying industry. Survival depends on two genetic effects: the direct genetic effect of the individual itself (DGE) and the indirect genetic effects of its group mates (IGE). For hens housed in sire-family groups, DGE and IGE cannot be estimated separately using pedigree information, but their combined effect is estimated in the total breeding value (TBV). Genomic information provides information on actual genetic relationships between individuals and might be a tool to improve TBV accuracy. We investigated whether genomic information of the sire increased TBV accuracy compared with pedigree information, and we estimated genetic parameters for survival time. A sire model with pedigree information (BLUP) and a sire model with genomic information (ssGBLUP) were used. We used survival time records of 7290 crossbred offspring with intact beaks from four crosses. Cross-validation was used to compare the models. Using ssGBLUP did not improve TBV accuracy compared with BLUP, which is probably due to the limited number of sires available per cross (~50). Genetic parameter estimates were similar for BLUP and ssGBLUP. For both BLUP and ssGBLUP, total heritable variance (T²), expressed as a proportion of phenotypic variance, ranged from 0.03 ± 0.04 to 0.25 ± 0.09. Further research is needed on breeding value estimation for socially affected traits measured on individuals kept in single-family groups. © 2016 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Simon, Ehouarn; Samuelsen, Annette; Bertino, Laurent; Mouysset, Sandrine
2015-12-01
A sequence of one-year combined state-parameter estimation experiments has been conducted in a North Atlantic and Arctic Ocean configuration of the coupled physical-biogeochemical model HYCOM-NORWECOM over the period 2007-2010. The aim is to evaluate the ability of an ensemble-based data assimilation method to calibrate ecosystem model parameters in a pre-operational setting, namely the production of the MyOcean pilot reanalysis of the Arctic biology. For that purpose, four biological parameters (two phyto- and two zooplankton mortality rates) are estimated by assimilating weekly data such as, satellite-derived Sea Surface Temperature, along-track Sea Level Anomalies, ice concentrations and chlorophyll-a concentrations with an Ensemble Kalman Filter. The set of optimized parameters locally exhibits seasonal variations suggesting that time-dependent parameters should be used in ocean ecosystem models. A clustering analysis of the optimized parameters is performed in order to identify consistent ecosystem regions. In the north part of the domain, where the ecosystem model is the most reliable, most of them can be associated with Longhurst provinces and new provinces emerge in the Arctic Ocean. However, the clusters do not coincide anymore with the Longhurst provinces in the Tropics due to large model errors. Regarding the ecosystem state variables, the assimilation of satellite-derived chlorophyll concentration leads to significant reduction of the RMS errors in the observed variables during the first year, i.e. 2008, compared to a free run simulation. However, local filter divergences of the parameter component occur in 2009 and result in an increase in the RMS error at the time of the spring bloom.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.
1977-03-01
The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of ²³⁸U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.
NASA Astrophysics Data System (ADS)
Dutta, Argha; Das, Kalipada; Gayathri, N.; Menon, Ranjini; Nabhiraj, P. Y.; Mukherjee, Paramita
2018-03-01
The microstructural parameters domain size and microstrain have been estimated from Grazing Incidence X-ray Diffraction (GIXRD) data for an Ar9+ irradiated Zr-1Nb-1Sn-0.1Fe sample as a function of dpa (dose). Detailed studies using X-ray Diffraction Line Profile Analysis (XRDLPA) of the GIXRD data have been carried out to characterize these microstructural parameters. The reorientation of the grains due to the effect of irradiation at high dpa (dose) has been qualitatively assessed by the texture parameter P(hkl).
Determination techniques of Archie’s parameters: a, m and n in heterogeneous reservoirs
NASA Astrophysics Data System (ADS)
Mohamad, A. M.; Hamada, G. M.
2017-12-01
The determination of water saturation in a heterogeneous reservoir is becoming more challenging, as Archie's equation is suitable only for clean, homogeneous formations and Archie's parameters are highly dependent on the properties of the rock. This study focuses on the measurement of Archie's parameters in carbonate and sandstone core samples from Malaysian heterogeneous carbonate and sandstone reservoirs. Three techniques for the determination of Archie's parameters a, m and n were implemented: the conventional technique, core Archie parameter estimation (CAPE) and the three-dimensional regression (3D) technique. Using the results obtained by the three different techniques, water saturation graphs were produced to observe the differences in Archie's parameters and their impact on water saturation values. The differences in water saturation values primarily reflect the uncertainty level of Archie's parameters, mainly in carbonate and sandstone rock samples. The accuracy of Archie's parameters clearly has a profound impact on the calculated water saturation values in carbonate and sandstone reservoirs, because regions of high stress reduce electrical conduction as a result of the raised electrical heterogeneity of the heterogeneous carbonate core samples. Due to the unrealistic assumptions involved in the conventional method, it is better to use either the CAPE or 3D method to accurately determine Archie's parameters in heterogeneous as well as homogeneous reservoirs.
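The sensitivity discussed above can be sketched directly from Archie's equation, Sw = ((a · Rw) / (φᵐ · Rt))^(1/n). The parameter sets below are hypothetical, chosen only to show how the choice of a, m and n shifts the computed water saturation for the same log readings:

```python
def archie_sw(a, m, n, phi, rw, rt):
    """Archie water saturation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Hypothetical parameter sets (not the paper's values) applied to the
# same porosity and resistivity readings:
sw_conventional = archie_sw(a=1.0, m=2.0, n=2.0, phi=0.20, rw=0.05, rt=10.0)
sw_alternative  = archie_sw(a=0.8, m=2.2, n=1.8, phi=0.20, rw=0.05, rt=10.0)
```

Even these modest parameter shifts move the computed saturation by a few saturation units, which is why the technique used to determine a, m and n matters in heterogeneous rock.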
Pedestrian Detection by Laser Scanning and Depth Imagery
NASA Astrophysics Data System (ADS)
Barsi, A.; Lovas, T.; Molnar, B.; Somogyi, A.; Igazvolgyi, Z.
2016-06-01
Pedestrian flow is much less regulated and controlled than vehicle traffic. Estimating flow parameters would support many safety, security and commercial applications. The current paper discusses a method that enables acquiring information on pedestrian movements without disturbing or changing their motion. A profile laser scanner and a depth camera have been applied to capture the geometry of the moving people as time series. Procedures have been developed to derive complex flow parameters, such as count, volume, walking direction and velocity, from laser-scanned point clouds. Since no images are captured of the pedestrians' faces, no privacy issues are raised. The paper includes an accuracy analysis of the estimated parameters based on video footage as reference. Due to the dense point clouds, detailed geometry analysis has been conducted to obtain the height and shoulder width of pedestrians and to detect whether luggage is being carried. The derived parameters support safety (e.g. detecting critical pedestrian density in mass events), security (e.g. detecting prohibited baggage in endangered areas) and commercial applications (e.g. counting pedestrians at all entrances/exits of a shopping mall).
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship
NASA Astrophysics Data System (ADS)
Viswanathan, M. N.
1984-02-01
The recharge levels of unconfined aquifers recharged entirely by rainfall are determined by developing a model for the aquifer that estimates the water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence the recharge were assumed not only to be time-dependent but also to vary at different rates. Such a model is solved by the use of a recursive least-squares method, with the variable-rate parameter variation incorporated using a random walk model. From field tests conducted at the Tomago Sandbeds, Newcastle, Australia, it was observed that the assumption of variable rates of time dependency of the recharge parameters produced better estimates of water-table levels than constant recharge parameters. Considerable recharge due to rainfall occurred on the same day as the rainfall, while the increase in water-table level was insignificant on subsequent days. The level of recharge depends strongly upon the intensity and history of rainfall. Isolated rainfalls, even of the order of 25 mm day⁻¹, had no significant effect on the water-table levels.
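Recursive least squares with a random-walk parameter model can be sketched in a few lines (the synthetic data, noise levels and the scalar setting are assumptions for illustration, not the aquifer model itself). The random walk enters as an inflation Q of the parameter covariance at each step, which lets the estimate track a slowly drifting parameter:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
# Slowly drifting "true" parameter, modelled as a random walk.
theta_true = 1.0 + np.cumsum(rng.normal(scale=0.02, size=T))
x = rng.normal(size=T)                          # regressor
y = theta_true * x + rng.normal(scale=0.1, size=T)

theta, P = 0.0, 1.0        # parameter estimate and its variance
Q, R = 4e-4, 0.01          # random-walk variance, measurement variance
est = np.empty(T)
for t in range(T):
    P += Q                                       # random-walk inflation
    k = P * x[t] / (x[t] * P * x[t] + R)         # scalar RLS/Kalman gain
    theta += k * (y[t] - x[t] * theta)           # innovation update
    P = (1.0 - k * x[t]) * P                     # covariance update
    est[t] = theta
```

Setting Q = 0 recovers ordinary recursive least squares with a constant parameter; a nonzero Q is what allows the varying dependence rates described in the abstract.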
[Modern principles of the geriatric analysis in medicine].
Volobuev, A N; Zaharova, N O; Romanchuk, N P; Romanov, D V; Romanchuk, P I; Adyshirin-Zade, K A
2016-01-01
The proposed methodological principles of geriatric analysis in medicine make it possible to plan economic parameters of social protection of the population and the amount of financing required for medical care, and to define the structure of qualified medical personnel training. It is shown that the personal health and cognitive longevity of a person depend on an adequate systematic geriatric analysis and on the monitoring of biological parameters over time, which allows the efficiency of combined individual treatment to be estimated. The geriatric analysis, and in particular its genetic-mathematical component, aims at a reliable and objective estimation of life expectancy in the country and in the region by accounting for the influence of mutagenic factors both on a person's genes over their lifetime and on the population as a whole.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
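A minimal sketch of ensemble-based parameter estimation via state augmentation (a scalar toy model with assumed constants, not the coupled GCM): each ensemble member carries its own parameter value, and the cross-covariance between the parameter and the observed state supplies the parameter update, exactly as in the ensemble coupled data assimilation described above:

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.9                       # unknown model constant to be estimated
R = 0.05 ** 2                      # observation-error variance
x_true = 5.0
N, steps = 200, 150

ens_x = rng.normal(5.0, 0.5, N)    # state ensemble
ens_p = rng.uniform(0.5, 1.3, N)   # parameter ensemble (augmented state)

for _ in range(steps):
    # Advance the truth and take a noisy observation of the state.
    x_true = p_true * x_true + 0.5 + rng.normal(scale=0.02)
    y_obs = x_true + rng.normal(scale=0.05)
    # Forecast: each member uses its own parameter value.
    ens_x = ens_p * ens_x + 0.5 + rng.normal(scale=0.02, size=N)
    # Analysis: gains from ensemble (co)variances, perturbed observations.
    innov = y_obs + rng.normal(scale=0.05, size=N) - ens_x
    var_x = np.var(ens_x)
    k_x = var_x / (var_x + R)
    k_p = np.cov(ens_p, ens_x)[0, 1] / (var_x + R)
    ens_p = ens_p + k_p * innov
    ens_x = ens_x + k_x * innov

p_est = float(ens_p.mean())
```

The parameter has no dynamics of its own; it is corrected only through its sampled correlation with the observed state, which is why observing more state variables (as with the added atmospheric data above) improves multiple-parameter estimation.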
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is important for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that the initial treatment response of a small group or a single subject is reflected in the long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between the estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
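The integral reformulation at the heart of such methods can be shown on the simpler first-order respiratory model P(t) = E·V(t) + R·dV/dt + P0 (the paper uses a second-order model; the waveform and parameter values below are assumptions for illustration). Integrating both sides removes the noise-sensitive derivative term and leaves a problem linear in (E, R, P0):

```python
import numpy as np

# Integral form: ∫P dt = E ∫V dt + R (V - V(0)) + P0 * t,
# which is linear in the unknowns and solvable by least squares.
t = np.linspace(0.0, 2.0, 400)
V = 0.5 * (1.0 - np.cos(np.pi * t / 2.0))            # synthetic volume (L)
dV = 0.5 * (np.pi / 2.0) * np.sin(np.pi * t / 2.0)   # its derivative (L/s)
E_true, R_true, P0_true = 25.0, 10.0, 5.0            # assumed mechanics
P = E_true * V + R_true * dV + P0_true               # simulated pressure

def cumtrapz(y, tt):
    """Cumulative trapezoidal integral of y over tt."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(tt))
    return out

A = np.column_stack([cumtrapz(V, t), V - V[0], t])
coef, *_ = np.linalg.lstsq(A, cumtrapz(P, t), rcond=None)
E_hat, R_hat, P0_hat = coef
```

Because only integrals of the measured signals appear, high-frequency noise is averaged out rather than amplified, which is one intuition for the robustness reported above.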
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
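As context, a plain rejection-ABC sampler for repeated measures with normally distributed error looks as follows. This is the baseline protocol, not the authors' error-calibrated variant, and the prior and tolerance are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" repeated measures of one quantity with normal measurement error.
true_mu, noise_sd = 4.0, 1.0
observed = true_mu + rng.normal(0, noise_sd, 30)

def abc_rejection(observed, n_draws=20000, tol=0.1):
    """Basic ABC rejection: draw a parameter from the prior, simulate data
    under the same error model, accept when summaries are close enough."""
    accepted = []
    obs_mean = observed.mean()
    for _ in range(n_draws):
        mu = rng.uniform(0, 10)                      # flat prior on mu
        sim = mu + rng.normal(0, noise_sd, observed.size)
        if abs(sim.mean() - obs_mean) < tol:
            accepted.append(mu)
    return np.array(accepted)

posterior = abc_rejection(observed)
```

The accepted draws approximate the posterior; the paper's contribution is to account explicitly for the measurement error when computing acceptance probabilities rather than relying on a fixed tolerance.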
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.
2012-01-01
Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters, and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for their ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.
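A perturbed-observation ensemble Kalman filter update, one of the two filters tested, can be sketched for a toy two-variable state (LAI and soil moisture). The numbers and the observation operator below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, obs, obs_sd, H):
    """Perturbed-observation ensemble Kalman filter update.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                          # ensemble covariance
    Rm = np.atleast_2d(obs_sd ** 2)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)  # Kalman gain
    perturbed = obs + rng.normal(0, obs_sd, (n, 1))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# State: [LAI, soil moisture]; only LAI is observed here.
prior = rng.normal([3.0, 0.25], [0.6, 0.05], size=(200, 2))
H = np.array([[1.0, 0.0]])
posterior = enkf_update(prior, obs=4.0, obs_sd=0.2, H=H)
```

The update pulls the ensemble mean toward the observation and shrinks its spread; unobserved states are adjusted only through their sampled covariance with the observed one, which is why missing root-zone information limits what LAI assimilation can correct.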
Determination of the stability and control derivatives of the NASA F/A-18 HARV using flight data
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.; Spagnuolo, Joelle M.
1993-01-01
This report documents the research conducted for the NASA-Ames Cooperative Agreement No. NCC 2-759 with West Virginia University. A complete set of the stability and control derivatives for varying angles of attack from 10 deg to 60 deg were estimated from flight data of the NASA F/A-18 HARV. The data were analyzed with the use of the pEst software which implements the output-error method of parameter estimation. Discussions of the aircraft equations of motion, parameter estimation process, design of flight test maneuvers, and formulation of the mathematical model are presented. The added effects of the thrust vectoring and single surface excitation systems are also addressed. The results of the longitudinal and lateral directional derivative estimates at varying angles of attack are presented and compared to results from previous analyses. The results indicate a significant improvement due to the independent control surface deflections induced by the single surface excitation system, and at the same time, a need for additional flight data especially at higher angles of attack.
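The output-error method, simulating the model forward from candidate parameters and minimizing the discrepancy with measured outputs, can be illustrated on a toy first-order system; pEst applies the same idea to the full aircraft equations of motion. All signals below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def simulate(a, b, u):
    """First-order discrete model x[k+1] = a*x[k] + b*u[k], x[0] = 0."""
    x = np.zeros(u.size + 1)
    for k in range(u.size):
        x[k + 1] = a * x[k] + b * u[k]
    return x[1:]

# Generate a noisy record from the true parameters (a, b) = (0.9, 0.5).
u = rng.normal(0, 1, 300)                        # input, e.g. surface deflection
y = simulate(0.9, 0.5, u) + rng.normal(0, 0.05, 300)

# Output-error estimation: choose parameters minimizing the squared error
# between measured and model-predicted outputs.
cost = lambda p: np.sum((y - simulate(p[0], p[1], u)) ** 2)
res = minimize(cost, x0=[0.5, 0.1], method="Nelder-Mead")
a_hat, b_hat = res.x
```

Richer, more independent excitation of the inputs sharpens the cost surface, which is the benefit the single surface excitation system provides in the flight-test setting.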
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
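The connection between L1-norm error minimization and linear programming, which the branch and bound framework exploits for its bounds, can be sketched on a linear toy model with one gross outlier (the actual BRDF models are nonlinear in their parameters).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Linear-in-parameters fit with an outlier: L1 minimization as a linear program,
# min sum(t) subject to -t <= A @ x - b <= t, with variables z = [x, t].
A = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
b = A @ np.array([1.0, 2.0]) + rng.normal(0, 0.01, 50)
b[10] += 5.0                                     # gross outlier

n, p = A.shape
c = np.concatenate([np.zeros(p), np.ones(n)])    # minimize the residual bounds t
A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * p + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_l1 = res.x[:p]
```

The L1 solution essentially ignores the single outlier, one reason the paper finds L1 fitting more reliable than L2 on measured BRDF data.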
Estimating the Effects of the Terminal Area Productivity Program
NASA Technical Reports Server (NTRS)
Lee, David A.; Kostiuk, Peter F.; Hemm, Robert V., Jr.; Wingrove, Earl R., III; Shapiro, Gerald
1997-01-01
The report describes methods and results of an analysis of the technical and economic benefits of the systems to be developed in the NASA Terminal Area Productivity (TAP) program. A runway capacity model using parameters that reflect the potential impact of the TAP technologies is described. The runway capacity model feeds airport-specific models which are also described. The capacity estimates are used with a queuing model to calculate aircraft delays, and TAP benefits are determined by calculating the savings due to reduced delays. The report includes benefit estimates for Boston Logan and Detroit Wayne County airports. An appendix includes a description and listing of the runway capacity model.
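The delay-savings logic can be illustrated with the simplest queuing formula, the M/M/1 mean time in system W = 1/(mu - lambda); the report's queuing model is more detailed, and the rates below are purely illustrative.

```python
def mm1_delay(arrival_rate, service_rate):
    """Average time in an M/M/1 queue, W = 1/(mu - lambda); requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable")
    return 1.0 / (service_rate - arrival_rate)

# Benefit of a capacity improvement: delay saved per aircraft when the runway
# service rate rises from 60 to 66 operations per hour at 55 arrivals per hour.
before = mm1_delay(arrival_rate=55, service_rate=60)   # hours per aircraft
after = mm1_delay(arrival_rate=55, service_rate=66)
savings = before - after
```

Because delay grows sharply as arrivals approach capacity, even modest capacity gains from new technologies translate into large delay savings at congested airports.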
Simultaneous emission and transmission scanning in PET oncology: the effect on parameter estimation
NASA Astrophysics Data System (ADS)
Meikle, S. R.; Eberl, S.; Hooper, P. K.; Fulham, M. J.
1997-02-01
The authors investigated potential sources of bias due to simultaneous emission and transmission (SET) scanning and their effect on parameter estimation in dynamic positron emission tomography (PET) oncology studies. The sources of bias considered include: i) variation in transmission spillover (into the emission window) throughout the field of view, ii) increased scatter arising from rod sources, and iii) inaccurate deadtime correction. Net bias was calculated as a function of the emission count rate and used to predict distortion in [18F]2-fluoro-2-deoxy-D-glucose (FDG) and [11C]thymidine tissue curves simulating the normal liver and metastatic involvement of the liver. The effect on parameter estimates was assessed by spectral analysis and compartmental modeling. The various sources of bias approximately cancel during the early part of the study when count rate is maximal. Scatter dominates in the latter part of the study, causing apparently decreased tracer clearance which is more marked for thymidine than for FDG. The irreversible disposal rate constant, K_i, was overestimated by <10% for FDG and >30% for thymidine. The authors conclude that SET has a potential role in dynamic FDG PET but is not suitable for 11C-labeled compounds.
Energy awareness for supercapacitors using Kalman filter state-of-charge tracking
NASA Astrophysics Data System (ADS)
Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga
2015-11-01
Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
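A scalar Kalman filter tracking a single internal voltage conveys the SOC-tracking idea in miniature; the paper's three-branch equivalent circuit tracks three such states, and all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Track a supercapacitor's internal voltage (a proxy for state of charge)
# from noisy terminal-voltage readings with a scalar Kalman filter.
C, dt = 50.0, 1.0                    # capacitance [F], time step [s]
i_load = -0.5                        # constant discharge current [A]
steps = 200
v_true = 2.5 + np.arange(steps) * i_load * dt / C
z = v_true + rng.normal(0, 0.05, steps)   # noisy terminal-voltage measurements

v_hat, p = 2.0, 1.0                  # deliberately wrong initial estimate
q, r = 1e-6, 0.05 ** 2               # process / measurement noise variances
estimates = []
for k in range(steps):
    v_hat += i_load * dt / C         # predict with the charge-balance model
    p += q
    K = p / (p + r)                  # Kalman gain
    v_hat += K * (z[k] - v_hat)      # correct with the measurement
    p *= (1 - K)
    estimates.append(v_hat)
```

Averaging over many noisy readings, the filter recovers the internal state far more accurately than any single terminal-voltage observation allows, which is the core of the energy-awareness claim.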
Statistical fusion of continuous labels: identification of cardiac landmarks
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.
2011-03-01
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter (one of the key performance indices) is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle to the left ventricle in CINE cardiac data.
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired. PMID:27911830
Divergence-free smoothing for MRV data on stenosed carotid artery phantom flows
NASA Astrophysics Data System (ADS)
Im, Chaehyuk; Ko, Seungbin; Song, Simon
2017-11-01
Magnetic Resonance Velocimetry (MRV) is a versatile technique for measuring flow velocity using an MRI machine. It is frequently used for visualization and analysis of blood flows. However, it is difficult to accurately estimate hemodynamics parameters like wall shear stress (WSS) and oscillatory shear index (OSI) due to its low spatial resolution and low signal-to-noise ratio. We suggest a divergence-free smoothing (DFS) method to correct the erroneous velocity vectors obtained with MRV and improve the estimation accuracy of those parameters. Unlike previous studies on DFS for a wall-free flow, we developed an in-house code to apply a DFS method to a wall-bounded flow. A Hagen-Poiseuille flow and stenosed carotid artery phantom flows were measured with MRV. Each of them was analyzed for validation of the DFS code and confirmation of the accuracy improvement of hemodynamic parameters. We will discuss the effects of DFS on the improvement of the estimation accuracy of velocity vectors, WSS, and OSI in detail. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1A2B3009541).
Estimation of Gravitation Parameters of Saturnian Moons Using Cassini Attitude Control Flight Data
NASA Technical Reports Server (NTRS)
Krening, Samantha C.
2013-01-01
A major science objective of the Cassini mission is to study Saturnian satellites. The gravitational properties of each Saturnian moon are of interest not only to scientists but also to attitude control engineers. When the Cassini spacecraft flies close to a moon, a gravity gradient torque is exerted on the spacecraft due to the mass of the moon. The gravity gradient torque will alter the spin rates of the reaction wheels (RWA). The change of each reaction wheel's spin rate might lead to overspeed issues or operating the wheel bearings in an undesirable boundary lubrication condition. Hence, it is imperative to understand how the gravity gradient torque caused by a moon will affect the reaction wheels in order to protect the health of the hardware. The attitude control telemetry from low-altitude flybys of Saturn's moons can be used to estimate the gravitational parameter of the moon or the distance between the centers of mass of Cassini and the moon. Flight data from several low altitude flybys of three Saturnian moons, Dione, Rhea, and Enceladus, were used to estimate the gravitational parameters of these moons. Results are compared with values given in the literature.
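The underlying gravity-gradient torque, tau = (3 mu / r^5) r x (I r), is straightforward to evaluate. The gravitational parameter, position, and inertia matrix below are hypothetical stand-ins, not Cassini or moon values.

```python
import numpy as np

def gravity_gradient_torque(mu, r_vec, inertia):
    """Gravity-gradient torque tau = (3*mu/|r|^5) * r x (I @ r) on a spacecraft
    at position r_vec [m] relative to the attracting body's center of mass."""
    r = np.linalg.norm(r_vec)
    return 3.0 * mu / r ** 5 * np.cross(r_vec, inertia @ r_vec)

# Illustrative values only (hypothetical small moon, rough spacecraft inertia).
mu_moon = 7.3e9                               # gravitational parameter [m^3/s^2]
r_vec = np.array([500e3, 100e3, 0.0])         # position during a flyby [m]
inertia = np.diag([8800.0, 8100.0, 4600.0])   # principal inertia [kg m^2]
tau = gravity_gradient_torque(mu_moon, r_vec, inertia)
```

A spherically symmetric inertia matrix yields zero torque, so it is the asymmetry of the spacecraft's mass distribution, together with mu and range, that drives the reaction-wheel momentum change the telemetry records.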
Gizaw, Solomon; Goshme, Shenkute; Getachew, Tesfaye; Haile, Aynalem; Rischkowsky, Barbara; van Arendonk, Johan; Valle-Zárate, Anne; Dessie, Tadelle; Mwai, Ally Okeyo
2014-06-01
Pedigree recording and genetic selection in village flocks of smallholder farmers have been deemed infeasible by researchers and development workers. This is mainly due to the difficulty of sire identification under uncontrolled village breeding practices. A cooperative village sheep-breeding scheme was designed to achieve controlled breeding and implemented for Menz sheep of Ethiopia in 2009. In this paper, we evaluated the reliability of pedigree recording in village flocks by comparing genetic parameters estimated from data sets collected in the cooperative village and in a nucleus flock maintained under controlled breeding. Effectiveness of selection in the cooperative village was evaluated based on trends in breeding values over generations. Heritability estimates for 6-month weight recorded in the village and the nucleus flock were very similar. There was an increasing trend over generations in average estimated breeding values for 6-month weight in the village flocks. These results have a number of implications: the pedigree recorded in the village flocks was reliable; genetic parameters, which have so far been estimated based on nucleus data sets, can be estimated based on village recording; and appreciable genetic improvement could be achieved in village sheep selection programs under low-input smallholder farming systems.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, if there are collinearities, one or several predictor variables are usually omitted from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why the predictors are all important and should all be included in the model. Ridge regression is commonly used in such research to cope with collinearity. In this modeling approach, weights for the predictor variables are used in estimating the parameters. The estimation can then follow the likelihood concept; nowadays, a Bayesian version is an alternative. The Bayesian method has not matched the likelihood approach in popularity because of difficulties such as computation; nevertheless, with recent improvements in computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method performs better for relatively small sample sizes, and in the other settings it performs similarly to the likelihood method.
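The shrinkage that ridge regression applies under collinearity is visible directly in the closed-form (likelihood-based) estimator, which the Bayesian version parallels; the simulated predictors below are nearly collinear by construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two highly collinear predictors: OLS coefficients become unstable,
# while the ridge penalty balances and stabilizes them.
n = 100
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.01, n)                 # near-duplicate of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(0, 0.5, n)              # true coefficients (1, 1)

def ridge(X, y, lam):
    """Closed-form ridge estimate beta = (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)                      # lam = 0 recovers OLS
beta_ridge = ridge(X, y, 1.0)
```

The penalty leaves the well-identified sum of the two coefficients nearly untouched while collapsing the poorly identified difference toward zero, which is exactly the behavior a Bayesian prior on the coefficients induces.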
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N^2, where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
Swanson, Ryan D; Binley, Andrew; Keating, Kristina; France, Samantha; Osterman, Gordon; Day-Lewis, Frederick D.; Singha, Kamini
2015-01-01
The advection-dispersion equation (ADE) fails to describe commonly observed non-Fickian solute transport in saturated porous media, necessitating the use of other models such as the dual-domain mass-transfer (DDMT) model. DDMT model parameters are commonly calibrated via curve fitting, providing little insight into the relation between effective parameters and physical properties of the medium. There is a clear need for material characterization techniques that can provide insight into the geometry and connectedness of pore spaces related to transport model parameters. Here, we consider proton nuclear magnetic resonance (NMR), direct-current (DC) resistivity, and complex conductivity (CC) measurements for this purpose, and assess these methods using glass beads as a control and two different samples of the zeolite clinoptilolite, a material that demonstrates non-Fickian transport due to intragranular porosity. We estimate DDMT parameters via calibration of a transport model to column-scale solute tracer tests, and compare NMR, DC resistivity, and CC results, which reveal that grain size alone does not control transport properties and measured geophysical parameters; rather, volume and arrangement of the pore space play important roles. NMR cannot provide estimates of more-mobile and less-mobile pore volumes in the absence of tracer tests because these estimates depend critically on the selection of a material-dependent and flow-dependent cutoff time. Increased electrical connectedness from DC resistivity measurements is associated with greater mobile pore space determined from transport model calibration. CC was hypothesized to be related to length scales of mass transfer, but the CC response is unrelated to DDMT.
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. 
The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
Developing population models with data from marked individuals
Ryu, Hae Yeong; Shoemaker, Kevin T.; Kneip, Eva; Pidgeon, Anna; Heglund, Patricia; Bateman, Brooke; Thogmartin, Wayne E.; Akçakaya, Reşit
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
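A stochastic, matrix-based projection of the kind such parameter estimates feed into can be sketched with a two-stage (juvenile/adult) model; all vital rates and variabilities below are illustrative, not estimates derived from MAPS data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-stage stochastic matrix model with environmental variability in
# survival and fecundity; every rate here is illustrative.
mean_fec, mean_sj, mean_sa = 1.2, 0.4, 0.8
years, n0 = 50, np.array([50.0, 100.0])         # initial juveniles, adults

def project(n, years):
    traj = [n.sum()]
    for _ in range(years):
        fec = max(rng.normal(mean_fec, 0.2), 0.0)             # fecundity
        sj = min(max(rng.normal(mean_sj, 0.05), 0.0), 1.0)    # juvenile survival
        sa = min(max(rng.normal(mean_sa, 0.05), 0.0), 1.0)    # adult survival
        A = np.array([[0.0, fec], [sj, sa]])    # stage-structured matrix
        n = A @ n
        traj.append(n.sum())
    return np.array(traj)

traj = project(n0, years)
```

Repeating such projections over many sampled parameter sets is what turns the estimated means, temporal variances, and parameter uncertainties into viability assessments.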
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
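For scale, the simplest global strategy such metaheuristics improve on, multistart local search, already illustrates the calibration setting on a logistic growth model. This sketch is not scatter search itself, and the model and data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)

# Calibrate a logistic growth model dx/dt = r*x*(1 - x/K) to noisy data
# via multistart local optimization (a naive global strategy).
def logistic(t, r, K, x0=0.1):
    return K / (1 + (K / x0 - 1) * np.exp(-r * t))

t = np.linspace(0, 10, 40)
data = logistic(t, r=0.9, K=5.0) + rng.normal(0, 0.05, t.size)

def sse(p):
    r, K = p
    if r <= 0 or K <= 0:
        return 1e12          # penalize infeasible parameters
    return np.sum((data - logistic(t, r, K)) ** 2)

# Ten random restarts; keep the best local optimum found.
best = min(
    (minimize(sse, x0=rng.uniform([0.1, 1.0], [3.0, 10.0]), method="Nelder-Mead")
     for _ in range(10)),
    key=lambda res: res.fun,
)
r_hat, K_hat = best.x
```

Scatter search replaces the independent restarts with a managed reference set of diverse, high-quality solutions that are combined and improved, which is where its efficiency gains over plain multistart come from.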
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
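The weight derivation rests on the fact that, for independent studies, Fisher information is additive, so each study's share of the total information gives its percentage weight. A minimal sketch for the single-parameter, fixed-effect special case, with hypothetical within-study variances:

```python
# hypothetical within-study variances of a common effect estimate
variances = [0.04, 0.10, 0.25]

# each independent study contributes Fisher information 1/v_i;
# the total information is the sum, and percentage weights are the shares
infos = [1.0 / v for v in variances]
total = sum(infos)
weights = [100.0 * i / total for i in infos]
```

In the multi-parameter setting described above, the scalar 1/v_i is replaced by each study's information matrix contribution to the total variance matrix, but the additivity argument is the same.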
Fluid flow in porous media using image-based modelling to parametrize Richards' equation.
Cooper, L J; Daly, K R; Hallett, P D; Naveed, M; Koebernick, N; Bengough, A G; George, T S; Roose, T
2017-11-01
The parameters in Richards' equation are usually calculated from experimentally measured values of the soil-water characteristic curve and saturated hydraulic conductivity. The complex pore structures that often occur in porous media complicate such parametrization due to hysteresis between wetting and drying and the effects of tortuosity. Rather than estimate the parameters in Richards' equation from these indirect measurements, image-based modelling is used to investigate the relationship between the pore structure and the parameters. A three-dimensional, X-ray computed tomography image stack of a soil sample with voxel resolution of 6 μm has been used to create a computational mesh. The Cahn-Hilliard-Stokes equations for two-fluid flow, in this case water and air, were applied to this mesh and solved using the finite-element method in COMSOL Multiphysics. The upscaled parameters in Richards' equation are then obtained via homogenization. The effect on the soil-water retention curve due to three different contact angles, 0°, 20° and 60°, was also investigated. The results show that the pore structure affects the properties of the flow on the large scale, and different contact angles can change the parameters for Richards' equation.
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.
Uncertainty in predictions of forest carbon dynamics: separating driver error from model error.
Spadavecchia, L; Williams, M; Law, B E
2011-07-01
We present an analysis of the relative magnitude and contribution of parameter and driver uncertainty to the confidence intervals on estimates of net carbon fluxes. Model parameters may be difficult or impractical to measure, while driver fields are rarely complete, with data gaps due to sensor failure and sparse observational networks. Parameters are generally derived through some optimization method, while driver fields may be interpolated from available data sources. For this study, we used data from a young ponderosa pine stand at Metolius, Central Oregon, and a simple daily model of coupled carbon and water fluxes (DALEC). An ensemble of acceptable parameterizations was generated using an ensemble Kalman filter and eddy covariance measurements of net C exchange. Geostatistical simulations generated an ensemble of meteorological driving variables for the site, consistent with the spatiotemporal autocorrelations inherent in the observational data from 13 local weather stations. Simulated meteorological data were propagated through the model to derive the uncertainty on the CO2 flux resultant from driver uncertainty typical of spatially extensive modeling studies. Furthermore, the model uncertainty was partitioned between temperature and precipitation. With at least one meteorological station within 25 km of the study site, driver uncertainty was relatively small (~10% of the total net flux), while parameterization uncertainty was larger, ~50% of the total net flux. The largest source of driver uncertainty was due to temperature (~8% of the total flux). The combined effect of parameter and driver uncertainty was 57% of the total net flux. However, when the nearest meteorological station was > 100 km from the study site, uncertainty in net ecosystem exchange (NEE) predictions introduced by meteorological drivers increased by 88%.
Precipitation estimates were a larger source of bias in NEE estimates than were temperature estimates, although the biases partly compensated for each other. The time scales on which precipitation errors occurred in the simulations were shorter than the temporal scales over which drought developed in the model, so drought events were reasonably simulated. The approach outlined here provides a means to assess the uncertainty and bias introduced by meteorological drivers in regional-scale ecological forecasting.
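The partitioning of predictive uncertainty between parameters and drivers can be sketched with a one-at-a-time Monte Carlo scheme: propagate each ensemble through the model with the other factor held at its nominal value, and compare the resulting variances. The toy flux model and spreads below are illustrative, not the DALEC configuration.

```python
import random
random.seed(1)

def model(p, temp):
    # toy flux model: a single parameter scaling a single driver
    return p * temp

# ensembles standing in for parameter and driver uncertainty
params = [random.gauss(1.0, 0.2) for _ in range(5000)]   # e.g. EnKF posterior
temps = [random.gauss(10.0, 0.5) for _ in range(5000)]   # e.g. geostatistical sims

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# one-at-a-time partitioning: vary one factor, hold the other at its nominal value
v_param = var([model(p, 10.0) for p in params])
v_driver = var([model(1.0, t) for t in temps])
```

With these illustrative spreads the parameter ensemble dominates, mirroring the abstract's finding that parameterization uncertainty exceeded driver uncertainty when a nearby station was available; interactions between the two factors would require joint sampling.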
NASA Astrophysics Data System (ADS)
Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.
2008-12-01
Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh waves; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method.
Firstly, we will consider the northeastern China/Korean Peninsula region where average plane-layered structure is well known and relatively laterally homogenous. Secondly, we will consider the Middle East where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
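The time-shift device both TDMT and CAP use to absorb Green's-function error can be sketched as a grid search over sample lags that minimizes the waveform misfit. The toy traces below are hypothetical:

```python
# toy "observed" and "synthetic" traces: identical pulses offset by 3 samples
obs = [0, 0, 0, 1, 2, 1, 0, 0, 0, 0]
syn = [1, 2, 1, 0, 0, 0, 0, 0, 0, 0]

def misfit_at(lag):
    # L2 misfit after delaying (lag > 0) or advancing (lag < 0) the synthetic
    if lag >= 0:
        shifted = [0] * lag + syn[:len(syn) - lag]
    else:
        shifted = syn[-lag:] + [0] * (-lag)
    return sum((o - s) ** 2 for o, s in zip(obs, shifted))

# grid search over allowed lags, one shift per phase window as in CAP
best_lag = min(range(-5, 6), key=misfit_at)
```

CAP applies such a shift independently to each of the five phase windows, whereas TDMT uses a single shift per station; in both cases the shift trades velocity-model error against source-parameter bias.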
Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of south China
NASA Astrophysics Data System (ADS)
Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu
2005-10-01
Forest biomass is one of the most important parameters for global carbon stock models, yet it can only be estimated with great uncertainty. Remote sensing, especially SAR data, offers the possibility of providing relatively accurate forest biomass estimates at a lower cost than inventory for tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data and to assess the efficiency of NDVI, EVI and other vegetation indices in studying forest biomass, based on field survey data and GIS in south China. Based on vegetation indices and factor analysis, multiple regression models and neural networks were developed for biomass estimation for each species of the plantation. For each species, the best agreement between predicted biomass and that measured in the field survey was obtained with a neural network developed for that species. The relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are weakly correlated with the selected forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation, due to the saturation of bands and vegetation indices; multiple regression models that combine spectral and environmental variables improve biomass estimation performance. Compared with TM, relatively good estimation results can be achieved with RADARSAT-SAR, but both had limitations in tropical forest biomass estimation. The estimation results obtained are not accurate enough for forest management purposes at the forest stand level. However, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available. Therefore, this paper provides a better understanding of the relationships between remote sensing data and the forest stand parameters used in forest parameter estimation models.
Are camera surveys useful for assessing recruitment in white-tailed deer?
M. Colter Chitwood; Marcus A. Lashley; John C. Kilgo; Michael J. Cherry; L. Mike Conner; Mark Vukovich; H. Scott Ray; Charles Ruth; Robert J. Warren; Christopher S. DePerno; Christopher E. Moorman
2017-01-01
Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter...
Robust automatic measurement of 3D scanned models for the human body fat estimation.
Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo
2015-03-01
In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independent of pose and robust against topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, not requiring precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
Doherty, P.F.; Kendall, W.L.; Sillett, S.; Gustafson, M.; Flint, B.; Naughton, M.; Robbins, C.S.; Pyle, P.; Macintyre, Ian G.
2006-01-01
The effects of fishery practices on black-footed (Phoebastria nigripes) and Laysan albatross (Phoebastria immutabilis) continue to be a source of contention and uncertainty. Some of this uncertainty is a result of a lack of estimates of albatross demographic parameters such as survival. To begin to address these informational needs, a database of albatross banding and encounter records was constructed. Due to uncertainty concerning data collection and validity of assumptions required for mark-recapture analyses, these data should be used with caution. Although demographic parameter estimates are of interest to many, band loss rates, temporary emigration rates, and discontinuous banding effort can confound these estimates. We suggest a number of improvements in data collection that can help ameliorate problems, including the use of double banding and collecting data using a 'robust' design. Additionally, sustained banding and encounter efforts are needed to maximize the value of these data. With these modifications, the usefulness of the banding data could be improved markedly.
Parameter estimation and statistical analysis on frequency-dependent active control forces
NASA Astrophysics Data System (ADS)
Lim, Tau Meng; Cheng, Shanbao
2007-07-01
The active control forces of an active magnetic bearing (AMB) system are known to be frequency dependent in nature. This is due to the frequency-dependent nature of the AMB system, i.e. time lags in sensors, digital signal processing, amplifiers, filters, and eddy current and hysteresis losses in the electromagnetic coils. The stiffness and damping coefficients of these control forces can be assumed to be linear for small perturbations within the air gap. Numerous studies have also attempted to estimate these coefficients directly or indirectly without validating the model and verifying the results. This paper seeks to address these issues by proposing a one-axis electromagnetic suspension system to simplify the measurement requirements and eliminate control-force cross-coupling. It also proposes an on-line frequency-domain parameter estimation procedure with statistical information to provide a quantitative measure for model validation and results verification. This would lead to a better understanding of, and a design platform for, optimal vibration control schemes for the suspended system. This is achieved by injecting Schroeder Phased Harmonic Sequences (SPHS), a multi-frequency test signal, to persistently excite all possible suspended-system modes. By treating the system as a black box, the estimation of the "actual" stiffness and damping coefficients in the frequency domain is realised experimentally. The digitally implemented PID controller also facilitated changes to the feedback gains, allowing numerous system response measurements with their corresponding estimated stiffness and damping coefficients.
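Under the linear small-perturbation assumption, the dynamic stiffness of a one-axis suspension at angular frequency ω is F/X = k − mω² + icω, so k and c can be read off the real and imaginary parts of the measured frequency response. A noise-free sketch with assumed values (not the paper's experimental data):

```python
import math

m = 2.0                          # known suspended mass [kg] (assumed)
k_true, c_true = 5.0e4, 120.0    # "actual" stiffness [N/m] and damping [N s/m]

# excitation lines, e.g. the harmonics of a Schroeder-phased multisine
freqs = [10.0, 20.0, 40.0, 80.0, 160.0]   # [Hz]

# simulated dynamic-stiffness measurements F/X = k - m w^2 + i c w
ratios = [complex(k_true - m * (2 * math.pi * f) ** 2,
                  c_true * 2 * math.pi * f) for f in freqs]

# per-line closed-form inversion, averaged over the excited frequencies
k_est = sum(r.real + m * (2 * math.pi * f) ** 2
            for r, f in zip(ratios, freqs)) / len(freqs)
c_est = sum(r.imag / (2 * math.pi * f)
            for r, f in zip(ratios, freqs)) / len(freqs)
```

With measurement noise, the per-line inversions become a least-squares fit over all SPHS harmonics, and the residual scatter supplies the statistical confidence measures the paper uses for validation.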
Estimation of sojourn time in chronic disease screening without data on interval cases.
Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W
2000-03-01
Estimation of the sojourn time in the preclinical detectable period in disease screening, or of the transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases can be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on breast cancer screening data for the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
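With exponential sojourn times, the progressive three-state model gives closed-form quantities: the mean sojourn time is the reciprocal of the preclinical-to-clinical transition rate, and the preclinical occupancy probability follows from the Kolmogorov forward equations. A sketch using the abstract's MST of 1.9 years and a hypothetical incidence rate:

```python
import math

lam1 = 0.01        # hypothetical incidence rate: healthy -> preclinical [1/yr]
lam2 = 1.0 / 1.9   # preclinical -> clinical, so MST = 1/lam2 = 1.9 yr (abstract)

mst = 1.0 / lam2   # mean sojourn time in the preclinical detectable phase

def p_preclinical(t):
    # progressive three-state chain with exponential sojourns:
    # probability of being in the preclinical state at time t, starting healthy
    return lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
```

Fitting amounts to matching such occupancy probabilities to screen-detected prevalence at each round, which is why the rates are estimable without observing interval cases directly.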
NASA Astrophysics Data System (ADS)
Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula
2016-12-01
We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and forest water and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where optimisations were not made. The tuning provided estimates for full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress or other connections due to the set-up of the model formulations. The latter holds especially for vegetation phenology parameters. The least identifiable parameters include phenology parameters, parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to soil wilting point, drainage and moisture stress imposed on vegetation. For daily and half-hourly tunings the most important parameters were the ratio of leaf internal CO2 concentration to external CO2 and the parameter connecting relative humidity and soil dryness. Effectively the seasonal tuning transferred water from soil moisture into ET, and daily and half-hourly tunings reversed this process. 
The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water use efficiency. When compared to the seasonal tuning, the daily tuning is worse on the seasonal scale. However, the daily parametrisation reproduced the observations for the average diurnal cycle best, except for the GPP in the Sodankylä validation period, where the half-hourly tuned parameters were better. In general, the daily tuning provided the largest reduction in model-data mismatch. The model's response to drought was unaffected by our parametrisations, and further studies are needed on enhancing the drought response in JSBACH.
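The adaptive Metropolis idea can be sketched in one dimension: a random-walk Metropolis sampler whose proposal scale is tuned from the running acceptance rate (a crude stand-in for the full covariance adaptation of the AM algorithm). The standard-normal target below is purely illustrative, not the JSBACH posterior.

```python
import math, random
random.seed(2)

def log_post(theta):
    # stand-in log-posterior: a standard normal target
    return -0.5 * theta * theta

theta, scale = 5.0, 1.0   # deliberately poor start far from the mode
samples, accepted = [], 0
for i in range(20000):
    prop = theta + random.gauss(0, scale)
    # Metropolis accept/reject on the log scale
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(theta))):
        theta, accepted = prop, accepted + 1
    # crude adaptation: nudge the proposal scale toward ~40% acceptance
    if (i + 1) % 100 == 0:
        scale *= 1.1 if accepted / (i + 1) > 0.4 else 0.9
    samples.append(theta)

post = samples[5000:]          # discard burn-in
mean = sum(post) / len(post)
var = sum((x - mean) ** 2 for x in post) / len(post)
```

The retained chain approximates the full posterior, which is what yields the parameter distributions, correlations, and identifiability diagnostics reported above.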
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
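The state-augmentation trick behind ensemble-based parameter estimation can be sketched in a scalar setting: carry the uncertain parameter in the ensemble, regress it against the predicted observation via sample covariances, and apply a perturbed-observation Kalman update. The model, parameter, and noise levels below are toy values, not the coupled GCM setup.

```python
import random
random.seed(3)

a_true = 0.9      # "true" model parameter to be recovered (toy analogue of SPD)
obs_sd = 0.1
N = 500

# parameter ensemble drawn from a deliberately biased prior
ens = [random.gauss(0.5, 0.3) for _ in range(N)]

for step in range(50):
    u = random.uniform(0.5, 1.5)               # known model state/input this cycle
    y = a_true * u + random.gauss(0, obs_sd)   # observation of the model output
    # each member predicts the observation with its own parameter value
    hx = [a * u for a in ens]
    mh = sum(hx) / N
    ma = sum(ens) / N
    chh = sum((h - mh) ** 2 for h in hx) / (N - 1)
    cah = sum((a - ma) * (h - mh) for a, h in zip(ens, hx)) / (N - 1)
    gain = cah / (chh + obs_sd ** 2)           # ensemble Kalman gain for the parameter
    # perturbed-observation update pulls the ensemble toward the data
    ens = [a + gain * (y + random.gauss(0, obs_sd) - h) for a, h in zip(ens, hx)]

a_est = sum(ens) / N
```

The parameter carries no dynamics of its own; it is corrected purely through its sampled covariance with the predicted observations, which is the same mechanism the coupled-model study relies on.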
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
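The Dirichlet-process prior over divergence models can be sketched via its Chinese-restaurant-process representation, which makes explicit why intermediate numbers of divergence events are not strongly disfavored. The taxon count and concentration parameter below are hypothetical.

```python
import random
from collections import Counter
random.seed(4)

def crp_num_events(n, alpha):
    # one Chinese-restaurant-process draw: assign n taxon pairs to divergence
    # events; alpha controls the expected number of distinct events
    counts = []
    for _ in range(n):
        r = random.uniform(0, sum(counts) + alpha)
        for i, c in enumerate(counts):
            if r < c:
                counts[i] += 1
                break
            r -= c
        else:
            counts.append(1)   # a new table = a new shared divergence event
    return len(counts)

# prior over the number of divergence events for 10 co-distributed taxon pairs
draws = Counter(crp_num_events(10, alpha=2.0) for _ in range(5000))
```

With α = 2 the prior mass spreads across intermediate event counts rather than piling onto one fully shared or fully independent history, which is the flexibility the Conclusions attribute to the Dirichlet process.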
Quantum metrology of spatial deformation using arrays of classical and quantum light emitters
NASA Astrophysics Data System (ADS)
Sidhu, Jasminder S.; Kok, Pieter
2017-06-01
We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily assess how different deformations and light sources affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report on the optimal state that maximizes the QFI for d. Contrary to what may have been expected, classical states with higher average mode occupancies perform better in estimating d than single-photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and found to outperform all these states. We also find the existence of multiple optimal estimators for the measurement of d. Our results find applications in evaluating stresses and strains, fracture prevention in materials exhibiting great sensitivity to deformations, and selecting frequency-distinguished quantum sources from an array of reference sources.
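For a pure probe state under unitary encoding |ψ_d⟩ = e^{−idĜ}|ψ⟩, the relation between the generator's variance and estimation precision used above is the standard one:

```latex
% Pure probe state under unitary encoding: QFI equals four times the
% generator variance, and the quantum Cramer-Rao bound follows.
F_Q(d) \;=\; 4\,\operatorname{Var}(\hat G)
       \;=\; 4\left(\langle\psi|\hat G^{2}|\psi\rangle
             - \langle\psi|\hat G|\psi\rangle^{2}\right),
\qquad
\operatorname{Var}(\tilde d)\;\ge\;\frac{1}{M\,F_Q(d)}
```

where M is the number of independent repetitions. This is why maximizing the generator variance over input states maximizes the QFI, and why the optimal entangled state is built from the extremal eigenvectors of Ĝ.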
Colored noise effects on batch attitude accuracy estimates
NASA Technical Reports Server (NTRS)
Bilanow, Stephen
1991-01-01
The effects of colored noise on the accuracy of batch least-squares parameter estimates are investigated, with applications to attitude determination. The standard approaches used for estimating the accuracy of a computed attitude commonly assume uncorrelated (white) measurement noise, while in actual flight experience measurement noise often contains significant time correlations and thus is colored. For example, horizon scanner measurements from low Earth orbit were observed to show correlations over many minutes in response to large-scale atmospheric phenomena. A general approach to the analysis of the effects of colored noise is investigated, and interpretation of the resulting equations provides insight into the effects of any particular noise color and the worst-case noise coloring for any particular parameter estimate. It is shown that for certain cases, the effects of relatively short-term correlations can be accommodated by a simple correction factor. The errors in the predicted accuracy assuming white noise, and the reduced accuracy due to the suboptimal nature of estimators that do not take the noise color characteristics into account, are discussed. A variety of sample noise color characteristics is demonstrated through simulation, and their effects are discussed for sample estimation cases. Based on the analysis, options for dealing with the effects of colored noise are discussed.
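The impact of noise color on a batch least-squares accuracy estimate can be sketched in the simplest case, estimating a constant from n measurements: the white-noise formula σ²/n understates the true variance when the noise has AR(1)-type correlation R_ij = σ²ρ^|i−j|. The n, σ, and ρ below are illustrative.

```python
# batch least-squares estimation of a constant from n measurements (the mean).
# White-noise analysis predicts Var = sigma^2/n; AR(1)-colored noise with
# R_ij = sigma^2 * rho^|i-j| inflates the true variance of the estimate.
n, sigma, rho = 50, 1.0, 0.8   # illustrative values

white_pred = sigma ** 2 / n

# exact variance of the sample mean under the colored covariance
colored = sum(sigma ** 2 * rho ** abs(i - j)
              for i in range(n) for j in range(n)) / n ** 2

inflation = colored / white_pred   # how optimistic the white-noise formula is
```

For long batches the inflation approaches (1 + ρ)/(1 − ρ), a factor of 9 at ρ = 0.8, which is the kind of simple correction factor the analysis derives for short-term correlations.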
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; consequently, simplistic selections can result in suboptimal performance.
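The "extremely simple training step" amounts to fixing random hidden-layer weights and solving a regularized linear least-squares problem for the output weights. A minimal sketch on a toy 1-D regression task; the hidden-neuron count L and the regularization parameter are the sensitive choices discussed above, and all values here are illustrative.

```python
import math, random
random.seed(5)

def solve(A, b):
    # Gaussian elimination with partial pivoting (Gauss-Jordan form)
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# toy 1-D regression task: learn y = sin(x) on [0, 3)
xs = [i / 20 for i in range(60)]
ys = [math.sin(x) for x in xs]

L = 15        # number of hidden neurons -- the sensitive parameter
lam = 1e-4    # regularization parameter, as in the KELM

# the ELM trick: input weights and biases are random and then fixed
w = [random.uniform(-4, 4) for _ in range(L)]
b = [random.uniform(-4, 4) for _ in range(L)]

def hidden(x):
    # sigmoidal activations of the random hidden layer
    return [1 / (1 + math.exp(-(wi * x + bi))) for wi, bi in zip(w, b)]

H = [hidden(x) for x in xs]

# output weights from regularized normal equations: (H'H + lam*I) beta = H'y
HtH = [[sum(row[i] * row[j] for row in H) + (lam if i == j else 0.0)
        for j in range(L)] for i in range(L)]
Hty = [sum(row[i] * y for row, y in zip(H, ys)) for i in range(L)]
beta = solve(HtH, Hty)

pred = [sum(h * bt for h, bt in zip(hidden(x), beta)) for x in xs]
mse = sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)
```

Because only the linear solve depends on the data, trying several values of L is cheap, which is what makes the proposed two-small-training-set calibration of the sample-to-neuron relationship practical.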
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic features of a flight mechanics model include the mass and inertia properties and the major aerodynamic terms; obtaining them is a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on the analysis of estimation techniques applied to the estimation of stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate the processing tasks from multidisciplinary fields, such as parameter estimation, modeling, instrumentation, the definition of flight maneuvers, and validation. The system under study is a nonlinear model with six degrees of freedom and a linear aerodynamic model. Time-domain techniques are used for identification of the drone. First, the equation-error method is used to determine the structure of the aerodynamic model. Thereafter, the output-error method and the filter-error method are used to estimate the values of the aerodynamic coefficients. The Matlab parameter estimation scripts obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A considerable effort in this part of the research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter-error method is the most effective for estimating the parameters of the drone, due to the presence of process and measurement noise. The aerodynamic coefficients are validated using a numerical vortex-method analysis.
In addition, a simulation model incorporating the estimated parameters is used to compare the simulated and measured state behavior. Finally, a good correspondence between the results is demonstrated despite a limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
A Bayesian kriging approach for blending satellite and ground precipitation observations
Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.
2015-01-01
Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality. Consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependency on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of satellite-derived estimates and any number of other variables—for this research we include elevation. Prior distributions are defined for all model parameters and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling. The posterior distributions of these parameters are required for spatial estimation, and thus are obtained prior to implementing the spatial kriging model. This functional framework is applied to model parameters obtained by sampling from the posterior distributions, and the residuals of the linear model are subjected to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events.
This procedure also provides a means to estimate a full conditional distribution of the “true” observed precipitation value at each grid cell.
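The two-stage blend described above (a linear model in satellite estimates and elevation, followed by spatial interpolation of its residuals) can be sketched in a few lines. This is an illustrative reconstruction with made-up numbers and simple point estimates, not the authors' MCMC implementation; the covariance model, its parameters, and the synthetic gauge network are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gauge network (hypothetical): coordinates, satellite estimates, elevation
n = 30
xy = rng.uniform(0, 100, size=(n, 2))
sat = rng.uniform(5, 50, n)                      # satellite precipitation (mm)
elev = rng.uniform(0, 3000, n)                   # elevation (m)
gauge = 1.2 * sat + 0.002 * elev + rng.normal(0, 2, n)

# Stage 1: linear model gauge ~ satellite + elevation (OLS point estimate
# standing in for the paper's posterior sampling of the model parameters)
X = np.column_stack([np.ones(n), sat, elev])
beta, *_ = np.linalg.lstsq(X, gauge, rcond=None)
resid = gauge - X @ beta

# Stage 2: simple kriging of the residuals under an assumed exponential covariance
def cov(d, sill=4.0, length=30.0):
    return sill * np.exp(-d / length)

D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
K = cov(D) + 1e-6 * np.eye(n)          # small jitter for numerical stability

def blend(target_xy, target_sat, target_elev):
    """Blended precipitation estimate at an ungauged location."""
    trend = np.array([1.0, target_sat, target_elev]) @ beta
    d0 = np.linalg.norm(xy - target_xy, axis=1)
    w = np.linalg.solve(K, cov(d0))    # simple-kriging weights
    return trend + w @ resid

est = blend(np.array([50.0, 50.0]), 20.0, 1500.0)
```

At a gauge location the kriged residual reproduces the observed residual, so the blend returns (approximately) the gauge value, which is the behavior the full Bayesian version also has in the noise-free limit.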
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
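The core trick above, replacing a fine time discretization of the likelihood integral with a q-point quadrature rule, can be sketched for an inhomogeneous Poisson process. The log-linear intensity used here is an illustrative choice, not the paper's specific model:

```python
import numpy as np

def poisson_loglik(spike_times, theta, T, q=60):
    """Continuous-time inhomogeneous Poisson log-likelihood
        L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt
    with a log-linear intensity lambda(t) = exp(theta0 + theta1*t).
    The integral term uses Gauss-Legendre quadrature of order q, so the
    cost scales with q rather than with a fine time discretization."""
    t0, t1 = theta
    # sum of log-intensities at the observed events
    log_term = np.sum(t0 + t1 * spike_times)
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, T]
    x, w = np.polynomial.legendre.leggauss(q)
    t = 0.5 * T * (x + 1.0)
    integral = 0.5 * T * np.sum(w * np.exp(t0 + t1 * t))
    return log_term - integral
```

For a smooth intensity, q = 60 reproduces the exact integral to machine precision, consistent with the abstract's finding that this order suffices for most recordings.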
Study of boro-tellurite glasses doped with neodymium oxide
NASA Astrophysics Data System (ADS)
Sanjay, Kishore, N.; Sheoran, M. S.; Devi, S.
2018-05-01
Borotellurite glasses doped with Nd2O3 [xB2O3·(95-x)TeO2·5Nd2O3] have been prepared by the standard melt-quenching technique. The amorphous nature of the present system was confirmed by XRD patterns. Thermal parameters such as the glass transition temperature (Tg), crystallization temperature (Tc) and melting temperature (Tm) have been estimated from differential scanning calorimetry (DSC) traces. Density and molar volume have also been determined. It was found that Tg increased because Te-O bonds were replaced by a greater number of stronger B-O bonds, whereas density decreased with increasing B2O3 content due to the higher degree of cross-bonding between boron and non-bridging oxygen ions, which strengthens the glass network.
Development of a physiologically based pharmacokinetic model for flunixin in cattle (Bos taurus).
Leavens, Teresa L; Tell, Lisa A; Kissell, Lindsey W; Smith, Geoffrey W; Smith, David J; Wagner, Sarah A; Shelver, Weilin L; Wu, Huali; Baynes, Ronald E; Riviere, Jim E
2014-01-01
Frequent violation of flunixin residues in tissues from cattle has been attributed to non-compliance with the USFDA-approved route of administration and withdrawal time. However, the effect of administration route and physiological differences among animals on tissue depletion has not been determined. The objective of this work was to develop a physiologically based pharmacokinetic (PBPK) model to predict plasma, liver and milk concentrations of flunixin in cattle following intravenous (i.v.), intramuscular (i.m.) or subcutaneous (s.c.) administration for use as a tool to determine factors that may affect the withdrawal time. The PBPK model included blood flow-limited distribution in all tissues and elimination in the liver, kidney and milk. Regeneration of parent flunixin due to enterohepatic recirculation and hydrolysis of conjugated metabolites was incorporated in the liver compartment. Values for physiological parameters were obtained from the literature, and partition coefficients for all tissues but liver and kidney were derived empirically. Liver and kidney partition coefficients and elimination parameters were estimated from 14 pharmacokinetic studies (including five crossover studies) from the literature or government sources in which flunixin was administered i.v., i.m. or s.c. Model simulations compared well with data for the matrices following all routes of administration. Influential model parameters included those that may be age- or disease-dependent, such as clearance and rate of milk production. Based on the model, route of administration would not affect the estimated days to reach the tolerance concentration (0.125 mg/kg) in the liver of treated cattle. The majority of USDA-reported violative residues in liver were below the upper uncertainty predictions based on estimated parameters, which suggests the need to consider variability due to disease and age in establishing withdrawal intervals for drugs used in food animals.
The model predicted that extravascular routes of administration prolonged flunixin concentrations in milk, which could result in violative milk residues in treated cattle.
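The flow-limited structure described above can be illustrated with a deliberately tiny sketch: one plasma and one liver compartment, an i.v. bolus, and hepatic clearance acting on the concentration leaving the liver. All parameter values below are hypothetical placeholders, not the fitted cattle values, and the full model of course has many more compartments:

```python
import numpy as np

def simulate(dose_mg=100.0, t_end_h=24.0, dt=0.001):
    """Toy flow-limited PBPK sketch: plasma + liver, forward-Euler integration.
    Distribution into the liver is limited by blood flow Q_l; elimination is
    hepatic clearance CL acting on the venous (liver-exit) concentration."""
    V_p, V_l = 20.0, 6.0        # plasma / liver volumes (L), hypothetical
    Q_l = 90.0                  # liver blood flow (L/h), hypothetical
    P_l = 3.0                   # liver:plasma partition coefficient, hypothetical
    CL = 5.0                    # hepatic clearance (L/h), hypothetical

    n = int(t_end_h / dt)
    A_p, A_l = dose_mg, 0.0     # amounts (mg) after an i.v. bolus
    conc = np.empty(n)
    for i in range(n):
        C_p = A_p / V_p
        C_l = A_l / V_l
        conc[i] = C_p
        C_vl = C_l / P_l        # venous concentration leaving the liver
        dA_l = Q_l * (C_p - C_vl) - CL * C_vl
        dA_p = -Q_l * (C_p - C_vl)
        A_p += dt * dA_p
        A_l += dt * dA_l
    return conc

plasma = simulate()
```

Because elimination is flow-limited, swapping the dosing route (an absorption compartment feeding plasma instead of a bolus) changes the concentration-time profile shape but not the clearance terms, which is why route-dependent residue behavior in the paper shows up mainly in the milk predictions.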
Ahn, Yongjun; Yeo, Hwasoo
2015-01-01
The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived by minimizing the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles.
The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
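The "derive the density that minimizes total cost" step has a familiar closed form when access cost falls off as the square root of density. The cost decomposition below is a generic stand-in for the paper's regional and technological cost terms, and the coefficients are invented:

```python
import numpy as np

def optimal_density(c_station, c_access):
    """Minimize f(rho) = c_station*rho + c_access/sqrt(rho) over density rho.
    Setting f'(rho) = c_station - 0.5*c_access*rho**(-1.5) = 0 gives
    rho* = (c_access / (2*c_station))**(2/3). The square-root access-cost
    law (mean travel distance ~ 1/sqrt(density)) is an illustrative
    assumption, not the ERDEC model's exact cost function."""
    return (c_access / (2.0 * c_station)) ** (2.0 / 3.0)

def total_cost(rho, c_station, c_access):
    """Installation cost per unit area plus driver access cost per unit area."""
    return c_station * rho + c_access / np.sqrt(rho)

# Hypothetical coefficients: station cost vs. aggregate detour cost
rho_star = optimal_density(c_station=1000.0, c_access=50000.0)
```

A density map like the Daejeon case study then amounts to evaluating rho_star cell by cell with regionally varying coefficients.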
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2011-01-01
Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance Delta V due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this Delta V using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. A low lunar orbit example demonstrates the Delta V savings from the feasible solution to the optimal solution. The strategy's extensibility to more complex missions is discussed, as well as the limitations of its use.
Modelling ultrasound guided wave propagation for plate thickness measurement
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Dabak, Anand; Murthy, Nitish Krishna
2014-03-01
Structural health monitoring refers to monitoring the health of plate-like walls of large reactors, pipelines and other structures in terms of corrosion detection and thickness estimation. The objective of this work is to model the ultrasonic guided waves generated in a plate. A piezoelectric transducer is excited by an input pulse to generate ultrasonic guided Lamb waves in the plate, which are received by another piezoelectric transducer. In contrast with existing methods, we develop a mathematical model of the direct component of the signal (DCS) recorded at the terminals of the receiving transducer. The DCS model uses a maximum-likelihood technique to estimate the different parameters, namely the time delay of the signal due to the transducer delay and the amplitude scaling of all the Lamb wave modes due to attenuation, while taking into account the received signal's spreading in time due to dispersion. The maximum-likelihood estimate minimizes the energy difference between the experimental and the DCS model-generated signal. We demonstrate that the DCS model matches closely with experimentally recorded signals and show it can be used to estimate the thickness of the plate. The main idea of the thickness estimation algorithm is to generate a bank of DCS model-generated signals, each corresponding to a different thickness of the plate, and then find the closest match among these signals to the received signal, resulting in an estimate of the thickness of the plate. Our approach therefore provides a complementary suite of analytics to existing thickness monitoring approaches.
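The bank-matching step at the end of the abstract is easy to sketch: generate a model signal for each candidate thickness and pick the one minimizing the residual energy. The Gabor-pulse forward model and the delay-versus-thickness law below are toy assumptions standing in for the DCS model:

```python
import numpy as np

def estimate_thickness(received, thicknesses, model_signal):
    """Closest-match search over a bank of model-generated signals,
    minimizing the energy of the residual, mirroring the paper's
    maximum-likelihood criterion."""
    errs = [np.sum((received - model_signal(h)) ** 2) for h in thicknesses]
    return thicknesses[int(np.argmin(errs))]

# Toy forward model (hypothetical): thickness shifts a Gabor-like echo in time
t = np.linspace(0.0, 1.0, 2000)
def model_signal(h_mm):
    delay = 0.1 + 0.02 * h_mm          # made-up delay-vs-thickness law
    return np.exp(-((t - delay) / 0.01) ** 2) * np.cos(2 * np.pi * 200 * (t - delay))

bank = np.arange(5.0, 15.5, 0.5)       # candidate thicknesses (mm)
rng = np.random.default_rng(1)
rx = model_signal(9.5) + rng.normal(0, 0.05, t.size)
h_hat = estimate_thickness(rx, bank, model_signal)
```

The resolution of the estimate is set by the bank spacing, so in practice the bank is refined around the best coarse match.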
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be used when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
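The basic mechanics of a misclassification correction can be shown with the simplest case: recovering true land-use shares from observed shares given a known misclassification matrix. This is an illustrative matrix-inversion analogue of the regression corrections adapted in the dissertation, with a hypothetical 10% error rate:

```python
import numpy as np

# Misclassification matrix: M[i, j] = P(observed class j | true class i).
# The 10% off-diagonal rate is hypothetical, standing in for the accuracy
# assessment published alongside a satellite land-cover product.
M = np.array([[0.90, 0.10],
              [0.10, 0.90]])

def correct_shares(observed_shares, M):
    """Recover true class shares from observed (misclassified) shares by
    inverting p_obs = M.T @ p_true, then clipping/renormalizing so the
    result is a valid probability vector."""
    p_true = np.linalg.solve(M.T, observed_shares)
    p_true = np.clip(p_true, 0.0, 1.0)
    return p_true / p_true.sum()

p_true = np.array([0.7, 0.3])
p_obs = M.T @ p_true                  # what the satellite map would report
recovered = correct_shares(p_obs, M)
```

Note how even the mild 10% rate moves the observed shares from (0.7, 0.3) to (0.66, 0.34); in a discrete choice model the analogous distortion propagates into the parameter estimates, which is the bias the dissertation quantifies.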
Tran, Anh Phuong; Dafflon, Baptiste; Hubbard, Susan S.; ...
2016-04-25
Improving our ability to estimate the parameters that control water and heat fluxes in the shallow subsurface is particularly important due to their strong control on recharge, evaporation and biogeochemical processes. The objectives of this study are to develop and test a new inversion scheme to simultaneously estimate subsurface hydrological, thermal and petrophysical parameters using hydrological, thermal and electrical resistivity tomography (ERT) data. The inversion scheme, which is based on a nonisothermal, multiphase hydrological model, provides the desired subsurface property estimates in high spatiotemporal resolution. A particularly novel aspect of the inversion scheme is the explicit incorporation of the dependence of the subsurface electrical resistivity on both moisture and temperature. The scheme was applied to synthetic case studies, as well as to real datasets that were autonomously collected at a biogeochemical field study site in Rifle, Colorado. At the Rifle site, the coupled hydrological-thermal-geophysical inversion approach predicted the matric potential, temperature and apparent resistivity well, with a Nash-Sutcliffe efficiency criterion greater than 0.92. Synthetic studies found that neglecting the subsurface temperature variability, and its effect on the electrical resistivity in the hydrogeophysical inversion, may lead to an incorrect estimation of the hydrological parameters. The approach is expected to be especially useful for the increasing number of studies that are taking advantage of autonomously collected ERT and soil measurements to explore complex terrestrial system dynamics.
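The Nash-Sutcliffe efficiency used as the fit criterion above is a one-liner worth stating explicitly, since its zero point (the mean of the observations) is what makes values above 0.92 meaningful:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    NSE = 1 is a perfect fit; NSE = 0 means the model does no better than
    predicting the observed mean; NSE < 0 means it does worse."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
```

The paper's NSE > 0.92 therefore says the inversion explains over 92% of the observed variance in matric potential, temperature and apparent resistivity.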
NASA Astrophysics Data System (ADS)
Arason, P.; Barsotti, S.; De'Michieli Vitturi, M.; Jónsson, S.; Arngrímsson, H.; Bergsson, B.; Pfeffer, M. A.; Petersen, G. N.; Bjornsson, H.
2016-12-01
Plume height and mass eruption rate are the principal scale parameters of explosive volcanic eruptions. Weather radars are important instruments in estimating plume height, due to their independence of daylight, weather and visibility. The Icelandic Meteorological Office (IMO) operates two fixed-position C-band weather radars and two mobile X-band radars. All volcanoes in Iceland can be monitored by IMO's radar network, and during the initial phases of an eruption all available radars will be set to a more detailed volcano scan. When the radar volume data is retrieved at IMO headquarters in Reykjavík, an automatic analysis is performed on the radar data above the proximity of the volcano. The plume height is automatically estimated taking into account the radar scanning strategy, beam width, and a likely reflectivity gradient at the plume top. This analysis provides a distribution of the likely plume height. The automatically determined plume height estimates from the radar data are used as input to a numerical suite that calculates the eruptive source parameters through an inversion algorithm. This is done by using the coupled system DAKOTA-PlumeMoM, which solves the 1D plume model equations iteratively by varying the input values of vent radius and vertical velocity. The model accounts for the effect of wind on the plume dynamics, using atmospheric vertical profiles extracted from the ECMWF numerical weather prediction model. Finally, the resulting estimates of mass eruption rate are used to initialize the dispersal model VOL-CALPUFF to assess hazard due to tephra fallout, and communicated to London VAAC to support their modelling activity for aviation safety purposes.
NASA Astrophysics Data System (ADS)
Dorninger, P.; Koma, Z.; Székely, B.
2012-04-01
In recent years, laser scanning, also referred to as LiDAR, has proved to be an important tool for topographic data acquisition. Basically, laser scanning acquires a more or less homogeneously distributed point cloud. These points represent all natural objects like terrain and vegetation as well as man-made objects such as buildings, streets, powerlines, or other constructions. Due to the enormous amount of data provided by current scanning systems capturing up to several hundred thousand points per second, the immediate application of such point clouds for large scale interpretation and analysis is often prohibitive due to restrictions of the hardware and software infrastructure. To overcome this, numerous methods for the determination of derived products do exist. Commonly, Digital Terrain Models (DTM) or Digital Surface Models (DSM) are derived to represent the topography using a regular grid as data structure. The obvious advantages are a significant reduction of the amount of data and the introduction of an implicit neighborhood topology enabling the application of efficient post processing methods. The major disadvantages are the loss of 3D information (i.e. overhangs) as well as the loss of information due to the interpolation approach used. We introduced a segmentation approach enabling the determination of planar structures within a given point cloud. It was originally developed for the purpose of building modeling but has proven to be well suited for large scale geomorphological analysis as well. The result is an assignment of the original points to a set of planes. Each plane is represented by its plane parameters. Additionally, numerous quality and quantity parameters are determined (e.g. aspect, slope, local roughness, etc.). In this contribution, we investigate the influence of the control parameters required for the plane segmentation on the geomorphological interpretation of the derived product.
The respective control parameters may be determined either automatically (i.e. estimated from the given data) or manually (i.e. supervised parameter estimation). Additionally, the result might be influenced if data processing is performed locally (i.e. using tiles) or globally. Local processing of the data has the advantages of generally performing faster, having lower hardware requirements, and enabling the determination of more detailed information. By contrast, especially in geomorphological interpretation, global data processing enables determining large scale relations within the dataset analyzed. We investigated the influence of control parameter settings on the geomorphological interpretation on airborne and terrestrial laser scanning data sets of the landslide at Doren (Vorarlberg, Austria), on airborne laser scanning data of the western cordilleras of the central Andes, and on HRSC terrain data of the Mars surface. Topics discussed are the suitability of automated versus manual determination of control parameters, the influence of the definition of the area of interest (local versus global application) as well as computational performance.
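The per-plane attributes mentioned above (slope, aspect) follow directly from the fitted plane normal. A minimal sketch of the fitting step, using a total-least-squares plane via SVD rather than whatever estimator the authors' segmentation uses, looks like this:

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through a 3D point set: the normal is the
    right singular vector of the centered points with the smallest singular
    value. Returns (unit normal, slope in degrees, aspect in degrees).
    The aspect convention (azimuth of the downslope direction, clockwise
    from +y) is one common choice; conventions vary."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    if n[2] < 0:                      # orient the normal upward
        n = -n
    slope = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return n, slope, aspect
```

In a segmentation pipeline this is applied per plane segment; local roughness can then be taken as the RMS point-to-plane distance of the segment's members.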
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
The Applicability of Incoherent Array Processing to IMS Seismic Array Stations
NASA Astrophysics Data System (ADS)
Gibbons, S. J.
2012-04-01
The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. 
The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea nuclear tests but, due to signal incoherence, failed to contribute to the automatic event detections. It is demonstrated that the smoothed incoherent slowness estimates for the MJAR Pn phases for both tests indicate unambiguously the correct type of phase and a backazimuth estimate within 5 degrees of the great-circle backazimuth. The detection part of the algorithm is applicable to all IMS arrays, and spectrogram-based processing may offer a reduction in the false alarm rate for high frequency signals. Significantly, the local maxima of the scalar functions derived from the transformed spectrogram beams provide good estimates of the signal onset time. High frequency energy is of greater significance for lower event magnitudes and in, for example, the cavity decoupling detection evasion scenario. There is a need to characterize propagation paths with low attenuation of high frequency energy and situations in which parameter estimation on array stations fails.
A study on technical efficiency of a DMU (review of literature)
NASA Astrophysics Data System (ADS)
Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Sankar, J. Ravi
2017-11-01
In this research paper the concept of technical efficiency (due to Farrell) [1] of a decision making unit (DMU) is introduced and measures of technical and cost efficiency are derived. Timmer's [2] deterministic approach to estimating the Cobb-Douglas production frontier is presented, together with an extension of Timmer's method to any production frontier that is linear in parameters. The estimation of the parameters of the Cobb-Douglas production frontier by a linear programming approach is also discussed. Mark et al. [3] proposed a non-parametric method to assess efficiency. Nuti et al. [4] investigated the relationships among technical efficiency scores, weighted per capita cost and overall performance. Gahe Zing Samuel Yank et al. [5] used data envelopment analysis to assess technical efficiency in the banking sector.
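The linear programming estimation of a frontier that is linear in parameters can be sketched concretely. In log form the Cobb-Douglas frontier is log y = b0 + b1 log x - u with a one-sided inefficiency term u >= 0; a Timmer-style LP minimizes the total deviation subject to the frontier lying on or above every observation. This is a generic sketch of that idea, not code from the surveyed papers:

```python
import numpy as np
from scipy.optimize import linprog

def frontier_lp(log_x, log_y):
    """LP estimate of log y = b0 + b1*log x - u, u >= 0:
    minimize sum_i u_i = sum_i (b0 + b1*x_i - y_i) subject to
    b0 + b1*x_i >= y_i for every observation. Dropping the constant
    sum(y_i), the objective reduces to n*b0 + sum(x_i)*b1."""
    n = len(log_x)
    c = np.array([n, np.sum(log_x)])
    # Constraint b0 + b1*x_i >= y_i rewritten as -b0 - b1*x_i <= -y_i
    A_ub = np.column_stack([-np.ones(n), -np.asarray(log_x)])
    b_ub = -np.asarray(log_y)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
    return res.x  # (b0, b1)
```

The same construction extends to any frontier linear in its parameters by adding columns to the constraint matrix, which is the extension the paper describes.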
Sensorless Estimation and Nonlinear Control of a Rotational Energy Harvester
NASA Astrophysics Data System (ADS)
Nunna, Kameswarie; Toh, Tzern T.; Mitcheson, Paul D.; Astolfi, Alessandro
2013-12-01
It is important to perform sensorless monitoring of parameters in energy harvesting devices in order to determine the operating states of the system. However, physical measurement of these parameters is often a challenging task due to the unavailability of access points. This paper presents, as an example application, the design of a nonlinear observer and a nonlinear feedback controller for a rotational energy harvester. A dynamic model of a rotational energy harvester with its power electronic interface is derived and validated. This model is then used to design a nonlinear observer and a nonlinear feedback controller which yield a sensorless closed-loop system. The observer estimates the mechanical quantities from the measured electrical quantities, while the control law sustains power generation across a range of source rotation speeds. The proposed scheme is assessed through simulations and experiments.
Adult survival and population growth rate in Colorado big brown bats (Eptesicus fuscus)
O'Shea, T.J.; Ellison, L.E.; Stanley, T.R.
2011-01-01
We studied adult survival and population growth at multiple maternity colonies of big brown bats (Eptesicus fuscus) in Fort Collins, Colorado. We investigated hypotheses about survival using information-theoretic methods and mark-recapture analyses based on passive detection of adult females tagged with passive integrated transponders. We constructed a 3-stage life-history matrix model to estimate population growth rate (λ) and assessed the relative importance of adult survival and other life-history parameters to population growth through elasticity and sensitivity analysis. Annual adult survival at 5 maternity colonies monitored from 2001 to 2005 was estimated at 0.79 (95% confidence interval [95% CI] = 0.77-0.82). Adult survival varied by year and roost, with low survival during an extreme drought year, a finding with negative implications for bat populations because of the likelihood of increasing drought in western North America due to global climate change. Adult survival during winter was higher than in summer, and mean life expectancies calculated from survival estimates were lower than maximum longevity records. We modeled adult survival with recruitment parameter estimates from the same population. The study population was growing (λ = 1.096; 95% CI = 1.057-1.135). Adult survival was the most important demographic parameter for population growth. Growth clearly had the highest elasticity to adult survival, followed by juvenile survival and adult fecundity (approximately equivalent in rank). Elasticity was lowest for fecundity of yearlings. The relative importances of the various life-history parameters for population growth rate are similar to those of large mammals. © 2011 American Society of Mammalogists.
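The matrix-model machinery behind λ and the elasticities can be sketched in a few lines: λ is the dominant eigenvalue of the stage matrix, and the elasticity of λ to each entry combines the right (stable stage) and left (reproductive value) eigenvectors. The matrix entries below are hypothetical apart from the adult survival of 0.79 taken from the abstract:

```python
import numpy as np

# Illustrative 3-stage (juvenile, yearling, adult) female-based matrix;
# only the 0.79 adult survival is from the paper, the rest is made up.
A = np.array([[0.0, 0.3, 0.6],    # fecundities of yearlings and adults
              [0.7, 0.0, 0.0],    # juvenile -> yearling survival
              [0.0, 0.8, 0.79]])  # yearling -> adult, adult survival

eigvals, right = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]                      # population growth rate lambda
w = np.abs(right[:, k].real)               # stable stage distribution

lv, left = np.linalg.eig(A.T)
kv = np.argmax(lv.real)
v = np.abs(left[:, kv].real)               # reproductive values

# Elasticities e_ij = (a_ij / lambda) * v_i * w_j / (v . w); they sum to 1
E = (A / lam) * np.outer(v, w) / (v @ w)
```

Because the elasticities sum to one, they can be read directly as the proportional contributions of each vital rate to λ, which is how the abstract ranks adult survival above juvenile survival and fecundity.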
A general rough-surface inversion algorithm: Theory and application to SAR data
NASA Technical Reports Server (NTRS)
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.
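The Newton-type least-squares iteration at the heart of such an inversion can be sketched generically. The loop below is plain Gauss-Newton on a toy exponential forward model (the SPM forward model and the stochastic prior terms the abstract discusses are omitted; all names and values here are illustrative):

```python
import numpy as np

def gauss_newton(f, jac, theta0, y, n_iter=50):
    """Gauss-Newton least squares: theta <- theta + (J^T J)^-1 J^T r,
    where r = y - f(theta). A stochastic formulation would add prior
    and noise covariance terms to both the normal matrix and the
    right-hand side."""
    theta = np.asarray(theta0, float)
    for _ in range(n_iter):
        r = y - f(theta)
        J = jac(theta)
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# Toy 'forward model' y = a*exp(-b*t), standing in for a parameterized
# scattering response; (a, b) play the role of the surface parameters.
t = np.linspace(0.0, 2.0, 40)
f = lambda th: th[0] * np.exp(-th[1] * t)
jac = lambda th: np.column_stack([np.exp(-th[1] * t),
                                  -th[0] * t * np.exp(-th[1] * t)])
rng = np.random.default_rng(2)
y = f([2.0, 1.3]) + rng.normal(0, 0.01, t.size)
theta_hat = gauss_newton(f, jac, [1.0, 1.0], y)
```

Swapping in a different scattering model changes only f and jac, which is the modularity the abstract's final sentences emphasize.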
Evaluation of Oceanic Transport Statistics By Use of Transient Tracers and Bayesian Methods
NASA Astrophysics Data System (ADS)
Trossman, D. S.; Thompson, L.; Mecking, S.; Bryan, F.; Peacock, S.
2013-12-01
Key variables that quantify the time scales over which atmospheric signals penetrate into the oceanic interior and their uncertainties are computed using Bayesian methods and transient tracers from both models and observations. First, the mean residence times, subduction rates, and formation rates of Subtropical Mode Water (STMW) and Subpolar Mode Water (SPMW) in the North Atlantic and Subantarctic Mode Water (SAMW) in the Southern Ocean are estimated by combining a model and observations of chlorofluorocarbon-11 (CFC-11) via Bayesian Model Averaging (BMA), a statistical technique that weights model estimates according to how closely they agree with observations. Second, a Bayesian method is presented to find two oceanic transport parameters associated with the age distribution of ocean waters, the transit-time distribution (TTD), by combining an eddying global ocean model's estimate of the TTD with hydrographic observations of CFC-11, temperature, and salinity. Uncertainties associated with objectively mapping irregularly spaced bottle data are quantified by making use of a thin-plate spline and then propagated via the two Bayesian techniques. It is found that the subduction of STMW, SPMW, and SAMW is mostly an advective process, but up to about one-third of STMW subduction likely owes to non-advective processes. Also, while the formation of STMW is mostly due to subduction, the formation of SPMW is mostly due to other processes. About half of the formation of SAMW is due to subduction and half is due to other processes. A combination of air-sea flux, acting on relatively short time scales, and turbulent mixing, acting on a wide range of time scales, is likely the dominant SPMW erosion mechanism. Air-sea flux is likely responsible for most STMW erosion, and turbulent mixing is likely responsible for most SAMW erosion.
Two oceanic transport parameters, the mean age of a water parcel and the half-variance associated with the TTD, estimated using the model's tracers as data (BayesPOP) and those estimated using tracer observations as data (BayesObs) provide information about the sources of model biases, and give a more nuanced picture than can be found by comparing the simulated CFC-11 concentrations with observed CFC-11 concentrations. Using the differences between the two oceanic transport parameters from BayesObs and those from BayesPOP with and without a constant Peclet number assumption along each of the hydrographic cross-sections considered here, it is found that the model's diffusivity tensor biases lead to larger model errors than the model's mean advection time biases. However, it is also found that mean advection time biases in the model are statistically significant at the 95% level where mode water is found.
Abdulhameed, Mohanad F; Habib, Ihab; Al-Azizz, Suzan A; Robertson, Ian
2018-02-01
Cystic echinococcosis (CE) is a highly endemic parasitic zoonosis in Iraq with substantial impacts on livestock productivity and human health. The objectives of this study were to determine the abattoir-based occurrence of CE in marketed offal of sheep in Basrah province, Iraq, and to estimate, using a probabilistic modelling approach, the direct economic losses due to hydatid cysts. Based on detailed visual meat inspection, results from an active abattoir survey in this study revealed detection of hydatid cysts in 7.3% (95% CI: 5.4; 9.6) of 631 examined sheep carcasses. Post-mortem lesions of hydatid cyst were concurrently present in livers and lungs of more than half (54.3% (25/46)) of the positive sheep. Direct economic losses due to hydatid cysts in marketed offal were estimated using data from government reports, the one abattoir survey completed in this study, and expert opinions of local veterinarians and butchers. A Monte-Carlo simulation model was developed in a spreadsheet utilizing Latin Hypercube sampling to account for uncertainty in the input parameters. The model estimated the average annual economic losses associated with hydatid cysts in the liver and lungs of sheep marketed for human consumption in Basrah at US$72,470 (90% Confidence Interval (CI); ±11,302). The mean proportion of annual losses in meat products value (carcasses and offal) due to hydatid cysts in the liver and lungs of sheep marketed in Basrah province was estimated as 0.42% (90% CI; ±0.21). These estimates suggest that CE is responsible for considerable livestock-associated monetary losses in the south of Iraq. These findings can be used to inform different regional CE control program options in Iraq.
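The Latin Hypercube Monte Carlo scheme described above can be sketched with SciPy's stratified sampler; all input distributions and numeric values below are hypothetical placeholders, not the study's actual inputs.

```python
import numpy as np
from scipy.stats import qmc, uniform

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(10_000)   # stratified uniform draws in [0, 1)

# Hypothetical input distributions (illustrative values, not the study's data):
n_slaughtered = uniform(40_000, 20_000).ppf(u[:, 0])  # sheep marketed per year
prevalence    = uniform(0.054, 0.042).ppf(u[:, 1])    # cyst prevalence
loss_per_case = uniform(15.0, 10.0).ppf(u[:, 2])      # US$ lost per affected sheep

annual_loss = n_slaughtered * prevalence * loss_per_case
lo, hi = np.percentile(annual_loss, [5, 95])          # 90% interval
```

Mapping the stratified uniforms through each distribution's inverse CDF (`ppf`) is what makes this Latin Hypercube sampling rather than plain Monte Carlo: the input space is covered evenly, so percentile estimates stabilize with fewer draws.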
Khare, Rahul; Jaramaz, Branislav
2016-12-01
Unicondylar Knee Replacement (UKR) is an orthopedic surgical procedure to reduce pain and improve function in the knee. Load-bearing long-standing antero-posterior (AP) radiographs are typically used postoperatively to measure the leg alignment and assess the varus/valgus implant orientation. However, implant out-of-plane rotations, user variability, and X-ray acquisition parameters introduce errors in the estimation of the implant varus/valgus angles. Previous work has explored the accuracy of various imaging modalities in this estimation. In this work, we explored the impact of out-of-plane rotations and X-ray acquisition parameters on the estimation of implant component varus/valgus angles. For our study, we used a single CT scan and positioned femoral and tibial implants under varying orientations within the CT volume. Then, a custom software application was used to obtain digitally reconstructed radiographs from the CT scan with implants under varying orientations. Two users were then asked to manually estimate the varus/valgus angles for the implants. We found that there was significant inter-user variability (p < 0.05) in the varus/valgus estimates for the two users. However, the 'ideal' measurements, obtained using actual implant orientations, showed small errors due to variations in implant orientation. We also found that variation in the projection center does not have a statistically significant impact (p < 0.01) on the estimation of implant varus/valgus angles. We conclude that manual estimates of UKR implant varus/valgus orientations are unreliable.
Bouhrara, Mustapha; Spencer, Richard G.
2015-01-01
Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination by conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. 
Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
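The second Bayesian approach, marginalizing over an unknown noise level, can be illustrated with a toy two-pool decay model; the signal equation, time grid, relaxation constants, and priors below are illustrative stand-ins, not the actual mcDESPOT equations.

```python
import numpy as np

def marginal_posterior_f(t, y, T_my, T_free, f_grid, sigma_grid):
    """Posterior of a myelin-like fraction f on a grid, with the noise
    standard deviation marginalized out numerically (uniform priors).
    A toy two-pool model, not the actual mcDESPOT signal equations."""
    d_sigma = sigma_grid[1] - sigma_grid[0]
    post = np.empty_like(f_grid)
    for i, f in enumerate(f_grid):
        model = f * np.exp(-t / T_my) + (1 - f) * np.exp(-t / T_free)
        resid2 = ((y - model) ** 2).sum()
        # likelihood with the unknown noise level summed (marginalized) out
        post[i] = (sigma_grid ** (-t.size)
                   * np.exp(-0.5 * resid2 / sigma_grid ** 2)).sum() * d_sigma
    return post / (post.sum() * (f_grid[1] - f_grid[0]))

t = np.linspace(0.01, 0.2, 30)
true_f = 0.2
y = true_f * np.exp(-t / 0.02) + (1 - true_f) * np.exp(-t / 0.08)
f_grid = np.linspace(0.0, 0.5, 101)
sigma_grid = np.linspace(1e-3, 0.2, 200)
post = marginal_posterior_f(t, y, 0.02, 0.08, f_grid, sigma_grid)
f_hat = f_grid[np.argmax(post)]
```

The key step is that the noise level never has to be estimated: it is integrated out of the likelihood, which is exactly the dimensionality-reduction device the abstract describes for nuisance parameters.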
Ellington, Sascha R; Devine, Owen; Bertolli, Jeanne; Martinez Quiñones, Alma; Shapiro-Mendoza, Carrie K; Perez-Padilla, Janice; Rivera-Garcia, Brenda; Simeone, Regina M; Jamieson, Denise J; Valencia-Prado, Miguel; Gilboa, Suzanne M; Honein, Margaret A; Johansson, Michael A
2016-10-01
Zika virus (ZIKV) infection during pregnancy is a cause of congenital microcephaly and severe fetal brain defects, and it has been associated with other adverse pregnancy and birth outcomes. To estimate the number of pregnant women infected with ZIKV in Puerto Rico and the number of associated congenital microcephaly cases, we conducted a modeling study from April to July 2016. Using parameters derived from published reports, outcomes were modeled probabilistically using Monte Carlo simulation. We used uncertainty distributions to reflect the limited information available for parameter values. Given the high level of uncertainty in model parameters, interquartile ranges (IQRs) are presented as primary results. Outcomes were modeled for pregnant women in Puerto Rico, which currently has more confirmed ZIKV cases than any other US location. Zika virus infection in pregnant women. Number of pregnant women infected with ZIKV and number of congenital microcephaly cases. We estimated that an IQR of 5900 to 10 300 pregnant women (median, 7800) might be infected during the initial ZIKV outbreak in Puerto Rico. Of these, an IQR of 100 to 270 infants (median, 180) may be born with microcephaly due to congenital ZIKV infection from mid-2016 to mid-2017. In the absence of a ZIKV outbreak, an IQR of 9 to 16 cases (median, 12) of congenital microcephaly are expected in Puerto Rico per year. The estimate of 5900 to 10 300 pregnant women who might be infected with ZIKV provides an estimate for the number of infants that could potentially have ZIKV-associated adverse outcomes. Including baseline cases of microcephaly, we estimated that an IQR of 110 to 290 total cases of congenital microcephaly, mostly attributable to ZIKV infection, could occur from mid-2016 to mid-2017 in the absence of effective interventions. The primary limitation in this analysis is uncertainty in model parameters. 
Multivariate sensitivity analyses indicated that the cumulative incidence of ZIKV infection and risk of microcephaly given maternal infection in the first trimester were the primary drivers of both magnitude and uncertainty in the estimated number of microcephaly cases. Increased information on these parameters would lead to more precise estimates. Nonetheless, the results underscore the need for urgent actions being undertaken in Puerto Rico to prevent congenital ZIKV infection and prepare for affected infants.
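The probabilistic modeling step can be sketched as follows; every input distribution below is a hypothetical placeholder rather than one of the paper's parameter estimates, and serves only to show how uncertainty distributions propagate to an IQR.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# All distributions below are hypothetical placeholders:
pregnancies   = rng.uniform(30_000, 34_000, n)  # annual pregnancies
infection_inc = rng.uniform(0.15, 0.35, n)      # cumulative ZIKV incidence
risk_tri1     = rng.uniform(0.01, 0.05, n)      # microcephaly risk if infected in trimester 1
frac_tri1     = 1.0 / 3.0                       # assume infections spread evenly over trimesters

infected = pregnancies * infection_inc
cases = infected * frac_tri1 * risk_tri1
q25, q50, q75 = np.percentile(cases, [25, 50, 75])  # report median and IQR
```

Because each draw multiplies independent uncertain inputs, the output spread reflects joint parameter uncertainty, and reporting the 25th-75th percentile range mirrors the paper's choice of IQRs as primary results.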
Optimizing the Hydrological and Biogeochemical Simulations on a Hillslope with Stony Soil
NASA Astrophysics Data System (ADS)
Zhu, Q.
2017-12-01
Stony soils are widely distributed in hilly areas. However, traditional pedotransfer functions are not reliable in predicting the soil hydraulic parameters of these soils due to the impact of rock fragments. Therefore, large uncertainties and errors may exist in hillslope hydrological and biogeochemical simulations in stony soils due to poor estimates of soil hydraulic parameters. In addition, homogeneous soil hydraulic parameters are usually used in traditional hillslope simulations. However, soil hydraulic parameters are spatially heterogeneous on the hillslope. This may also make the simulations unreliable. In this study, we obtained soil hydraulic parameters using five different approaches on a tea hillslope in Taihu Lake basin, China. These five approaches included (1) Rosetta-predicted and spatially homogeneous, (2) Rosetta-predicted and spatially heterogeneous, (3) Rosetta-predicted, rock-fragment-corrected and spatially homogeneous, (4) Rosetta-predicted, rock-fragment-corrected and spatially heterogeneous, and (5) extracted from observed soil-water retention curves fitted by a dual-pore function and spatially heterogeneous (observed). These five sets of soil hydraulic properties were then input into Hydrus-3D and DNDC to simulate the soil hydrological and biogeochemical processes. The aim of this study is to test two hypotheses. First, considering the spatial heterogeneity of soil hydraulic parameters will improve the simulations. Second, considering the impact of rock fragments on soil hydraulic parameters will improve the simulations.
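A minimal sketch of a rock-fragment correction, assuming the common first-order rule that stones store negligible water, so bulk retention scales with the fine-earth volume fraction; the study's exact correction procedure may differ.

```python
def correct_for_rock_fragments(theta_fine, rv):
    """Scale fine-earth water content to bulk-soil values assuming rock
    fragments store negligible water: theta_bulk = theta_fine * (1 - Rv),
    where Rv is the volumetric rock-fragment fraction. A common first-order
    correction; the study's exact procedure may differ."""
    if not 0.0 <= rv < 1.0:
        raise ValueError("Rv must be in [0, 1)")
    return theta_fine * (1.0 - rv)

# e.g. 30% stones by volume reduce a fine-earth porosity of 0.45 to 0.315
theta_bulk = correct_for_rock_fragments(0.45, 0.30)
```

Applying such a factor to pedotransfer-predicted retention parameters is one simple way to implement the "rock fragment corrected" variants (approaches 3 and 4) described above.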
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
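The serial EnSRF update with an augmented state (model state plus parameters) can be sketched as follows, using the Whitaker-Hamill square-root form; the toy one-state, one-parameter system below is an illustration, not the storm model or radar operator of the study.

```python
import numpy as np

def ensrf_update(ens, obs, obs_var, H):
    """Serial EnSRF update for one scalar observation (square-root form of
    Whitaker and Hamill); rows of `ens` are members of the augmented state."""
    mean = ens.mean(axis=0)
    pert = ens - mean
    hx = pert @ H                                  # obs-space perturbations
    hphT = hx @ hx / (len(ens) - 1)                # HPH^T (scalar)
    phT = pert.T @ hx / (len(ens) - 1)             # PH^T (vector)
    K = phT / (hphT + obs_var)                     # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (hphT + obs_var)))
    mean = mean + K * (obs - mean @ H)             # update ensemble mean
    pert = pert - alpha * np.outer(hx, K)          # update perturbations
    return mean + pert

# Toy augmented system: we observe state x once; x was forecast toward the
# unknown parameter b, so the ensemble covariance links x and b.
rng = np.random.default_rng(0)
true_b, n_mem = 2.0, 200
b_prior = rng.normal(1.0, 1.0, n_mem)              # wrong prior mean for b
x_fcst = b_prior + rng.normal(0.0, 0.1, n_mem)     # forecast state depends on b
ens = np.column_stack([x_fcst, b_prior])
H = np.array([1.0, 0.0])                           # we observe x only
ens = ensrf_update(ens, obs=true_b, obs_var=0.01, H=H)
```

The parameter is never observed directly; it is corrected through its sample covariance with the observed state, which is the mechanism behind simultaneous state and parameter estimation in the study.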
Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja
2014-01-01
Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant-rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death-based and a coalescent-based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. it contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infected individuals. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. 
In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, P; Corwin, F; Ghita, M
Purpose: Three patient radiation dose monitoring and tracking (PRDMT) systems have been in operation at this institution for the past 6 months. There is useful information that should be disseminated to those who are considering installation of PRDMT programs. In addition, there are “problems” uncovered in the process of estimating fluoroscopic “peak” skin dose (PSD), especially for those patients who received interventional angiographic studies in conjunction with surgical procedures. Methods: Upon exporting the PRDMT data to the Microsoft Excel program, the peak skin dose can be estimated by applying various correction factors, including attenuation due to the tabletop and examination mattress, table height, tabletop translation, backscatter, etc. A procedure was established to screen and divide the PRDMT-reported radiation dose and estimated PSD into three different threshold levels to assess potential skin injuries, to assist patient follow-up and risk management, and to provide radiation dosimetry information in case of a “Sentinel Event”. Results: The Radiation Dose Structured Report (RDSR) was found to be the prerequisite for the PRDMT systems to work seamlessly. Also, the geometrical parameters (gantry and table orientation) displayed by the equipment are not necessarily implemented in a “patient centric” manner, which could result in a large error in the PSD estimation. Since the PRDMT systems obtain their pertinent data from the DICOM tags, including the polarity (+ and − signs), the geometrical parameters need to be verified. Conclusion: PRDMT systems provide a more accurate PSD estimation than previously possible as air-kerma-area dose meters become widely implemented. However, care should be exercised to correctly apply the geometrical parameters in estimating the patient dose. 
In addition, further refinement is necessary for these software programs to account for all geometrical parameters, such as tabletop translation in the z-direction.
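A skin-dose correction chain of the kind described can be sketched as a product of factors; the inverse-square scaling from the reference point to the skin is standard, but the attenuation and backscatter values below are illustrative assumptions, not calibrated factors from the abstract.

```python
def estimated_psd(air_kerma_ref, d_ref, d_skin, table_factor=0.85,
                  backscatter=1.3):
    """Peak-skin-dose estimate from the reference-point air kerma:
    inverse-square scaling from the reference distance d_ref to the actual
    skin distance d_skin, times a tabletop/mattress attenuation factor and
    a backscatter factor. The factor values here are illustrative only."""
    if d_skin <= 0 or d_ref <= 0:
        raise ValueError("distances must be positive")
    return air_kerma_ref * (d_ref / d_skin) ** 2 * table_factor * backscatter

# e.g. skin at the reference distance: only attenuation and backscatter apply
psd = estimated_psd(air_kerma_ref=1.0, d_ref=0.6, d_skin=0.6)
```

The table-height term in the abstract enters through `d_skin`: raising the table moves the skin closer to the focal spot and increases the dose by the inverse-square factor, which is why the geometrical parameters must carry the correct sign.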
Marginal estimator for the aberrations of a space telescope by phase diversity
NASA Astrophysics Data System (ADS)
Blanc, Amandine; Mugnier, Laurent; Idier, Jérôme
2017-11-01
In this communication, we propose a novel method for estimating the aberrations of a space telescope from phase diversity data. The images recorded by such a telescope can be degraded by optical aberrations due to design, fabrication or misalignments. Phase diversity is a technique that allows the estimation of aberrations. The only estimator found in the relevant literature is based on a joint estimation of the aberrated phase and the observed object. We recall this approach and study the behavior of this joint estimator by means of simulations. We then propose a novel marginal estimator of the phase alone. It is obtained by integrating the observed object out of the problem; indeed, this object is a nuisance parameter in our problem. This drastically reduces the number of unknowns and provides better asymptotic properties. This estimator is implemented and its properties are validated by simulation. Its performance is equal to or even better than that of the joint estimator for the same computing cost.
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2013-10-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. 
Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Reducing bias in survival under non-random temporary emigration
Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann
2014-01-01
Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the time series (terminal bias). Under random temporary emigration, unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates, and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified; nonetheless, there are conditions where terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on the survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data were effective at reducing terminal bias only when individuals were tracked for a minimum of two years. The high adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal) was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model. 
Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost effective method to explore the potential impacts of using different sources of data to produce high quality demographic data to inform management.
Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure
NASA Astrophysics Data System (ADS)
Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang
2018-04-01
S-velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving inversion results when estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which is due to parallel computing, although it often suffers from a lack of clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, our joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for the S-velocity and density; (3) the two-stage inversion procedure proposed in this paper achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the indistinctness of, and undesired disturbances to, the inversion results obtained from the first stage, we apply the second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved using fast split Bregman iterations. 
Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data is noisy, with a signal-to-noise ratio of 5.
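The TV-regularization second stage can be illustrated on a 1D trace; for simplicity this sketch minimizes a smoothed TV objective by plain gradient descent rather than the paper's fast split Bregman iterations, and all parameter values are illustrative.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=1e-2, n_iter=1000, step=0.05):
    """Denoise by gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed total-variation objective (a slow stand-in for split Bregman)."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)   # derivative of the smoothed |dx| term
        grad = x - y                   # data-fidelity gradient
        grad[:-1] -= lam * g           # TV gradient, left endpoint of each diff
        grad[1:] += lam * g            # TV gradient, right endpoint of each diff
        x = x - step * grad
    return x

# Noisy piecewise-constant profile standing in for a line of inverted traces
rng = np.random.default_rng(1)
truth = np.repeat([0.0, 1.0, 0.3], 50)
noisy = truth + rng.normal(0.0, 0.15, truth.size)
clean = tv_denoise_1d(noisy)
```

TV suppresses trace-to-trace jitter while preserving sharp jumps, which is why it is a natural choice for deblurring trace-by-trace inversion results.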
NASA Technical Reports Server (NTRS)
Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.;
2016-01-01
We used one-way laser ranging data from International Laser Ranging Service (ILRS) ground stations to NASA's Lunar Reconnaissance Orbiter (LRO) for a demonstration of orbit determination. In the one-way setup, the state of LRO and the parameters of the spacecraft clock and all involved ground station clocks must be estimated simultaneously. This setup introduces many correlated parameters that are resolved by using a priori constraints. Moreover, the observation data coverage and errors accumulating from the dynamical and the clock modeling limit the maximum arc length. The objective of this paper is to investigate the effect of the arc length, the dynamical and clock modeling accuracy, and the observation data coverage on the accuracy of the results. We analyzed multiple arcs using lengths of 2 and 7 days during a one-week period in Science Mission phase 02 (SM02, November 2010) and compared the trajectories, the post-fit measurement residuals and the estimated clock parameters. We further incorporated simultaneous passes from multiple stations within the observation data to investigate the expected improvement in positioning. The estimated trajectories were compared to the nominal LRO trajectory and the clock parameters (offset, rate and aging) to the results found in the literature. Arcs estimated with one-way ranging data had differences of 5-30 m compared to the nominal LRO trajectory. While the estimated LRO clock rates agreed closely with the a priori constraints, the aging parameters absorbed clock modeling errors with increasing clock arc length. Because of high correlations between the different ground station clocks and due to limited clock modeling accuracy, their differences agreed with the literature only in order of magnitude. We found that the incorporation of simultaneous passes requires improved modeling in particular to enable the expected improvement in positioning. 
We found that gaps in the observation data coverage of over 12 h (approximately equal to 6 successive LRO orbits) prevented the successful estimation of arcs with lengths shorter or longer than 2 or 7 days with our given modeling.
Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan
2010-01-01
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Model based estimation of sediment erosion in groyne fields along the River Elbe
NASA Astrophysics Data System (ADS)
Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard
2008-11-01
River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts the water quality and is thus important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict due to complex, time-dependent physical, chemical, and biological processes, as well as due to a lack of information. Therefore, in engineering practice the values of erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to a large uncertainty in the erosion parameters. The goal of the presented study is to compare the deterministic approach, which assumes a constant critical erosion shear stress, with an innovative approach which treats the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability will be estimated. The results presented here are based on field measurements and numerical modelling of the River Elbe groyne fields.
A Coarse Alignment Method Based on Digital Filters and Reconstructed Observation Vectors
Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Wang, Zhicheng
2017-01-01
In this paper, a coarse alignment method based on apparent gravitational motion is proposed. Due to interference in complex situations, the true observation vectors, which are calculated from the apparent gravity, are contaminated. The sources of the interference are analyzed in detail, and then a low-pass digital filter is designed in this paper to eliminate the high-frequency noise of the measured observation vectors. To extract the effective observation vectors from the inertial sensors’ outputs, a parameter recognition and vector reconstruction method is designed, where an adaptive Kalman filter is employed to estimate the unknown parameters. Furthermore, a robust filter, which is based on Huber’s M-estimation theory, is developed to address the outliers of the measured observation vectors caused by the maneuvering of the vehicle. A comprehensive experiment, which contains a simulation test and a physical test, is designed to verify the performance of the proposed method, and the results show that the proposed method is equivalent to the popular apparent velocity method in swaying mode, but superior to current methods in moving mode when the strapdown inertial navigation system (SINS) is under entirely self-contained conditions. PMID:28353682
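The Huber M-estimation idea behind the robust filter can be sketched with an iteratively reweighted (IRLS) location estimate; the gravity-magnitude data and outlier values below are toy assumptions, not the paper's measurements.

```python
import numpy as np

def huber_mean(x, k=1.345, n_iter=50):
    """Robust location estimate via IRLS with Huber weights: outliers are
    down-weighted rather than discarded, as in M-estimation-based filtering."""
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))   # robust (MAD) scale
    if scale == 0:
        scale = 1.0
    for _ in range(n_iter):
        r = np.abs(x - mu) / scale
        w = np.where(r <= k, 1.0, k / np.maximum(r, k))   # Huber weights
        mu = (w * x).sum() / w.sum()
    return mu

# Gravity-magnitude measurements corrupted by maneuver-induced outliers
rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(9.81, 0.02, 200), [15.0, 16.0, 17.0]])
mu = huber_mean(data)
```

The ordinary mean of this sample is visibly pulled upward by the three outliers, while the Huber estimate stays near the clean value, which is the robustness property exploited during vehicle maneuvers.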
Ogawa, Takako; Misumi, Masahiro; Sonoike, Kintake
2017-09-01
Cyanobacteria are photosynthetic prokaryotes and are widely used as model organisms for photosynthesis research. Partly due to their prokaryotic nature, however, estimation of photosynthesis by chlorophyll fluorescence measurements is sometimes problematic in cyanobacteria. For example, the plastoquinone pool is reduced in dark-acclimated samples of many cyanobacterial species, so that conventional protocols developed for land plants cannot be directly applied to cyanobacteria. Even for the estimation of the simplest chlorophyll fluorescence parameter, Fv/Fm, an additional protocol, such as the addition of DCMU or illumination with weak blue light, is necessary. In this review, these problems in the measurement of chlorophyll fluorescence in cyanobacteria are introduced, and solutions to those problems are given.
Real time estimation of ship motions using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Triantafyllou, M. S.; Bodson, M.; Athans, M.
1983-01-01
The estimation of the heave, pitch, roll, sway, and yaw motions of a DD-963 destroyer is studied, using Kalman filtering techniques, for application to VTOL aircraft landing. The governing equations are obtained from hydrodynamic considerations in the form of linear differential equations with frequency-dependent coefficients. In addition, nonminimum phase characteristics arise due to the spatial integration of the water wave forces. The resulting transfer matrix function is irrational and nonminimum phase. The conditions for a finite-dimensional approximation are considered and the impact of the various parameters is assessed. A detailed numerical application for a DD-963 destroyer is presented, and simulations of the estimates obtained from Kalman filters are discussed.
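Once a finite-dimensional approximation is in hand, the Kalman filtering machinery reduces to a standard predict/update cycle. The sketch below uses a generic two-state constant-velocity model as a stand-in for the much richer ship-motion dynamics; all matrices are illustrative:

```python
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and covariance
    F, Q : state transition and process-noise covariance
    H, R : measurement matrix and measurement-noise covariance
    z    : measurement vector
    """
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy heave model: position + velocity state, position measured
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-3 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, F, Q, H, R, np.array([1.0]))
```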
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of the lungs of patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters due to inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method, and the performance is compared with the extended Kalman filter (EKF). Results show superior performance of the EM algorithm in estimating the lung boundary.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub-grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme is a fixed perturbed parameter scheme, in which the values of uncertain parameters are changed between ensemble members but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model that does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches for forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
Phenological features for winter rapeseed identification in Ukraine using satellite data
NASA Astrophysics Data System (ADS)
Kravchenko, Oleksiy
2014-05-01
Winter rapeseed is one of the major oilseed crops in Ukraine; it is characterized by high profitability and often grown in violation of crop rotation requirements, leading to soil degradation. Therefore, rapeseed identification using satellite data is a promising direction for operational estimation of the crop acreage and rotation control. The crop acreage of rapeseed is about 0.5-3% of the total area of Ukraine, which poses a major problem for identification using satellite data [1]. While winter rapeseed could be classified using biomass features observed during autumn vegetation, these features are quite unstable due to field-to-field differences in planting dates as well as spatial and temporal heterogeneity in soil moisture availability. For this reason, autumn biomass features can be used only locally (at NUTS-3 level) and are not suitable for large-scale, country-wide crop identification. We propose to use crop parameters at the flowering phenological stage for crop identification and present a method for parameter estimation using time series of moderate-resolution data. Rapeseed flowering can be observed as a bell-shaped peak in the red reflectance time series. However, the duration of the flowering period observable by satellite is only about two weeks, which is quite a short period given inevitable cloud coverage issues. Thus we need daily time series to resolve the flowering peak, which limits us to moderate-resolution data. We used daily atmospherically corrected MODIS data from the Terra and Aqua satellites within the 90-160 DOY period to compute the features. An empirical BRDF correction is used to minimize angular effects. We used Gaussian Processes Regression (GPR) for temporal interpolation to minimize errors due to residual cloud coverage, atmospheric correction, and mixed-pixel problems. We estimate 12 parameters for each time series. They are the red and near-infrared (NIR) reflectance and the timing at four stages: before and after the flowering, at the peak flowering, and at the maximum NIR level. We used a Support Vector Machine for data classification. The most relevant feature for classification is the flowering peak timing, followed by the flowering peak magnitude. The dependency of the peak time on latitude, used as a sole feature, can reject 90% of non-rapeseed pixels, which greatly reduces the imbalance of the classification problem. To assess the accuracy of our approach, we performed a stratified area frame sampling survey in Odessa region (NUTS-2 level) in 2013. The omission error is about 12.6%, while the commission error is higher, at the level of 22%. This is explained by the high viewing-angle composition criterion used in our approach to mitigate the high cloud coverage problem. However, the errors are quite stable spatially and can easily be corrected by a regression technique. To do this, we performed area estimation for Odessa region using a regression estimator and obtained good area estimation accuracy, with a 4.6% error (1σ). [1] Gallego, F.J., et al., Efficiency assessment of using satellite data for crop area estimation in Ukraine. Int. J. Appl. Earth Observ. Geoinf. (2014), http://dx.doi.org/10.1016/j.jag.2013.12.013
Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission
NASA Astrophysics Data System (ADS)
Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G.; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.
2018-01-01
The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10-5. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10-5 and the Sun's gravitational oblateness, J2⊙ = (2.246 ± 0.022) × 10-7. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, ĠM⊙/GM⊙ = (-6.13 ± 1.47) × 10-14, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |Ġ|/G to be <4 × 10-14 per year.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry. However, only a few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreases by 38.2% every 30 days. We believe that the current paper is the first to address this issue, and the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems, for both power system maintenance models and preventive maintenance models.
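The reliability function of the three-parameter Dagum distribution follows directly from its CDF, F(t) = (1 + (t/b)^(-a))^(-p). The sketch below uses illustrative parameter values, since the fitted station values are elided in the abstract:

```python
import numpy as np

def dagum_reliability(t, scale, a, p):
    """R(t) = 1 - F(t) for a three-parameter Dagum distribution.

    F(t) = (1 + (t/scale)^(-a))^(-p), with scale > 0 and shapes a, p > 0.
    The parameter values used below are illustrative, not the fitted
    station values from the paper.
    """
    t = np.asarray(t, dtype=float)
    return 1.0 - (1.0 + (t / scale) ** (-a)) ** (-p)

# reliability of the time between failures over successive 30-day windows
t = np.array([30.0, 60.0, 90.0])
R = dagum_reliability(t, scale=100.0, a=2.0, p=1.5)
```

Evaluating R(t) on a 30-day grid like this is exactly how a fitted distribution translates into a maintenance schedule: once R(t) drops below a tolerated level, preventive maintenance is due.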
Moore, K L; Mrode, R; Coffey, M P
2017-10-01
Visual image analysis (VIA) of carcass traits provides the opportunity to estimate carcass primal cut yields on large numbers of slaughter animals. This allows carcases to be better differentiated and farmers to be paid based on the primal cut yields. It also enables more accurate genetic selection, due to the high volume of data, allowing breeders to breed cattle that better meet abattoir specifications and market requirements. In order to implement genetic evaluations for VIA primal cut yields, genetic parameters must first be estimated, and that was the aim of this study. Slaughter records from the UK prime slaughter population for VIA carcass traits were available from two processing plants. After edits, there were 17 765 VIA carcass records for six primal cut traits, carcass weight, and the EUROP conformation and fat class grades. Heritability estimates after traits were adjusted for age ranged from 0.32 (0.03) for EUROP fat to 0.46 (0.03) for VIA Topside primal cut yield. Adjusting the VIA primal cut yields for carcass weight reduced the heritability estimates, with estimates of primal cut yields ranging from 0.23 (0.03) for Fillet to 0.29 (0.03) for Knuckle. Genetic correlations between VIA primal cut yields adjusted for carcass weight were strong, ranging from 0.40 (0.06) between Fillet and Striploin to 0.92 (0.02) between Topside and Silverside. EUROP conformation was also positively correlated with the VIA primal cuts, with genetic correlation estimates ranging from 0.59 to 0.84, whereas EUROP fat was estimated to have moderate negative correlations with primal cut yields, with estimates ranging from -0.11 to -0.46. Based on these genetic parameter estimates, genetic evaluation of VIA primal cut yields can be undertaken to allow the UK beef industry to select carcases that better meet abattoir specifications and market requirements.
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2014-09-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, the mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to the metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes in pollutant composition. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The absolute global temperature potential with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
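The Monte Carlo propagation described above can be sketched for a toy two-pollutant case. The distributional choices (normal emissions, lognormal metric values) and the relative uncertainties are illustrative assumptions, not the study's actual inputs:

```python
import numpy as np

def mc_temperature_uncertainty(emis_mean, emis_rsd, metric_mean, metric_rsd,
                               n=50_000, seed=0):
    """Monte Carlo spread of T = sum_i emissions_i * metric_i.

    Relative standard deviations (rsd) and the normal/lognormal choices
    are illustrative assumptions, not the distributions used in the study.
    Returns the 5th, 50th, and 95th percentiles of T.
    """
    rng = np.random.default_rng(seed)
    emis = rng.normal(emis_mean, np.abs(emis_mean) * emis_rsd,
                      size=(n, len(emis_mean)))
    # lognormal metric values: strictly positive and skewed
    s2 = np.log(1.0 + np.asarray(metric_rsd) ** 2)
    metric = rng.lognormal(np.log(metric_mean) - 0.5 * s2, np.sqrt(s2),
                           size=(n, len(metric_mean)))
    T = (emis * metric).sum(axis=1)
    return np.percentile(T, [5, 50, 95])

# two hypothetical pollutants: emissions in arbitrary units, metric values
# playing the role of temperature-response coefficients
emis_mean = np.array([10.0, 5.0])
lo, mid, hi = mc_temperature_uncertainty(emis_mean, [0.05, 0.10],
                                         np.array([1.0, 0.5]), [0.2, 0.3])
```

Because each sampled input can be switched off (set its rsd to zero), the same machinery yields the per-source contributions to total uncertainty discussed in the abstract.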
Analysis of genetic diversity in Bolivian llama populations using microsatellites.
Barreta, J; Gutiérrez-Gil, B; Iñiguez, V; Romero, F; Saavedra, V; Chiri, R; Rodríguez, T; Arranz, J J
2013-08-01
South American camelids (SACs) have a major role in the maintenance and potential future of rural Andean human populations. More than 60% of the 3.7 million llamas living worldwide are found in Bolivia. Due to the lack of studies focusing on genetic diversity in Bolivian llamas, this analysis investigates both the genetic diversity and structure of 12 regional groups of llamas that span the greater part of the range of distribution for this species in Bolivia. The analysis of 42 microsatellite markers in the considered regional groups showed that, in general, there were high levels of polymorphism (a total of 506 detected alleles; average PIC per marker: 0.66), which are comparable with those reported for other populations of domestic SACs. The estimated diversity parameters indicated that there was high intrapopulational genetic variation (average number of alleles and average expected heterozygosity per marker: 12.04 and 0.68, respectively) and weak genetic differentiation among populations (FST range: 0.003-0.052). In agreement with these estimates, Bolivian llamas showed a weak genetic structure and an intense gene flow between all the studied regional groups, which is due to the exchange of reproductive males between the different flocks. Interestingly, the groups for which the largest pairwise FST estimates were observed, Sud Lípez and Nor Lípez, showed a certain level of genetic differentiation that is probably due to the pattern of geographic isolation and limited communication infrastructures of these southern localities. Overall, the population parameters reported here may serve as a reference when establishing conservation policies that address Bolivian llama populations. © 2012 Blackwell Verlag GmbH.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
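Optimization with respect to the attitude via the q-method can be sketched directly: build Davenport's K matrix from the weighted vector observations and take the eigenvector of its largest eigenvalue as the optimal quaternion, with no a priori attitude required. The sketch below is the standard q-method (scalar-last quaternion convention) without the paper's additional parameter-estimation loop:

```python
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    """Davenport q-method for Wahba's problem: find A minimizing
    sum_i w_i |b_i - A r_i|^2, without an a priori attitude estimate.

    b_vecs, r_vecs : (n, 3) unit vectors in body and reference frames
    Returns the optimal attitude matrix A (so that b_i ~ A r_i).
    """
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    # optimal quaternion = eigenvector of the largest eigenvalue of K
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, -1]                       # scalar-last convention
    qv, q4 = q[:3], q[3]
    cx = np.array([[0.0, -qv[2], qv[1]],
                   [qv[2], 0.0, -qv[0]],
                   [-qv[1], qv[0], 0.0]])
    return (q4**2 - qv @ qv) * np.eye(3) + 2*np.outer(qv, qv) - 2*q4*cx

# example: a 90-degree rotation about z recovered from two vector pairs
r = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
b = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
A = q_method(b, r, [0.5, 0.5])
```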
Pewarchuk, W; VanderBoom, J; Blajchman, M A
1992-01-01
A patient blood sample with an unexpectedly high hemoglobin level, high hematocrit, low white blood cell count, and low platelet count was recognized as being spurious based on previously available data. Repeated testing of the original sample showed a gradual return of all parameters to expected levels. We provide evidence that overfilling of blood collection vacuum tubes can lead to inadequate sample mixing, which, in combination with settling of the cellular contents in the collection tubes, can result in spuriously abnormal hematological parameters as estimated by an automated method.
NASA Astrophysics Data System (ADS)
Tugendhat, Tim M.; Schäfer, Björn Malte
2018-05-01
We investigate a physical, composite alignment model for both spiral and elliptical galaxies and its impact on cosmological parameter estimation from weak lensing for a tomographic survey. Ellipticity correlation functions and angular ellipticity spectra for spiral and elliptical galaxies are derived on the basis of tidal interactions with the cosmic large-scale structure and compared to the tomographic weak-lensing signal. We find that elliptical galaxies cause a contribution to the weak-lensing dominated ellipticity correlation on intermediate angular scales between ℓ ≃ 40 and ℓ ≃ 400 before that of spiral galaxies dominates on higher multipoles. The predominant term on intermediate scales is the negative cross-correlation between intrinsic alignments and weak gravitational lensing (GI-alignment). We simulate parameter inference from weak gravitational lensing with intrinsic alignments unaccounted for; the bias induced by ignoring intrinsic alignments in a survey like Euclid is shown to be several times larger than the statistical error and can lead to faulty conclusions when comparing to other observations. The biases generally point in different directions in parameter space, such that in some cases one can observe a partial cancellation effect. Furthermore, it is shown that the biases increase with the number of tomographic bins used for the parameter estimation process. We quantify this parameter estimation bias in units of the statistical error and compute the loss of Bayesian evidence for a model due to the presence of systematic errors, as well as the Kullback-Leibler divergence to quantify the distance between the true model and the wrongly inferred one.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juxiu Tong; Bill X. Hu; Hai Huang
2014-03-01
With the growing importance of water resources in the world, remediation of anthropogenic contamination due to reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reactions and limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and to improve the accuracy of predicted chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate the reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF could efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
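A single constrained EnKF analysis step of the kind described above might look as follows. The perturbed-observation form and the simple clipping constraint are one common realization of a constrained EnKF, not necessarily the exact scheme used in the study:

```python
import numpy as np

def enkf_param_update(theta_ens, y_pred_ens, y_obs, obs_std, bounds, seed=0):
    """One constrained EnKF analysis step for reaction-rate parameters.

    theta_ens  : (N, p) parameter ensemble
    y_pred_ens : (N, m) predicted concentrations, one row per member
    y_obs      : (m,) observed concentrations
    bounds     : (p, 2) physical lower/upper limits (the 'constraint')
    """
    rng = np.random.default_rng(seed)
    N = theta_ens.shape[0]
    th_a = theta_ens - theta_ens.mean(0)
    y_a = y_pred_ens - y_pred_ens.mean(0)
    C_ty = th_a.T @ y_a / (N - 1)                  # param-obs cross covariance
    C_yy = y_a.T @ y_a / (N - 1) + obs_std**2 * np.eye(len(y_obs))
    K = C_ty @ np.linalg.inv(C_yy)                 # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    y_pert = y_obs + rng.normal(0, obs_std, size=(N, len(y_obs)))
    theta_new = theta_ens + (y_pert - y_pred_ens) @ K.T
    # constraint: clip parameters back to their physical range
    return np.clip(theta_new, bounds[:, 0], bounds[:, 1])

# toy check: linear forward model y = 2*theta with true theta = 1
rng = np.random.default_rng(1)
theta = rng.normal(0.5, 0.2, size=(200, 1))
updated = enkf_param_update(theta, 2 * theta, np.array([2.0]), 0.05,
                            np.array([[0.0, 5.0]]))
```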
Liang, Yuzhen; Kuo, Dave T F; Allen, Herbert E; Di Toro, Dominic M
2016-10-01
There is concern about the environmental fate and effects of munition constituents (MCs). Polyparameter linear free energy relationships (pp-LFERs) that employ Abraham solute parameters can aid in evaluating the risk of MCs to the environment. However, poor predictions using pp-LFERs and ABSOLV-estimated Abraham solute parameters are found for some key physico-chemical properties. In this work, the Abraham solute parameters are determined using experimental partition coefficients in various solvent-water systems. The compounds investigated include hexahydro-1,3,5-trinitro-1,3,5-triazacyclohexane (RDX), octahydro-1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX), hexahydro-1-nitroso-3,5-dinitro-1,3,5-triazine (MNX), hexahydro-1,3,5-trinitroso-1,3,5-triazine (TNX), hexahydro-1,3-dinitroso-5-nitro-1,3,5-triazine (DNX), 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitrobenzene (TNB), and 4-nitroanisole. The solvents in the solvent-water systems are hexane, dichloromethane, trichloromethane, octanol, and toluene. The only previously reported solvent-water partition coefficients are octanol-water values for some of the investigated compounds, and they are in good agreement with the experimental measurements from this study. Solvent-water partition coefficients fitted using experimentally derived solute parameters from this study have significantly smaller root mean square errors (RMSE = 0.38) than predictions using ABSOLV-estimated solute parameters (RMSE = 3.56) for the investigated compounds. Additionally, the predictions for various physico-chemical properties using the experimentally derived solute parameters agree with available literature values, with prediction errors within 0.79 log units, except for the water solubility of RDX and HMX, with errors of 1.48 and 2.16 log units, respectively. However, predictions using ABSOLV-estimated solute parameters have larger prediction errors of up to 7.68 log units. This large discrepancy is probably due to the missing R2NNO2 and R2NNO functional groups in the ABSOLV fragment database. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Ogiso, M.
2017-12-01
Heterogeneous attenuation structure is important not only for understanding the earth's structure and seismotectonics, but also for ground motion prediction. Attenuation of ground motion in the high-frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects the duration of ground motion. Hence, estimating both attenuation parameters will improve ground motion prediction. In this study, we estimate both parameters in southwestern Japan in a tomographic manner. We conduct envelope fitting of the seismic coda, since the coda is sensitive to both intrinsic attenuation and scattering coefficients. Recently, Takeuchi (2016) successfully calculated the differential envelope when these parameters have fluctuations. We adopted his equations to calculate partial derivatives with respect to these parameters, since we did not need to assume a homogeneous velocity structure. The matrix for inversion of the structural parameters would be too large to solve in a straightforward manner. Hence, we adopted the ART-type Bayesian Reconstruction Method (Hirahara, 1998) to project the envelope differences onto the structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with 0.4 degree intervals in the horizontal direction and 20 km intervals in the depth direction. The reconstructed structures reproduced the assumed pattern well in the shallower part but not in the deeper part. Since the inversion kernel has high sensitivity around sources and stations, resolution in the deeper part would be limited due to the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we have to correct for the effects of the source and site amplification terms. We consider these issues in estimating the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.
Gottfredson, Nisha C.; Bauer, Daniel J.; Baldwin, Scott A.; Okiishi, John C.
2014-01-01
Objective This study demonstrates how to use a shared parameter mixture model (SPMM) in longitudinal psychotherapy studies to accommodate missing data that are due to a correlation between rate of improvement and termination of therapy. Traditional growth models assume that such a relationship does not exist (i.e., they assume that data are missing at random) and will produce biased results if this assumption is incorrect. Method We use longitudinal data from 4,676 patients enrolled in a naturalistic study of psychotherapy to compare results from a latent growth model and an SPMM. Results In this dataset, estimates of the rate of improvement during therapy differ by 6.50-6.66% across the two models, indicating that participants with steeper trajectories left psychotherapy earliest, thereby potentially biasing inference for the slope in the latent growth model. Conclusion We conclude that reported estimates of change during therapy may be underestimated in naturalistic studies of therapy in which participants and their therapists determine the end of treatment. Because non-randomly missing data can also occur in randomized controlled trials or in observational studies of development, the utility of the SPMM extends beyond naturalistic psychotherapy data. PMID:24274626
Forecasting the mortality rates of Malaysian population using Heligman-Pollard model
NASA Astrophysics Data System (ADS)
Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd
2017-08-01
Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting, due to the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software is used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) is applied to forecast all the parameters using Autoregressive Integrated Moving Average (ARIMA) models. The empirical data sets of the Malaysian population for the period 1981 to 2015 for both genders are considered, with the period 1981 to 2010 used as the "training set" and the period 2011 to 2015 as the "testing set". To investigate the accuracy of the estimation, the forecast results are compared against actual mortality rates. The result shows that the Heligman-Pollard model fits the male population well at all ages, while the model seems to underestimate mortality rates for the female population at older ages.
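One common form of the Heligman-Pollard law expresses the odds of death q_x/(1 - q_x) as the sum of a childhood term, an accident-hump term, and a senescence term, with the eight parameters A-H. The sketch below uses illustrative parameter values, not estimates for the Malaysian population:

```python
import numpy as np

def heligman_pollard_q(x, A, B, C, D, E, F, G, H):
    """Heligman-Pollard law (first form):
    q_x / (1 - q_x) = A^((x+B)^C) + D*exp(-E*(ln x - ln F)^2) + G*H^x.

    The three terms model child mortality, the accident hump (centred
    near age F), and senescent mortality. All parameter values used
    below are illustrative only.
    """
    x = np.asarray(x, dtype=float)
    odds = (A ** ((x + B) ** C)
            + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)
            + G * H ** x)
    return odds / (1.0 + odds)   # convert the odds q/p back to q

ages = np.arange(1, 91)
q = heligman_pollard_q(ages, A=0.0005, B=0.02, C=0.1, D=0.001,
                       E=10.0, F=20.0, G=5e-5, H=1.1)
```

Fitting the eight parameters to observed q_x by nonlinear least squares, and then forecasting each parameter with an ARIMA model, reproduces the two-stage workflow the abstract describes.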
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
Capturing Context-Related Change in Emotional Dynamics via Fixed Moderated Time Series Analysis.
Adolf, Janne K; Voelkle, Manuel C; Brose, Annette; Schmiedek, Florian
2017-01-01
Much of recent affect research relies on intensive longitudinal studies to assess daily emotional experiences. The resulting data are analyzed with dynamic models to capture regulatory processes involved in emotional functioning. Daily contexts, however, are commonly ignored. This may not only result in biased parameter estimates and wrong conclusions, but also forgoes the opportunity to investigate contextual effects on emotional dynamics. With fixed moderated time series analysis, we present an approach that resolves this problem by estimating context-dependent change in the dynamic parameters of single-subject time series models. The approach examines parameter changes of known shape and thus addresses the problem of observed intra-individual heterogeneity (e.g., changes in emotional dynamics due to observed changes in daily stress). In comparison to existing approaches to unobserved heterogeneity, model estimation is facilitated and different forms of change can readily be accommodated. We demonstrate the approach's viability for relatively short time series by means of a simulation study. In addition, we present an empirical application targeting the joint dynamics of affect and stress and how these co-vary with daily events. We discuss the potential and limitations of the approach and close with an outlook on the broader implications for understanding emotional adaptation and development.
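As a toy illustration of context-dependent dynamic parameters, the sketch below simulates a first-order autoregressive process whose inertia shifts by a fixed amount on hypothetical "stress" days (s_t), and recovers both the baseline parameter and its shift with a simple least-squares interaction regression. This is a simplified stand-in for, not a reproduction of, the fixed moderated time series machinery:

```python
import random

random.seed(1)
n = 600
phi0, phi1 = 0.4, 0.3          # baseline inertia and its context-related shift
s = [t % 2 for t in range(n)]  # hypothetical observed context (e.g., stress day)
y = [0.0]
for t in range(1, n):
    y.append((phi0 + phi1 * s[t]) * y[-1] + random.gauss(0, 1))

# OLS for y_t = b0*y_{t-1} + b1*(s_t*y_{t-1}) + e: a 2x2 normal-equation solve
x1 = y[:-1]
x2 = [s[t] * y[t - 1] for t in range(1, n)]
yy = y[1:]
a11 = sum(v * v for v in x1)
a12 = sum(u * v for u, v in zip(x1, x2))
a22 = sum(v * v for v in x2)
c1 = sum(u * v for u, v in zip(x1, yy))
c2 = sum(u * v for u, v in zip(x2, yy))
det = a11 * a22 - a12 * a12
b0_hat = (a22 * c1 - a12 * c2) / det   # estimate of baseline inertia phi0
b1_hat = (a11 * c2 - a12 * c1) / det   # estimate of the context shift phi1
```

Because the shape of the change is known (a step tied to the observed context), the moderated parameter is identified from a single subject's series, which is the core idea the approach exploits.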
NASA Astrophysics Data System (ADS)
Roy, Kuntal
2017-11-01
There exists considerable confusion in estimating the spin diffusion length of materials with high spin-orbit coupling from spin pumping experiments. For designing functional devices, it is important to determine the spin diffusion length with sufficient accuracy from experimental results. An inaccurate estimation of the spin diffusion length also affects the estimation of other parameters (e.g., spin mixing conductance, spin Hall angle) concomitantly. The spin diffusion length of platinum (Pt) has been reported in the literature over the wide range of 0.5-14 nm, usually as a constant value independent of the Pt thickness. Here, the key reasons behind such a wide range of reported values of the spin diffusion length are identified comprehensively. In particular, it is shown here that a thickness-dependent conductivity and spin diffusion length are necessary to simultaneously match the experimental results for the effective spin mixing conductance and the inverse spin Hall voltage due to spin pumping. Such a thickness-dependent spin diffusion length is tantamount to the Elliott-Yafet spin relaxation mechanism, as expected for transition metals. This conclusion is not altered even when there is significant interfacial spin memory loss. Furthermore, the variations in the estimated parameters are also studied, which is important for technological applications.
Verifying reddening and extinction for Gaia DR1 TGAS giants
NASA Astrophysics Data System (ADS)
Gontcharov, George A.; Mosenkov, Aleksandr V.
2018-03-01
Gaia DR1 Tycho-Gaia Astrometric Solution parallaxes, Tycho-2 photometry, and reddening/extinction estimates from nine data sources for 38 074 giants within 415 pc of the Sun are used to compare their positions in the Hertzsprung-Russell diagram with theoretical estimates based on the PARSEC and MIST isochrones and the TRILEGAL model of the Galaxy, with its parameters widely varied. We conclude that (1) systematic errors in the reddening/extinction estimates are the main uncertainty in this study; (2) no emission-based 2D reddening map can give reliable estimates of reddening within 415 pc, due to the complex distribution of dust; (3) if TRILEGAL's set of Galactic parameters is reliable and if the solar metallicity is Z < 0.021, then the reddening at high Galactic latitudes behind the dust layer is underestimated by all 2D reddening maps based on the dust emission observations of IRAS, COBE, and Planck and by their 3D followers (we also discuss some explanations of this underestimation); (4) the reddening/extinction estimates from the recent 3D reddening map by Gontcharov, including the median reddening E(B - V) = 0.06 mag at |b| > 50°, give the best fit between the empirical and theoretical data.
NASA Astrophysics Data System (ADS)
Lyu, Heng; Wang, Yannan; Jin, Qi; Shi, Lei; Li, Yunmei; Wang, Qiao
2017-10-01
Particulate organic carbon (POC) plays an important role in the carbon cycle in water due to its part in the biological pump. In the open ocean, existing algorithms can accurately estimate the surface POC concentration. However, no suitable POC-estimation algorithm based on MERIS bands is available for inland turbid eutrophic water. A total of 228 field samples were collected from Lake Taihu in different seasons between 2013 and 2015. At each site, the optical parameters and water quality were analyzed. Using the in situ data, it was found that POC-estimation algorithms developed for the open ocean and coastal waters using remote sensing reflectance were not suitable for inland turbid eutrophic water. The organic suspended matter (OSM) concentration was found to be the best indicator of the POC concentration, and POC has an exponential relationship with the OSM concentration. Through an analysis of the POC concentration and optical parameters, it was found that the absorption peak of total suspended matter (TSM) at 665 nm was the optimum parameter for estimating POC. As a result, MERIS bands 7, 10 and 12 were used to derive the absorption coefficient of TSM at 665 nm, and then a semi-analytical algorithm was used to estimate the POC concentration in inland turbid eutrophic water. An accuracy assessment showed that the developed semi-analytical algorithm could be applied successfully, with a MAPE of 31.82% and an RMSE of 2.68 mg/L. The algorithm was then applied to two full-resolution MERIS images, acquired on August 13, 2010, and December 7, 2010, to map the POC spatial distribution in Lake Taihu in summer and winter.
Estimation of Alpine Skier Posture Using Machine Learning Techniques
Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej
2014-01-01
High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements, defined by the antenna placed typically behind the skier's neck. A key issue is how to estimate other, more relevant parameters of the skier's body, such as the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees of freedom. The first method utilizes the well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Both methods outperform the commonly used inverted-pendulum method and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492
Adult survival and productivity of Northern Fulmars in Alaska
Hatch, Scott A.
1987-01-01
The population dynamics of Northern Fulmars (Fulmarus glacialis) were studied at the Semidi Islands in the western Gulf of Alaska. Fulmars occurred in a broad range of color phases, and annual survival was estimated from the return of birds in the rarer plumage classes. A raw estimate of mean annual survival over a 5-year period was 0.963, but a removal experiment indicated the raw value was probably biased downward. The estimate of annual survival adjusted accordingly was 0.969. Mortality during the breeding season was less than 10% of the annual total, and postbreeding mortality of failed breeders was three to four times higher than that of successful breeders. Breeding success averaged 41% over 9 years. About 5% of experienced birds failed to breed each year due to physical destruction of their breeding sites, mate-loss, or other causes. An estimated 30% of the birds near the colony in one year were of prebreeding age. A comparison of population parameters in Pacific and Atlantic fulmars indicates that higher survival in the prebreeding years is the likely basis for population growth in the northeastern Atlantic. The correlation of breeding success and survival suggests both parameters may decline with age.
Mentzafou, A; Wagner, S; Dimitriou, E
2018-04-29
Identifying the historical hydrometeorological trends in a river basin is necessary for understanding the dominant interactions between climate, human activities and local hydromorphological conditions. Estimating the hydrological reference conditions of a river is also crucial for accurately estimating the impacts of human water-related activities and for designing appropriate water management schemes. In this effort, the output of a regional past-climate model covering the period from 1660 to 1990 was used, in combination with a dynamic, spatially distributed hydrologic model, to estimate the past and recent trends in the main hydrologic parameters, such as overland flow, water storage and evapotranspiration, in a Mediterranean river basin. The simulated past hydrologic conditions (1660-1960) were compared with the current hydrologic regime (1960-1990) to assess the magnitude of human and natural impacts on the identified hydrologic trends. The hydrological components of the recent period 2008-2016 were also examined in relation to the impact of human activities. The estimated long-term trends of the hydrologic parameters were partly attributed to varying atmospheric forcing due to volcanic activity, combined with spontaneous meteorological fluctuations. Copyright © 2018. Published by Elsevier B.V.
Gruber, Jonathan; Sen, Anindya; Stabile, Mark
2003-09-01
A central parameter for evaluating tax policies is the price elasticity of demand for cigarettes. But in many countries this parameter is difficult to estimate reliably due to widespread smuggling, which significantly biases estimates using legal sales data. An excellent example is Canada, where widespread smuggling in the early 1990s, in response to large tax increases, biases upwards the response of legal cigarette sales to price. We surmount this problem through two approaches: excluding the provinces and years where smuggling was greatest; and using household level expenditure data on smoking. These two approaches yield a tightly estimated elasticity in the range of -0.45 to -0.47. We also show that the sensitivity of smoking to price is much larger among lower income Canadians. In the context of recent behavioral models of smoking, whereby higher taxes reduce unwanted smoking among price sensitive populations, this finding suggests that cigarette taxes may not be as regressive as previously suggested. Finally, we show that price increases on cigarettes do not increase, and may actually decrease, consumption of alcohol; as a result, smuggling of cigarettes may have raised consumption of alcohol as well.
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation for mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be applied to moment-matching-based, Gramian-matrix-based or modal model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Parametric cost estimation for space science missions
NASA Astrophysics Data System (ADS)
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind", with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs of future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey study committees.
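A mass-based CER of the kind described is often a power law fitted to historical cost data. The sketch below fits cost = a * mass^b by ordinary least squares in log-log space; the mission data points are invented placeholders, not Northrop Grumman figures:

```python
import math

# Hypothetical historical-mission data: (dry mass in kg, cost in $M)
history = [(150, 90), (300, 160), (600, 300), (1200, 520), (2400, 950)]

# Fit the CER  cost = a * mass^b  by ordinary least squares in log-log space.
lx = [math.log(m) for m, _ in history]
ly = [math.log(c) for _, c in history]
n = len(history)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
a = math.exp(my - b * mx)

# Point estimate for a hypothetical 800 kg spacecraft
predicted = a * 800 ** b
```

In practice a CER like this would be wrapped in a Monte Carlo run over the uncertain inputs to produce the cost range and most probable cost, rather than a single point estimate.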
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimating ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression; they range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle
A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and using parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h wind speed forecast errors relative to the Charnock-Fairall z0 ensembles by 4%-22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.
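The parameter-estimation step can be illustrated with a toy ensemble Kalman update: the roughness length (as ln z0) is treated as the augmented state, a neutral logarithmic wind profile serves as the forward model, and noisy hub-height wind observations pull a 100-member ensemble toward the truth. All values are illustrative; this is not the WRF-SCM/DART configuration:

```python
import math
import random

random.seed(0)
kappa, ustar, z_obs = 0.4, 0.3, 100.0
true_ln_z0 = math.log(2e-4)           # assumed "true" sea-surface roughness (m)
R = 0.2 ** 2                          # observation-error variance (m/s)^2

def h(ln_z0):
    """Toy forward model: neutral log-profile wind speed at height z_obs."""
    return (ustar / kappa) * (math.log(z_obs) - ln_z0)

# 100-member ensemble of the augmented parameter, initially far from the truth
ens = [math.log(1e-2) + random.gauss(0, 1.0) for _ in range(100)]

for _ in range(20):                   # assimilate 20 noisy wind observations
    y = h(true_ln_z0) + random.gauss(0, math.sqrt(R))
    hx = [h(m) for m in ens]
    mh = sum(hx) / len(hx)
    mp = sum(ens) / len(ens)
    cov_ph = sum((p - mp) * (q - mh) for p, q in zip(ens, hx)) / (len(ens) - 1)
    var_h = sum((q - mh) ** 2 for q in hx) / (len(ens) - 1)
    K = cov_ph / (var_h + R)          # Kalman gain for the parameter
    ens = [p + K * (y + random.gauss(0, math.sqrt(R)) - q)   # perturbed obs
           for p, q in zip(ens, hx)]

z0_est = math.exp(sum(ens) / len(ens))
```

The parameter carries no dynamics of its own; it is corrected purely through its ensemble covariance with the predicted observation, which is the essence of state-augmentation parameter estimation.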
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series of all components of the dynamical equations to estimate the parameters of a single component one at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, all of the parameters can be estimated stage by stage. The performance of the improved method was tested on a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
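The component-wise idea can be sketched directly on the Rössler system: since dy/dt = x + a*y involves only the parameter a, that parameter can be recovered from the known x and y series alone by least squares on finite-difference derivatives, without touching the other equations. A minimal sketch (standard Rössler parameters assumed; the paper's EA machinery is replaced by this closed-form one-parameter fit for illustration):

```python
def rossler_deriv(state, a, b, c):
    """Rössler system: dx = -y - z, dy = x + a*y, dz = b + z*(x - c)."""
    x, y, z = state
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(state, dt, a, b, c):
    f = lambda s: rossler_deriv(s, a, b, c)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

# Generate a "measured" trajectory with the true parameter a = 0.2
dt, n = 0.01, 5000
traj = [(1.0, 1.0, 1.0)]
for _ in range(n):
    traj.append(rk4_step(traj[-1], dt, 0.2, 0.2, 5.7))

# Estimate a from the y-component equation dy/dt = x + a*y alone,
# using central differences for dy/dt (one parameter, one component).
num = den = 0.0
for k in range(1, n):
    x, y, _ = traj[k]
    ydot = (traj[k + 1][1] - traj[k - 1][1]) / (2 * dt)
    num += y * (ydot - x)
    den += y * y
a_hat = num / den
```

Estimating parameters component by component keeps each sub-problem low-dimensional, which is exactly why the staged scheme converges faster than a simultaneous search over all parameters.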
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters at sites with measurement data such as NEE, followed by application of those parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on long time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement) and the annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) parameters estimated from seasonal NEE data outperformed parameters estimated from yearly data; iii) those seasonal parameters were also often significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, demonstrating their potential for upscaling.
However, simulation results also indicate that possibly the estimated parameters mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Optimal reentry prediction of space objects from LEO using RSM and GA
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
2012-07-01
The accurate estimation of the orbital lifetime (OLT) of decaying near-Earth objects is of considerable importance for the prediction of risk-object re-entry time and for hazard assessment, as well as for mitigation strategies. Recently, due to the re-entries of a large number of risk objects, which pose a threat to human life and property, great concern has developed in the space science community worldwide. The evolution of objects in Low Earth Orbit (LEO) is determined by a complex interplay of perturbing forces, mainly atmospheric drag and Earth gravity. These orbits mostly have low eccentricity (< 0.2), with variations in perigee and apogee altitudes during a revolution caused chiefly by the gravitational perturbations of the Earth and by atmospheric density. It has become necessary to use extremely complex force models to match present operational requirements and observational techniques. Further, the re-entry time of objects in such orbits is sensitive to the initial conditions. In this paper the problem of predicting re-entry time is treated as an optimal estimation problem. It is known that, for observations based on two-line elements (TLEs), the errors are largest in eccentricity. Thus two parameters, the initial eccentricity and the ballistic coefficient, are chosen for optimal estimation. These two parameters are computed with a response surface method (RSM) for selected time zones, based on the roughly linear variation of the response parameter, the mean semi-major axis, during orbit evolution. Error minimization between the observed and predicted mean semi-major axis is achieved by applying a genetic algorithm (GA).
The basic feature of the present approach is that model and measurement errors are accounted for by adjusting the ballistic coefficient and eccentricity. The methodology is tested with the recently re-entered ROSAT and PHOBOS-GRUNT satellites. The study reveals good agreement with the actual re-entry times of these objects, and the absolute percentage error in re-entry prediction time for both objects is found to be very small. Keywords: low eccentricity; response surface method; genetic algorithm; apogee altitude; ballistic coefficient
NASA Astrophysics Data System (ADS)
Aziz Hashikin, Nurul Ab; Yeong, Chai-Hong; Guatelli, Susanna; Jeet Abdullah, Basri Johan; Ng, Kwan-Hoong; Malaroda, Alessandra; Rosenfeld, Anatoly; Perkins, Alan Christopher
2017-09-01
We aimed to investigate the validity of the partition model (PM) in estimating the absorbed doses to liver tumour (D_T), normal liver tissue (D_NL) and lungs (D_L) when cross-fire irradiation between these compartments is considered. The MIRD-5 phantom, incorporating various treatment parameters, i.e. tumour involvement (TI), tumour-to-normal liver uptake ratio (T/N) and lung shunting (LS), was simulated using the Geant4 Monte Carlo (MC) toolkit. 10^8 track histories were generated for each combination of the three parameters to obtain the absorbed dose per activity uptake in each compartment (D_T/A_T, D_NL/A_NL, and D_L/A_L). The administered activities, A, were estimated using the PM so as to achieve either the limiting dose to normal liver, D_NL^lim = 70 Gy, or to lungs, D_L^lim = 30 Gy. Using these administered activities, the activity uptake in each compartment (A_T, A_NL, and A_L) was estimated and multiplied by the absorbed dose per activity uptake obtained from the MC simulations, to give the actual dose received by each compartment. The PM overestimated D_L by 11.7% in all cases, due to particles escaping from the lungs. D_T and D_NL by MC were strongly affected by T/N, which is not considered by the PM because cross-fire at the tumour-normal liver boundary is excluded. This resulted in the PM overestimating D_T by up to 8% and underestimating D_NL by as much as -78%. When D_NL^lim was estimated via the PM, the MC simulations showed significantly higher D_NL for cases with higher T/N and LS ⩽ 10%. All D_L and D_T by MC were overestimated by the PM, so D_L^lim was never exceeded. The PM leads to inaccurate dose estimates due to the exclusion of cross-fire irradiation between the tumour and normal liver tissue. Caution should be taken for cases with higher TI and T/N and lower LS, as these contribute to major underestimation of D_NL. For D_L, a different correction factor for dose calculation may be used for improved accuracy.
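Leaving cross-fire aside, the partition model's activity planning step can be sketched as follows, using the common Y-90 relation D [Gy] ≈ 49.67 * A [GBq] / M [kg] and an activity split set by the lung shunt fraction and the T/N uptake ratio. The compartment masses and dose limits below are illustrative placeholders, not the paper's configuration:

```python
def partition_model_activity(tn, lsf, m_t=0.3, m_nl=1.5, m_l=1.0,
                             d_nl_lim=70.0, d_l_lim=30.0):
    """Maximum administered Y-90 activity (GBq) under the partition model.

    Assumes complete local absorption (no cross-fire), with the Y-90
    dose relation D [Gy] = 49.67 * A [GBq] / M [kg].
    Compartment masses (kg) and dose limits (Gy) are illustrative.
    """
    # Fraction of administered activity reaching each compartment
    f_l = lsf
    f_nl = (1 - lsf) / (1 + tn * m_t / m_nl)   # T/N sets the liver split
    f_t = (1 - lsf) - f_nl
    # Administered activity driving each constrained compartment to its limit
    a_from_nl = d_nl_lim * m_nl / (49.67 * f_nl)
    a_from_l = d_l_lim * m_l / (49.67 * f_l) if f_l > 0 else float("inf")
    return min(a_from_nl, a_from_l)

a_max = partition_model_activity(tn=3.0, lsf=0.10)
```

Because each compartment dose is proportional to its own activity only, any cross-fire between tumour and normal liver breaks this simple proportionality, which is exactly the error mode the Monte Carlo comparison exposes.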
Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. 
Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395
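The step from an estimated precision matrix (e.g. from CLIME) to partial correlations is simple: partial_corr(i, j) = -P_ij / sqrt(P_ii * P_jj). The sketch below uses a hand-built 3-node chain precision matrix to show how a pair that is marginally correlated through a common neighbour has zero partial correlation; the CLIME estimation itself is not reproduced here:

```python
import math

def inv3(m):
    """Cofactor inverse of a 3x3 matrix (pure Python, for illustration)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

def partial_corr(precision, i, j):
    """Partial correlation between nodes i and j from a precision matrix."""
    return -precision[i][j] / math.sqrt(precision[i][i] * precision[j][j])

# Chain graph X - Y - Z: X and Z interact only through Y
precision = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
cov = inv3(precision)

marginal_xz = cov[0][2] / math.sqrt(cov[0][0] * cov[2][2])  # nonzero
partial_xz = partial_corr(precision, 0, 2)                  # exactly zero
```

This is why partial correlation removes the "between-module marginal connections" the abstract describes: connections mediated entirely by other nodes vanish once those nodes are conditioned on.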
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Exploring the brain networks of clinical populations compared to those of controls is a key inquiry for revealing the underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development, as highlighted by Varoquaux and Craddock (2013): "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice, the important issue of how to choose several critical parameters in estimating a network, such as which association measure to use and what sparsity the estimated network should have, has not been carefully addressed, largely because the answers are as yet unknown. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as will be shown here, an optimal choice of a parameter for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values reflecting uncertainty about the unknown truth.
These highly adaptive tests are not only easy to use, but also robustly high-powered across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and on simulated data.
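The combining idea can be illustrated with a minimal min-p style test: compute a group-difference statistic over a grid of plausible tuning values (here a hard threshold on edge weights, a placeholder for the paper's actual tuning parameters), take the smallest per-threshold permutation p-value, and calibrate that minimum itself by permutation. This is a generic sketch, not the paper's specific test statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def group_stat(A, B, tau):
    # Difference in mean connectivity after hard-thresholding weak edges at tau.
    dA = np.where(np.abs(A) > tau, A, 0.0).mean(axis=0)
    dB = np.where(np.abs(B) > tau, B, 0.0).mean(axis=0)
    return np.sum((dA - dB) ** 2)

def adaptive_test(A, B, taus, n_perm=300):
    """Min-p adaptive test: combine evidence over a grid of thresholds and
    calibrate the combined statistic by permuting group labels."""
    obs = np.array([group_stat(A, B, t) for t in taus])
    pooled = np.concatenate([A, B])
    nA = len(A)
    perm = np.empty((n_perm, len(taus)))
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm[i] = [group_stat(pooled[idx[:nA]], pooled[idx[nA:]], t) for t in taus]
    # Per-threshold permutation p-values, then the minimum as combined statistic
    t_obs = (perm >= obs).mean(axis=0).min()
    t_perm = np.array([(perm >= perm[i]).mean(axis=0).min() for i in range(n_perm)])
    return (t_perm <= t_obs).mean()
```

The double permutation makes the minimum-p statistic valid without knowing which threshold is optimal, mirroring the abstract's point that a single arbitrarily chosen tuning value can cost substantial power.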
Seasonal extreme value statistics for precipitation in Germany
NASA Astrophysics Data System (ADS)
Fischer, Madlen; Rust, Henning W.; Ulbrich, Uwe
2013-04-01
Extreme precipitation has a strong influence on the environment, society and the economy. It leads to large damage due to floods, mudslides, increased erosion or hail. While standard annual return levels are important for hydrological structures, seasonally resolved return levels provide additional information for risk management, e.g., for the agricultural sector. For 1208 stations in Germany, we calculate monthly resolved return levels. Instead of estimating parameters separately for every month of the year, we use a non-stationary approach and benefit from smoothly varying return levels throughout the year. This natural approach is more suitable for characterising the seasonal variability of extreme precipitation and leads to more accurate return level estimates. Harmonic functions of different orders are used to describe the seasonal variation of the GEV parameters, and cross-validation is used to determine a suitable model for all stations. Finally, particularly vulnerable regions and the associated months are investigated in more detail.
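The "harmonic functions in the distribution parameters" idea can be sketched with a stripped-down stand-in: a Gumbel model (GEV with zero shape) whose location parameter carries a first-order harmonic, fitted by maximum likelihood on synthetic monthly maxima. The paper fits the full GEV, allows harmonics in several parameters, and selects the harmonic order by cross-validation; none of that is reproduced here:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic monthly maxima: location varies smoothly over the year
# (first-order harmonic), scale held constant.
t = np.arange(240) / 12.0                       # 20 years, monthly
mu_true = 10.0 + 3.0 * np.cos(2 * np.pi * t)    # seasonal location
x = mu_true - 2.0 * np.log(-np.log(rng.uniform(size=t.size)))   # Gumbel(mu, 2)

def nll(theta):
    """Negative log-likelihood of the seasonal Gumbel model."""
    b0, b1, b2, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * np.cos(2 * np.pi * t) + b2 * np.sin(2 * np.pi * t)
    z = (x - mu) / sigma
    return np.sum(z + np.exp(-z)) + x.size * log_sigma

res = minimize(nll, x0=[np.mean(x), 0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
b0, b1, b2, log_sigma = res.x
```

Because all months share one smooth parameter curve, every observation informs every month's return level, which is the efficiency gain over twelve separate monthly fits that the abstract points to.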
NASA Astrophysics Data System (ADS)
Chaplin, W. J.; Jiménez-Reyes, S. J.; Eff-Darwich, A.; Elsworth, Y.; New, R.
2008-04-01
Frequencies, powers and damping rates of the solar p modes are all observed to vary over the 11-yr solar activity cycle. Here, we show that simultaneous variations in these parameters give rise to a subtle cross-talk effect, which we call the `devil in the detail', that biases p-mode frequencies estimated from analysis of long power frequency spectra. We also show that the resonant peaks observed in the power frequency spectra show small distortions due to the effect. Most of our paper is devoted to a study of the effect for Sun-as-a-star observations of the low-l p modes. We show that for these data the significance of the effect is marginal. We also touch briefly on the likely l dependence of the effect, and discuss the implications of these results for solar structure inversions.
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2011-01-01
A new methodology is developed for constructing helicopter source noise models for use in mission-planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for new operating conditions from a small number of measurements taken at other operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
An application of robust ridge regression model in the presence of outliers to real data problem
NASA Astrophysics Data System (ADS)
Shariff, N. S. Md.; Ferdaos, N. A.
2017-09-01
Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is affected by the presence of outliers. The combination of GM-estimation and the ridge parameter, which is robust to both problems, is of interest in this study. As such, both techniques are employed to investigate the relationship between stock market price and macroeconomic variables in Malaysia, since the data set involves both multicollinearity and outlier problems. Four macroeconomic factors are selected for this study: the Consumer Price Index (CPI), Gross Domestic Product (GDP), Base Lending Rate (BLR) and Money Supply (M1). The results demonstrate that the proposed procedure is able to produce reliable results in the presence of multicollinearity and outliers in the real data.
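The ridge component of the procedure follows directly from its closed form; the GM-estimation step that adds outlier robustness (an iteratively reweighted refinement) is omitted in this sketch:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam*I)^{-1} X'y.
    lam > 0 stabilizes X'X when predictors are collinear."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

As lam grows, the coefficients shrink toward zero, trading a little bias for much lower variance under multicollinearity; lam → 0 recovers ordinary least squares.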
NASA Astrophysics Data System (ADS)
Nora, R.; Field, J. E.; Peterson, J. Luc; Spears, B.; Kruse, M.; Humbird, K.; Gaffney, J.; Springer, P. T.; Brandon, S.; Langer, S.
2017-10-01
We present an experimentally corroborated hydrodynamic extrapolation of several recent BigFoot implosions on the National Ignition Facility. An estimate of the value and error of the hydrodynamic scale necessary for ignition (for each individual BigFoot implosion) is found by hydrodynamically scaling a distribution of multi-dimensional HYDRA simulations whose outputs correspond to the experimental observables. The 11-parameter database of simulations, which includes arbitrary drive asymmetries, dopant fractions, hydrodynamic scaling parameters, and surface perturbations due to surrogate tent and fill-tube engineering features, was computed on the TRINITY supercomputer at Los Alamos National Laboratory. This simple extrapolation is the first step in rigorously calibrating our workflow to provide an accurate estimate of the efficacy of achieving ignition on the National Ignition Facility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Wang, Lan; Kim, Yongdai; Li, Runze
2013-10-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
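The high-dimensional BIC idea can be illustrated numerically. Below, soft thresholding of marginal scores on a near-orthonormal design stands in for the paper's calibrated CCCP solution path (it is not the SCAD/MCP estimator), and the criterion applied along the path is HBIC(λ) = log σ̂²(λ) + |S(λ)|·log(log n)·log(p)/n, whose log(p)/n factor supplies the extra penalty needed when p grows exponentially with n:

```python
import numpy as np

rng = np.random.default_rng(2)

n, p, s = 200, 500, 5                           # p >> n with a sparse truth
beta = np.zeros(p)
beta[:s] = 5.0
X = rng.standard_normal((n, p)) / np.sqrt(n)    # roughly unit-norm columns
y = X @ beta + 0.25 * rng.standard_normal(n)
z = X.T @ y                                     # marginal scores

def path_point(lam):
    """Soft-threshold estimate at penalty lam and its HBIC score."""
    b = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    sigma2 = np.mean((y - X @ b) ** 2)
    df = np.count_nonzero(b)
    hbic = np.log(sigma2) + df * np.log(np.log(n)) * np.log(p) / n
    return hbic, b

lams = np.linspace(0.1, 4.0, 80)
best_hbic, best_b = min((path_point(l) for l in lams), key=lambda t: t[0])
support = np.flatnonzero(best_b)
```

Minimizing HBIC over λ selects a model that keeps the strong true signals while excluding most of the hundreds of noise covariates, the behavior the paper proves for its criterion on the CCCP path.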
Estimating brain connectivity when few data points are available: Perspectives and limitations.
Antonacci, Yuri; Toppi, Jlenia; Caschera, Stefano; Anzolin, Alessandra; Mattia, Donatella; Astolfi, Laura
2017-07-01
Methods based on the use of multivariate autoregressive modeling (MVAR) have proved to be an accurate and flexible tool for the estimation of brain functional connectivity. The multivariate approach, however, implies the use of a model whose complexity (in terms of number of parameters) increases quadratically with the number of signals included in the problem. This can often lead to an underdetermined problem and to the condition of multicollinearity. The aim of this paper is to introduce and test an approach based on Ridge Regression, combined with a modified version of the statistics usually adopted for these methods, to broaden the estimation of brain connectivity to conditions in which current methods fail due to the lack of sufficient data points. We tested the performance of this new approach, in comparison with the classical approach based on ordinary least squares (OLS), by means of a simulation study implementing different ground-truth networks, under different network sizes and different numbers of data points. Simulation results showed that the new approach provides better performance, in terms of accuracy of the parameter estimation and false-positive/false-negative rates, in all conditions with a low ratio of data points to model dimension, and may thus be exploited to estimate and validate connectivity patterns at the single-trial level or when only short data segments are available.
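A minimal ridge-regularized MVAR fit looks as follows: lagged samples are stacked into a design matrix and the normal equations are solved with a ridge term, which keeps them well conditioned when the number of time points is small relative to the k²·order coefficients. This is a generic sketch; the paper's modified significance statistics are not reproduced:

```python
import numpy as np

def mvar_ridge(Y, order, lam):
    """Ridge-regularized MVAR fit for Y of shape (n time points, k channels).
    Solves (X'X + lam*I) A = X'T over stacked lag blocks."""
    n, k = Y.shape
    X = np.hstack([Y[order - m - 1 : n - m - 1] for m in range(order)])
    T = Y[order:]
    A = np.linalg.solve(X.T @ X + lam * np.eye(k * order), X.T @ T)
    return A.reshape(order, k, k)   # A[m, i, j]: lag-(m+1) effect of channel i on channel j
```

With lam → 0 this reduces to the classical OLS estimator; a positive lam trades a small bias for usable estimates in the short-data regime the abstract targets.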
NASA Technical Reports Server (NTRS)
Hocking, W. K.
1989-01-01
The objective of any radar experiment is to determine as much as possible about the entities which scatter the radiation. This review discusses many of the various parameters which can be deduced in a radar experiment, and also critically examines the procedures used to deduce them. Methods for determining the mean wind velocity, the RMS fluctuating velocities, turbulence parameters, and the shapes of the scatterers are considered. Complications with these determinations are discussed. It is seen throughout that a detailed understanding of the shape and cause of the scatterers is important in order to make better determinations of these various quantities. Finally, some other parameters, which are less easily acquired, are considered. For example, it is noted that momentum fluxes due to buoyancy waves and turbulence can be determined, and on occasions radars can be used to determine stratospheric diffusion coefficients and even temperature profiles in the atmosphere.
Thrombus segmentation by texture dynamics from microscopic image sequences
NASA Astrophysics Data System (ADS)
Brieu, Nicolas; Serbanovic-Canic, Jovana; Cvejic, Ana; Stemple, Derek; Ouwehand, Willem; Navab, Nassir; Groher, Martin
2010-03-01
The genetic factors of thrombosis are commonly explored by microscopically imaging the coagulation of blood cells induced by injuring a vessel of mice or of zebrafish mutants. The latter species is particularly interesting since skin transparency makes it possible to non-invasively acquire microscopic images of the scene with a CCD camera and to estimate the parameters characterizing the thrombus development. These parameters are currently determined by manual outlining, which is both error prone and extremely time consuming. Even though a technique for automatic thrombus extraction would be highly valuable for gene analysts, little work can be found, mainly due to very low image contrast and spurious structures. In this work, we propose to semi-automatically segment the thrombus over time from microscopic image sequences of wild-type zebrafish larvae. To compensate for the lack of valuable spatial information, our main idea consists of exploiting the temporal information by modeling the variations of the pixel intensities over successive temporal windows with a linear Markov-based dynamic texture formalization. We then derive an image from the estimated model parameters, which represents the probability of a pixel belonging to the thrombus. We employ this probability image to accurately estimate the thrombus position via an active contour segmentation that also incorporates prior and spatial information from the underlying intensity images. The performance of our approach is tested on three microscopic image sequences. We show that the thrombus is accurately tracked over time in each sequence if the respective parameters controlling prior influence and contour stiffness are correctly chosen.
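The "linear Markov-based dynamic texture" is the Doretto-style model y_t = C x_t, x_{t+1} = A x_t, and a standard SVD-based suboptimal fit is sketched below. How the authors turn the fitted parameters into a thrombus-probability image, and the active-contour step, are beyond this sketch:

```python
import numpy as np

def fit_dynamic_texture(Y, q):
    """Suboptimal closed-form fit of a linear dynamic texture:
    frames y_t = C x_t + noise, states x_{t+1} = A x_t + noise.
    Y: (n_pixels, n_frames) matrix of vectorized frames; q: state dimension."""
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :q]                               # spatial basis (appearance)
    Xs = np.diag(S[:q]) @ Vt[:q]               # state trajectory over time
    A = Xs[:, 1:] @ np.linalg.pinv(Xs[:, :-1]) # least-squares state transition
    return C, A, Xs
```

Fitting such a model over a sliding temporal window per region gives the local dynamics parameters whose variation the paper exploits to separate thrombus from background.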
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
NASA Astrophysics Data System (ADS)
Guchhait, Shyamal; Banerjee, Biswanath
2018-04-01
In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic materials. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.
How does the cosmic large-scale structure bias the Hubble diagram?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleury, Pierre; Clarkson, Chris; Maartens, Roy, E-mail: pierre.fleury@uct.ac.za, E-mail: chris.clarkson@qmul.ac.uk, E-mail: roy.maartens@gmail.com
2017-03-01
The Hubble diagram is one of the cornerstones of observational cosmology. It is usually analysed assuming that, on average, the underlying relation between magnitude and redshift matches the prediction of a Friedmann-Lemaître-Robertson-Walker model. However, the inhomogeneity of the Universe generically biases these observables, mainly due to peculiar velocities and gravitational lensing, in a way that depends on the notion of average used in theoretical calculations. In this article, we carefully derive the notion of average which corresponds to the observation of the Hubble diagram. We then calculate its bias at second order in cosmological perturbations, and estimate the consequences for the inference of cosmological parameters, for various current and future surveys. We find that this bias deeply affects direct estimations of the evolution of the dark-energy equation of state. However, errors in the standard inference of cosmological parameters remain smaller than observational uncertainties, even though they reach percent level on some parameters; they reduce to sub-percent level if an optimal distance indicator is used.
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restrictive Gaussian prior and noise assumptions generally imposed in these methods. This contribution aims at evaluating the effectiveness of an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observing system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order.
A full analysis of both methods, in the context of the coastal ocean model, suggests that an ensemble Kalman filter with an appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of the Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
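The ensemble Kalman parameter update at the heart of such a comparison can be written down compactly. Below, a toy exponential forward model stands in for ADCIRC and a scalar parameter for Manning's n; one stochastic EnKF analysis step pulls a prior parameter ensemble toward the observations via the sample cross-covariance. All numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(theta):
    # Toy forward model standing in for ADCIRC: maps a friction-like
    # scalar parameter to a 5-point vector of observable "water levels".
    t = np.linspace(0.0, 1.0, 5)
    return np.exp(-theta * t)

theta_true = 1.5
obs_std = 0.01
y_obs = forward(theta_true) + obs_std * rng.standard_normal(5)

# One stochastic EnKF analysis step on the parameter ensemble
N = 200
ens = rng.normal(1.0, 0.5, size=N)              # prior parameter ensemble
H = np.array([forward(th) for th in ens])       # predicted observations, (N, 5)
d_th = ens - ens.mean()
d_h = H - H.mean(axis=0)
C_th_h = d_th @ d_h / (N - 1)                   # parameter-obs cross-covariance
C_hh = d_h.T @ d_h / (N - 1)                    # predicted-obs covariance
K = np.linalg.solve(C_hh + obs_std**2 * np.eye(5), C_th_h)  # Kalman gain
perturbed = y_obs + obs_std * rng.standard_normal((N, 5))
ens_post = ens + (perturbed - H) @ K
```

Because the update is linear in the innovations, it is cheap even for many parameters, but, as the abstract notes, it only approximates the posterior when the forward model is nonlinear, which is exactly what the MCMC-PC comparison probes.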
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Debris Hazards Due to Overloaded Conventional Construction Facades
2015-12-01
hazards to buildings. This work will present results for experiments involving conventional façade materials (glass, concrete, and masonry) that have ... experiments and a discussion of the distribution parameters are presented. Keywords: Blast, fragmentation, concrete, masonry, debris ... concrete, glass, and concrete masonry. It was also desired to produce data for which the state of stress and strain rates could be estimated. There were
An Estimation Method of Vertical Diffusion Parameters in the Mesoscale Range,
1983-12-08
nonuniformity. Also, we write the expression for ... in the following form (24). The G(x) in equation (24) has been given in the previous section. Except ...
NASA Astrophysics Data System (ADS)
Farhadi, L.; Bateni, S. M.; Auligne, T.; Navari, M.
2017-12-01
Snow emissivity is a key parameter for the estimation of snow surface temperature, which is needed as an initial value in climate models and for determining the outgoing long-wave radiation. Moreover, snow emissivity is required for the retrieval of atmospheric parameters (e.g., temperature and humidity profiles) from satellite measurements and for satellite data assimilation in numerical weather prediction systems. Microwave emission models and remote sensing data cannot accurately estimate snow emissivity due to the limitations of each. Existing microwave emission models introduce significant uncertainties in their snow emissivity estimates, mainly due to shortcomings of the dense media theory for the snow medium at high frequencies and to erroneous forcing variables. The well-known limitations of passive microwave data, such as coarse spatial resolution, saturation in deep snowpack, and signal loss in wet snow, are the major drawbacks of passive microwave retrieval algorithms for estimating snow emissivity. A full exploitation of the information contained in the remote sensing data can be achieved by merging them with snow emission models within a data assimilation framework. Such an optimal merging can overcome the specific limitations of both the models and the remote sensing data. An Ensemble Batch Smoother (EnBS) data assimilation framework was developed in this study to combine synthetically generated passive microwave brightness temperatures at 1.4-, 18.7-, 36.5-, and 89-GHz frequencies with the MEMLS microwave emission model to reduce the uncertainty of the snow emissivity estimates. We used the EnBS algorithm in an observing system simulation experiment (or synthetic experiment) at the local scale observation site (LSOS) of the NASA CLPX field campaign. Our findings showed that the developed methodology significantly improves the estimates of snow emissivity.
The simultaneous assimilation of passive microwave brightness temperatures at all frequencies (i.e., 1.4, 18.7, 36.5, and 89 GHz) reduces the root-mean-square error (RMSE) of snow emissivity at 1.4, 18.7, 36.5, and 89 GHz (H-pol.) by 80%, 42%, 52%, and 40%, respectively, compared to the corresponding snow emissivity estimates from the open-loop model.
NASA Astrophysics Data System (ADS)
Norton, Andrew S.
An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age structure is representative of the population age structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information can accommodate this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase the flexibility of model parameters. I collected known-fate data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening weekend temperature and percent of corn harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when the prior models did not inform these changes (i.e., prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information (i.e.
integrated AAH model) about survival outside the hunting season from known fates data, predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
MeProRisk - a Joint Venture for Minimizing Risk in Geothermal Reservoir Development
NASA Astrophysics Data System (ADS)
Clauser, C.; Marquart, G.
2009-12-01
Exploration and development of geothermal reservoirs for the generation of electric energy involves high engineering and economic risks due to the need for 3-D geophysical surface surveys and deep boreholes. The MeProRisk project provides a strategy guideline for reducing these risks by combining cross-disciplinary information from different specialists: scientists from three German universities and two private companies contribute new methods in seismic modeling and interpretation, numerical reservoir simulation, estimation of petrophysical parameters, and 3-D visualization. The approach chosen in MeProRisk consists in treating the prospecting and development of geothermal reservoirs as an iterative process. A first conceptual model for fluid flow and heat transport simulation can be developed based on the limited initial information available on geology and rock properties. In the next step, additional data are incorporated, based on (a) new seismic interpretation methods designed for delineating fracture systems, (b) statistical studies on large numbers of rock samples for estimating reliable rock parameters, and (c) in situ estimates of the hydraulic conductivity tensor. This results in a continuous refinement of the reservoir model, where inverse modelling of fluid flow and heat transport allows inferring the uncertainty and resolution of the model at each iteration step. This finally yields a calibrated reservoir model which may be used to direct further exploration by optimizing additional borehole locations, estimate the uncertainty of key operational and economic parameters, and optimize the long-term operation of a geothermal reservoir.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
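The accept/reject mechanism described above is essentially Metropolis-Hastings sampling over a road load equation. The sketch below illustrates the idea on synthetic data; the road load form, parameter values, noise level, and proposal widths are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "logged" drive data (hypothetical): speed (m/s) and acceleration (m/s^2)
v = rng.uniform(5, 25, 200)
a = rng.normal(0, 0.3, 200)

def road_load(theta, v, a, rho=1.2, g=9.81):
    """Simple road load: inertial force + rolling resistance + aerodynamic drag."""
    m, cda, crr = theta
    return m * a + m * g * crr + 0.5 * rho * cda * v**2

true_theta = (15000.0, 5.5, 0.007)          # mass (kg), CdA (m^2), Crr (assumed)
f_meas = road_load(true_theta, v, a) + rng.normal(0, 200, v.size)

def log_post(theta):
    m, cda, crr = theta
    if not (1000 < m < 40000 and 0 < cda < 15 and 0 < crr < 0.05):
        return -np.inf                       # flat prior over physical bounds
    resid = f_meas - road_load(theta, v, a)
    return -0.5 * np.sum((resid / 200.0) ** 2)

# Metropolis-Hastings: accept a proposal by the posterior probability ratio
theta = np.array([10000.0, 4.0, 0.01])
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, [200.0, 0.1, 0.0005])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())

chain = np.array(chain[1000:])               # drop burn-in
print(chain.mean(axis=0))                    # posterior means near the true values
```

The retained chain approximates the posterior distribution, so its spread conveys the estimate quality and parameter range discussed above.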
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
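The point-density effect discussed above can be probed with a simple thinning experiment: compute a height metric from a full-density point cloud, randomly thin the cloud, and recompute. The height distribution, threshold, and metric below are hypothetical stand-ins, not the study's data or predictors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical normalized LiDAR return heights (m) within one sampling cell
heights = rng.gamma(shape=4.0, scale=0.5, size=2000)

def canopy_metric(z, height_threshold=0.5):
    """Mean height of returns above a threshold -- one common LiDAR
    predictor used in crop height/biomass regressions."""
    z = z[z > height_threshold]
    return z.mean()

full = canopy_metric(heights)
# Thin to ~10% of the original point density and recompute the metric
thinned = canopy_metric(rng.choice(heights, size=200, replace=False))
print(full, thinned)   # similar values: reduced density can still be adequate
```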
Thickness of the particle swarm in cosmic ray air showers
NASA Technical Reports Server (NTRS)
Linsley, J.
1985-01-01
The average dispersion in arrival time of air shower particles detected with a scintillator at an impact parameter r is described with 5-10% accuracy by the empirical formula σ_t = σ_t0 (1 + r/r_t)^b, where σ_t0 = 2.6 ns, r_t = 30 m, and b = (1.94 ± 0.08) − (0.39 ± 0.06) sec θ, for r < 2 km, 10^8 < E < 10^11 GeV, and θ < 60° (E is the primary energy and θ is the zenith angle). The amount of fluctuation in σ_t due to fluctuations in the level of origin and shower development is less than 20%. These results provide a basis for estimating the impact parameters of very large showers with data from very small detector arrays (mini-arrays). The energy of such showers can then be estimated from the local particle density. The formula also provides a basis for estimating the angular resolution of air shower array-telescopes.
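Using the central values of the fit, the empirical thickness formula σ_t = σ_t0 (1 + r/r_t)^b with b = 1.94 − 0.39 sec θ can be evaluated directly; a minimal sketch:

```python
import math

def sigma_t(r_m, theta_deg, sigma_t0=2.6, r_t=30.0):
    """Empirical shower-front thickness (ns) at impact parameter r (m),
    using the central values of the fitted coefficients quoted above."""
    b = 1.94 - 0.39 / math.cos(math.radians(theta_deg))
    return sigma_t0 * (1.0 + r_m / r_t) ** b

print(sigma_t(30.0, 0.0))   # at r = r_t for a vertical shower: 2.6 * 2**1.55 ns
```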
Estimation of genetic parameters related to eggshell strength using random regression models.
Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K
2015-01-01
This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. In the random regression models (RRMs), Legendre polynomials of several orders were employed to model the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (>0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
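In random regression models of this kind, each effect is expanded in Legendre polynomials of the standardized test week. A minimal sketch of building such covariates follows; the week range is illustrative, not the study's exact design matrix.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_covariates(t, t_min, t_max, order):
    """Map test week t onto x in [-1, 1], then return the columns
    P_0(x)..P_order(x) used as random-regression covariates."""
    x = -1.0 + 2.0 * (np.asarray(t, float) - t_min) / (t_max - t_min)
    return legvander(x, order)

weeks = np.arange(20, 61)                                # hypothetical laying weeks
Z_genetic = legendre_covariates(weeks, 20, 60, order=2)  # 2nd order: genetic effect
Z_pe = legendre_covariates(weeks, 20, 60, order=3)       # 3rd order: permanent env.
print(Z_genetic.shape, Z_pe.shape)                       # (41, 3) (41, 4)
```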
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data can serve as the training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
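The initial MSM construction step can be sketched generically: discretized MD trajectories are converted into a transition count matrix at a chosen lag time and row-normalized into transition probabilities. This toy example shows the plain maximum-likelihood estimate, not the authors' experiment-driven refinement.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Maximum-likelihood MSM: count transitions at the given lag time,
    then row-normalize the counts into a transition probability matrix."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

# Toy discretized trajectory over 2 conformational states
dtraj = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])
T = estimate_msm(dtraj, 2)
print(T)   # row-stochastic transition matrix
```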
Incorporating structure from motion uncertainty into image-based pose estimation
NASA Astrophysics Data System (ADS)
Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen
2015-05-01
A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are utilized to form a geo-registered reference model of a particular ground area using imagery gathered during flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by any of the other onboard systems (e.g. as a replacement for GPS data) or by downstream exploitation systems, e.g., image-based object trackers. However, many of the consumers of pose estimates require an assessment of the pose accuracy. The method for generating this accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle Adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware is not able to support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters that are needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve the pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented, along with results including those gathered from flight test data.
NASA Astrophysics Data System (ADS)
Li, Chao-Ying; Liu, Shi-Fei; Fu, Jin-Xian
2015-11-01
High-order perturbation formulas for a 3d9 ion in a rhombically elongated octahedron were applied to calculate the electron paramagnetic resonance (EPR) parameters (the g factors, gi, and the hyperfine structure constants Ai, i = x, y, z) of the rhombic Cu2+ center in CoNH4PO4.6H2O. In the calculations, the required crystal-field parameters are estimated from the superposition model, which enables correlation of the crystal-field parameters, and hence the EPR parameters, with the local structure of the rhombic Cu2+ center. Based on the calculations, the ligand octahedron (i.e., the [Cu(H2O)6]2+ cluster) is found to experience local bond length variations ΔZ (≈0.213 Å) and δr (≈0.132 Å) along the axial and perpendicular directions due to the Jahn-Teller effect. Theoretical EPR parameters based on the above local structure are in good agreement with the observed values; the results are discussed.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
NASA Astrophysics Data System (ADS)
Lee, Ching Hua; Gan, Chee Kwan
2017-07-01
Phonon-mediated thermal conductivity, which is of great technological relevance, arises fundamentally from anharmonic scattering due to interatomic potentials. Despite its prevalence, accurate first-principles calculations of thermal conductivity remain challenging, primarily due to the high computational cost of anharmonic interatomic force constant (IFC) calculations. Meanwhile, the related anharmonic phenomenon of thermal expansion is much more tractable, being computable from the Grüneisen parameters associated with phonon frequency shifts due to crystal deformations. In this work, we propose an approach for computing the largest cubic IFCs from the Grüneisen parameter data. This allows an approximate determination of the thermal conductivity via a much less expensive route. The key insight is that although the Grüneisen parameters cannot possibly contain all the information on the cubic IFCs, being derivable from spatially uniform deformations, they can still unambiguously and accurately determine the largest and most physically relevant ones. By fitting the anisotropic Grüneisen parameter data along judiciously designed deformations, we can deduce (i.e., reverse-engineer) the dominant cubic IFCs and estimate three-phonon scattering amplitudes. We illustrate our approach by explicitly computing the largest cubic IFCs and thermal conductivity of graphene, especially for its out-of-plane (flexural) modes that exhibit anomalously large anharmonic shifts and thermal conductivity contributions. Our calculations on graphene not only exhibit reasonable agreement with established density-functional theory results, but they also present a pedagogical opportunity for introducing an elegant analytic treatment of the Grüneisen parameters of generic two-band models. Our approach can be readily extended to more complicated crystalline materials with nontrivial anharmonic lattice effects.
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for the magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, uncertainties are treated by sensitivity analyses for key modelling parameters. Quantifying the uncertainty related to each input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes.
Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is the factor of 2 intrinsic to the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model
NASA Astrophysics Data System (ADS)
Keum, H.; Han, K.; Kim, H.; Ha, C.
2017-12-01
The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results from the overflow of sewers. Therefore, for accurate two-dimensional flood analysis, it is important to model the network system that is intricately entangled within a city so that it matches the actual physical situation, together with accurate terrain, because of the effects of buildings and roads. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The procedure for establishing the optimal scenario was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration range of the parameters was determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The correlation coefficient was high for the scenarios using PEST. The RPE and RMSE also showed high accuracy for the scenarios using PEST. In the case of RPE, the error was in the range of 13.9-28.9% in the scenarios without parameter estimation, but in the scenarios using PEST, the error range was reduced to 6.8-25.7%.
Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimal scenario is selected by determining the appropriate reference conduit for future urban flood analysis and when the results are applied to various rainfall event scenarios and parameter optimization. Keywords: parameter optimization; PEST model; urban area. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
Uncertainty analysis of a three-parameter Budyko-type equation at annual and monthly time scales
NASA Astrophysics Data System (ADS)
Mianabadi, Ameneh; Alizadeh, Amin; Sanaeinejad, Hossein; Ghahraman, Bijan; Davary, Kamran; Shahedi, Mehri; Talebi, Fatemeh
2017-04-01
The Budyko curves can estimate mean annual evaporation at catchment scale as a function of precipitation and potential evaporation. They are used for steady-state catchments with negligible water storage change. In non-steady-state catchments, especially irrigated ones, and at small spatial and temporal scales, the water storage change is not negligible and, therefore, the Budyko curves are limited. In these cases, in addition to precipitation, other water resources are available for evaporation, including groundwater depletion and initial soil moisture. Therefore, evaporation exceeds precipitation and the data do not follow the original Budyko framework. In this study, the two-parameter Budyko equation of Greve et al. (2016) was considered. They proposed a Budyko-type equation in which they changed the boundary condition of the water-limited line and added a new parameter to the Fu equation. Based on Chen et al. (2013)'s suggestion, in arid regions where the aridity index is more than one, the Budyko curve can be shifted to the right along the aridity index axis. Therefore, in this study, we combined Greve et al. (2016)'s equation and Chen et al. (2013)'s equation and proposed a new equation with three parameters (y0, k, c) to estimate the monthly and annual evaporation of five semi-arid watersheds in the Kavir-e-Markazi basin: E/P = F(φ, y0, k, c) = 1 + (φ − c) − (1 + (1 − y0)^(k−1) (φ − c)^k)^(1/k). In this equation E, P and φ are evaporation, precipitation and the aridity index, respectively. To calibrate the new Budyko curve, we used the evaporation estimated by the water balance equation for 11 water years (2002-2012). Due to the variability of watershed characteristics and climate conditions, we used GLUE (Generalized Likelihood Uncertainty Estimation) to calibrate the proposed equation and increase the reliability of the model.
Based on the GLUE, the parameter sets with the highest likelihood values were estimated as y0 = 0.02, k = 3.70 and c = 3.61 at the annual scale and y0 = 0.07, k = 2.50 and c = 0.97 at the monthly scale. The results showed that the proposed equation can estimate the annual evaporation reasonably well, with R2 = 0.93 and RMSE = 18.5 mm year-1. It can also estimate evaporation at the monthly scale, with R2 = 0.88 and RMSE = 7.9 mm month-1. The posterior distribution functions of the parameters showed that parameter uncertainty decreases with the GLUE method; this uncertainty reduction (and therefore the sensitivity of the equation to the parameters) differs for each parameter. Chen, X., Alimohammadi, N., Wang, D. 2013. Modeling interannual variability of seasonal evaporation and storage change based on the extended Budyko framework. Water Resources Research, 49(9): 6067-6078. Greve, P., Gudmundsson, L., Orlowsky, B., Seneviratne, S.I. 2016. A two-parameter Budyko function to represent conditions under which evapotranspiration exceeds precipitation. Hydrology and Earth System Sciences, 20(6): 2195-2205. DOI:10.5194/hess-20-2195-2016.
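The proposed three-parameter curve, E/P = 1 + (φ − c) − (1 + (1 − y0)^(k−1)(φ − c)^k)^(1/k), is straightforward to evaluate; a minimal sketch using the annual-scale parameter values reported above:

```python
def budyko3(phi, y0, k, c):
    """Three-parameter Budyko-type curve: E/P as a function of the aridity
    index phi. Valid for phi >= c (the shifted curve starts at phi = c)."""
    x = phi - c
    return 1.0 + x - (1.0 + (1.0 - y0) ** (k - 1.0) * x ** k) ** (1.0 / k)

# Annual-scale parameters from the GLUE calibration quoted above
print(budyko3(5.0, y0=0.02, k=3.70, c=3.61))   # E/P for an aridity index of 5
```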
Effects of correlation in transition radiation of super-short electron bunches
NASA Astrophysics Data System (ADS)
Danilova, D. K.; Tishchenko, A. A.; Strikhanov, M. N.
2017-07-01
The effect of correlations between electrons in transition radiation is investigated. The correlation function is obtained with the help of an approach similar to the Debye-Hückel theory. The corrections due to correlations are estimated to be near 2-3% for the parameters of the future projects SINBAD and FLUTE for bunches with extremely small lengths (∼1-10 fs). For bunches with about ∼2.5×10^10 electrons or more, and short enough that the radiation would be coherent, the corrections due to correlations are predicted to reach 20%.
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
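The Cramer-Rao bound discussed above is the inverse of the Fisher information matrix; for a model that is linear in its parameters with Gaussian noise this reduces to (XᵀX/σ²)⁻¹. The following illustration uses a simple linear model as a stand-in for the dynamic aircraft model, with arbitrary numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-in-parameters stand-in for a dynamic model: y = X @ theta + noise
X = rng.normal(size=(100, 2))
sigma = 0.5
theta_true = np.array([1.0, -2.0])
y = X @ theta_true + rng.normal(0, sigma, 100)

# Fisher information for Gaussian noise; the Cramer-Rao bound is its inverse
F = X.T @ X / sigma**2
crb = np.linalg.inv(F)

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.sqrt(np.diag(crb)))   # lower bound on the parameter standard errors
```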
The Impact of AMSR-E Soil Moisture Assimilation on Evapotranspiration Estimation
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Kumar, Sujay; Mocko, David; Tian, Yudong
2012-01-01
An assessment of ET estimates for current LDAS systems is provided, along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the ensemble Kalman filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons over the NLDAS-2 domain with two global reference ET products, one based on interpolated flux tower data and one from a new satellite ET algorithm, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.
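The analysis step of the ensemble Kalman filter used in such assimilation systems can be sketched generically. This two-layer soil moisture toy illustrates the stochastic EnKF update with perturbed observations; it is not the Land Information System implementation, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, r, H):
    """Stochastic EnKF analysis: X is (members, state), y the observation
    vector, r the observation error variances, H a linear observation operator."""
    n = X.shape[0]
    A = X - X.mean(axis=0)                   # state anomalies
    HX = X @ H.T                             # predicted observations
    D = HX - HX.mean(axis=0)                 # predicted-observation anomalies
    Pxy = A.T @ D / (n - 1)                  # state/obs cross-covariance
    Pyy = D.T @ D / (n - 1) + np.diag(r)     # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)             # ensemble Kalman gain
    y_pert = y + rng.normal(0, np.sqrt(r), size=HX.shape)  # perturbed obs
    return X + (y_pert - HX) @ K.T

# Two-layer soil moisture state; only the surface layer is observed
H = np.array([[1.0, 0.0]])
X = rng.normal([0.30, 0.25], 0.05, size=(100, 2))   # prior ensemble
Xa = enkf_update(X, y=np.array([0.20]), r=np.array([0.0004]), H=H)
print(Xa[:, 0].mean())   # surface layer pulled toward the 0.20 observation
```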
NASA Astrophysics Data System (ADS)
Termini, Donatella
2013-04-01
Recent catastrophic events due to intense rainfalls have mobilized large amounts of sediments, causing extensive damage over vast areas. These events have highlighted how debris-flow runout estimations are of crucial importance to delineate the potentially hazardous areas and to make reliable assessments of the level of risk of the territory. Especially in recent years, several research efforts have been conducted in order to define predictive models. But existing runout estimation methods need input parameters that can be difficult to estimate. Recent experimental research has also allowed the assessment of the physics of debris flows. But the major part of the experimental studies analyzes the basic kinematic conditions which determine the phenomenon's evolution. An experimental program has recently been conducted at the Hydraulics laboratory of the Department of Civil, Environmental, Aerospace and Materials Engineering (DICAM) - University of Palermo (Italy). The experiments, carried out in a purposely constructed laboratory flume, were planned in order to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation phenomenon of the debris flow and its deposition. Thus, the aim of the present work is to give a contribution to defining the input parameters in runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid material. Particular attention is devoted to the identification of the stopping distance of the debris flow and of the parameters involved (volume, angle of deposition, type of material) in the empirical predictive equations available in the literature (Rickenmann, 1999; Bathurst et al., 1997). Bathurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of Hydraulic Engineering, ASCE, 123(5), 419-429.
Rickenmann D. 1999. Empirical relationships for debris flows. Natural Hazards, 19, 47-77.
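For reference, the empirical runout relation of Rickenmann (1999) expresses the total travel distance from the event volume V and the elevation drop H as L ≈ 1.9 V^0.16 H^0.83. The sketch below evaluates it; treat the coefficients as indicative and verify them against the original paper before use.

```python
def rickenmann_runout(volume_m3, drop_m):
    """Empirical total travel distance L = 1.9 * V**0.16 * H**0.83 (m),
    after Rickenmann (1999); coefficients quoted from memory, verify before use."""
    return 1.9 * volume_m3 ** 0.16 * drop_m ** 0.83

print(rickenmann_runout(10000.0, 500.0))   # travel distance (m) for a 10^4 m^3 event
```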
Black carbon aerosol size in snow.
Schwarz, J P; Gao, R S; Perring, A E; Spackman, J R; Fahey, D W
2013-01-01
The effect of anthropogenic black carbon (BC) aerosol on snow is of enduring interest due to its consequences for climate forcing. Until now, too little attention has been focused on BC's size in snow, an important parameter affecting BC light absorption in snow. Here we present the first observations of this parameter, revealing that BC can be shifted to larger sizes in snow than are typically seen in the atmosphere, in part due to the processes associated with BC removal from the atmosphere. Mie theory analysis indicates a corresponding reduction in BC absorption in snow of 40%, making BC size in snow the dominant source of uncertainty in BC's absorption properties for calculations of BC's snow albedo climate forcing. The shift reduces estimated BC global mean snow forcing by 30%, and has scientific implications for our understanding of snow albedo and the processing of atmospheric BC aerosol in snowfall.
A normalisation framework for (hyper-)spectral imagery
NASA Astrophysics Data System (ADS)
Grumpe, Arne; Zirin, Vladimir; Wöhler, Christian
2015-06-01
It is well known that the topography has an influence on the observed reflectance spectra. This influence is not compensated by spectral ratios, i.e. the effect is wavelength dependent. In this work, we present a complete normalisation framework. The surface temperature is estimated based on the measured surface reflectance. To normalise the spectral reflectance with respect to a standard illumination geometry, spatially varying reflectance parameters are estimated based on a non-linear reflectance model. The reflectance parameter estimation has one free parameter, i.e. a low-pass function, which sets the scale of the spatial variation, i.e. the lateral resolution of the reflectance parameter maps. Since the local surface topography has a major influence on the measured reflectance, the often-neglected shading information is extracted from the spectral imagery and an existing topography model is refined to image resolution. All methods are demonstrated on the Moon Mineralogy Mapper dataset. Additionally, two empirical methods are introduced that deal with observed systematic reflectance changes in co-registered images acquired at different phase angles. These effects, however, may also be caused by the sensor temperature, due to its correlation with the phase angle. Surface temperatures above 300 K are detected and are very similar to a reference method. The proposed method, however, seems more robust in the case of absorptions visible in the reflectance spectrum near 2000 nm. By introducing a low-pass filter into the computation of the reflectance parameters, the reflectance behaviour of the surfaces may be derived at different scales. This allows for an iterative refinement of the local surface topography using shape from shading and the computation of reflectance parameters. The inferred parameters are derived from all available co-registered images and do not show significant influence of the local surface topography.
The results of the empirical correction show that both proposed methods greatly reduce the influence of different phase angles or sensor temperatures.
Multispectrum retrieval techniques applied to Venus deep atmosphere and surface problems
NASA Astrophysics Data System (ADS)
Kappel, David; Arnold, Gabriele; Haus, Rainer
The Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) aboard ESA's Venus Express is continuously collecting nightside emission data (among others) from Venus. A radiative transfer model of Venus' atmosphere in conjunction with a suitable retrieval algorithm can be used to estimate atmospheric and surface parameters by fitting simulated spectra to the measured data. Because of the limited spectral resolution of the VIRTIS-M-IR spectra that have been used so far, many different parameter sets can explain the same measurement equally well. As a common regulative measure, reasonable a priori knowledge of some parameters is applied to suppress solutions implausibly far from the expected range. It is beneficial to introduce a parallel coupled retrieval of several measurements. Since spatially and temporally contiguous measurements are not expected to originate from completely unrelated parameters, an assumed a priori correlation of the parameters during the retrieval can help to reduce arbitrary fluctuations of the solutions, to avoid subsidiary solutions, and to attenuate the interference of measurement noise by keeping the parameters close to a general trend. As an illustration, the resulting improvements for some swaths on the Northern hemisphere are presented. Some atmospheric features are still not very well constrained, for instance CO2 absorption under the extreme environmental conditions close to the surface. A broad-band continuum due to far-wing and collision-induced absorptions is commonly used to correct individual line absorption. Since the spectrally dependent continuum is constant for all measurements, the retrieval of parameters common to all spectra may be used to give some estimates of the continuum absorption. These estimates are necessary, for example, for the coupled parallel retrieval of a consistent local cloud modal composition, which in turn enables a refined surface emissivity retrieval.
We gratefully acknowledge the support from the VIRTIS/Venus Express Team, from ASI, CNES, CNRS, and from the DFG funding the ongoing work.
Multisubstrate biodegradation kinetics of naphthalene, phenanthrene, and pyrene mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, S.; Peters, C.A.; Jaffe, P.R.
Biodegradation kinetics of naphthalene, phenanthrene and pyrene were studied in sole-substrate systems and in binary and ternary mixtures to examine substrate interactions. The experiments were conducted in aerobic batch aqueous systems inoculated with a mixed culture that had been isolated from soils contaminated with polycyclic aromatic hydrocarbons (PAHs). Monod kinetic parameters and yield coefficients for the individual compounds were estimated from substrate depletion and CO2 evolution rate data in sole-substrate experiments. In all three binary mixture experiments, biodegradation kinetics were comparable to the sole-substrate kinetics. In the ternary mixture, biodegradation of naphthalene was inhibited and the biodegradation rates of phenanthrene and pyrene were enhanced. A multisubstrate form of the Monod kinetic model was found to adequately predict substrate interactions in the binary and ternary mixtures using only the parameters derived from sole-substrate experiments. Numerical simulations of biomass growth kinetics explain the observed range of behaviors in PAH mixtures. In general, the biodegradation rates of the more degradable and abundant compounds are reduced due to competitive inhibition, but enhanced biodegradation of the more recalcitrant PAHs occurs due to simultaneous biomass growth on multiple substrates. In PAH-contaminated environments, substrate interactions may be very large due to additive effects from the large number of compounds present.
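A multisubstrate Monod model of the kind described above couples the substrates through a shared competitive saturation term and sums the per-substrate growth rates, which reproduces both the competitive inhibition and the growth-enhancement effects qualitatively. The sketch below uses hypothetical parameter values, not the estimates from these experiments.

```python
import numpy as np

def multisubstrate_monod_rates(S, X, mu_max, Ks, Y):
    """Competitive multisubstrate Monod: each substrate's saturation term is
    reduced by the other substrates competing for the same biomass.
    S: substrate conc. (mg/L), X: biomass (mg/L), mu_max/Ks/Y: per-substrate."""
    S = np.asarray(S, float)
    denom = 1.0 + np.sum(S / Ks)              # shared competitive denominator
    mu_i = mu_max * (S / Ks) / denom          # specific growth rate on each substrate
    dS = -mu_i * X / Y                        # substrate depletion rates
    dX = X * np.sum(mu_i)                     # growth on all substrates simultaneously
    return dS, dX

# Hypothetical values for a 3-PAH mixture (not the paper's estimated parameters)
mu_max = np.array([0.3, 0.05, 0.01])   # 1/h: naphthalene, phenanthrene, pyrene
Ks     = np.array([1.0, 0.5, 0.2])     # mg/L
Y      = np.array([0.5, 0.4, 0.3])     # mg biomass / mg substrate
dS, dX = multisubstrate_monod_rates([5.0, 1.0, 0.1], X=2.0,
                                    mu_max=mu_max, Ks=Ks, Y=Y)
print(dS, dX)   # all substrates depleted while the biomass grows
```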
Ultrasonic Characterization of Microstructural Changes in Ti-10V-4.5Fe-1.5Al β-Titanium Alloy
NASA Astrophysics Data System (ADS)
Viswanath, A.; Kumar, Anish; Jayakumar, T.; Purnachandra Rao, B.
2015-08-01
Ultrasonic measurements have been carried out in Ti-10V-4.5Fe-1.5Al β-titanium alloy specimens subjected to β annealing at 1173 K (900 °C) for 1 hour followed by heat treatment in the temperature range of 823 K to 1173 K (550 °C to 900 °C) at an interval of 50 K (50 °C) for 1 hour, followed by water quenching. Ultrasonic parameters such as ultrasonic longitudinal wave velocity, ultrasonic shear wave velocity, shear anisotropy parameter, ultrasonic attenuation, and normalized nonlinear ultrasonic parameter have been correlated with various microstructural changes to understand the interaction of the propagating ultrasonic wave with microstructural features in the alloy. Simulation studies using JMatPro® software and X-ray diffraction measurements have been carried out to estimate the α-phase volume fraction in the specimens heat treated below the β-transus temperature (BTT). It is found that the α-phase (HCP) volume fraction increases from 0 to 52 pct, with decrease in the temperature from 1073 K to 823 K (800 °C to 550 °C). Ultrasonic longitudinal and shear wave velocities are found to increase with decrease in the heat treatment temperature below the BTT, and they exhibited linear relationships with the α-phase volume fraction. Thickness-independent ultrasonic parameters, Poisson's ratio, and the shear anisotropy parameter exhibited the opposite behavior, i.e., decrease with increase in the α-phase consequent to decrease in the heat treatment temperature from 1073 K to 823 K (800 °C to 550 °C). Ultrasonic attenuation is found to decrease from 0.7 dB/mm for the β-annealed specimen to 0.23 dB/mm in the specimen heat treated at 823 K (550 °C) due to the combined effect of the decrease in the β-phase (BCC) with higher damping characteristics and the reduction in scattering due to randomization of β grains with the precipitation of α-phase. 
Normalized nonlinear ultrasonic parameter is found to increase with increase in the α-phase volume fraction due to increased interfacial strain. For the first time, quantitative correlations established between various ultrasonic parameters and the volume fraction of α-phase in a β-titanium alloy are reported in the present paper. The established correlations are useful for estimation of volume fraction of α-phase in heat-treated β-titanium alloy, by nondestructive ultrasonic measurements.
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
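The core multilevel Monte Carlo identity — writing the fine-level expectation as a coarse-level estimate plus cheap level-to-level corrections — can be illustrated with a toy stand-in for a pore-scale solver. The "simulator" and all numbers below are illustrative; coupling adjacent levels through the same random input is what keeps the correction terms low-variance.

```python
import random

# Toy multilevel Monte Carlo (MLMC) estimator. The "solver" is a stand-in
# for a pore-scale simulation: level l carries a discretization bias of
# 0.5**l on top of a true value of 1.0, plus noise driven by a random
# input w. Coupling adjacent levels through the same w makes the
# corrections nearly deterministic, so they need few samples.

def sample_level(l, w):
    """Toy quantity of interest at refinement level l for random input w."""
    return 1.0 + 0.5 ** l + 0.1 * w

def mlmc_estimate(n_per_level, rng):
    """E[P_L] estimated as E[P_0] + sum_l E[P_l - P_{l-1}].

    Coarse levels get many cheap samples; corrections get few.
    """
    total = 0.0
    for l, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            w = rng.gauss(0.0, 1.0)
            if l == 0:
                acc += sample_level(0, w)
            else:
                acc += sample_level(l, w) - sample_level(l - 1, w)
        total += acc / n
    return total

rng = random.Random(42)
# Three levels: the estimate targets the finest-level mean, 1 + 0.5**2 = 1.25;
# the remaining gap to the true value 1.0 is the level-2 discretization error.
est = mlmc_estimate([4000, 1000, 250], rng)
```

The sample allocation (many coarse, few fine) is where the cost saving over single-level Monte Carlo comes from; a production implementation would choose the per-level sample counts adaptively from estimated variances and costs.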
NASA Technical Reports Server (NTRS)
Huynh, Loc C.; Duval, R. W.
1986-01-01
The use of Redundant Asynchronous Multiprocessor Systems to achieve ultrareliable Fault Tolerant Control Systems shows great promise. Development has been hampered by the inability to determine whether differences in the outputs of redundant CPUs are due to failures or to accrued error built up by slight differences in CPU clock intervals. This study derives an analytical dynamic model of the difference between redundant CPUs due to differences in their clock intervals, and uses this model with on-line parameter identification to identify the differences in the clock intervals. The ability of this methodology to accurately track errors due to asynchronicity makes it possible to generate an error signal with the effect of asynchronicity removed, and this signal may be used to detect and isolate actual system failures.
Bayesian Source Attribution of Salmonellosis in South Australia.
Glass, K; Fearnley, E; Hocking, H; Raupach, J; Veitch, M; Ford, L; Kirk, M D
2016-03-01
Salmonellosis is a significant cause of foodborne gastroenteritis in Australia, and rates of illness have increased over recent years. We adopt a Bayesian source attribution model to estimate the contribution of different animal reservoirs to illness due to Salmonella spp. in South Australia between 2000 and 2010, together with 95% credible intervals (CrI). We excluded known travel-associated cases and those of rare subtypes (fewer than 20 human cases or fewer than 10 isolates from included sources over the 11-year period), and the remaining 76% of cases were classified as sporadic or outbreak associated. Source-related parameters were included to allow for different handling and consumption practices. We attributed 35% (95% CrI: 20-49) of sporadic cases to chicken meat and 37% (95% CrI: 23-53) of sporadic cases to eggs. Of outbreak-related cases, 33% (95% CrI: 20-62) were attributed to chicken meat and 59% (95% CrI: 29-75) to eggs. A comparison of alternative model assumptions indicated that biases due to possible clustering of samples from sources had relatively minor effects on these estimates. Analysis of source-related parameters showed higher risk of illness from contaminated eggs than from contaminated chicken meat, suggesting that consumption and handling practices potentially play a bigger role in illness due to eggs, considering the low Salmonella prevalence on eggs. Our results strengthen the evidence that eggs and chicken meat are important vehicles for salmonellosis in South Australia. © 2015 Society for Risk Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, T.; Karapatakis, D.; Lee, P.
2010-08-06
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) Regulatory Guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values by the applicant is encouraged. A detailed survey of land and water use parameters was conducted in 1991 and is being updated here. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors to be used in human health exposure calculations at SRS are documented.
Based on comparisons to the 2009 SRS environmental compliance doses, the following effects are expected in future SRS compliance dose calculations: (1) Aquatic all-pathway maximally exposed individual doses may go up about 10 percent due to changes in the aquatic bioaccumulation factors; (2) Aquatic all-pathway collective doses may go up about 5 percent due to changes in the aquatic bioaccumulation factors that offset the reduction in average individual water consumption rates; (3) Irrigation pathway doses to the maximally exposed individual may go up about 40 percent due to increases in the element-specific transfer factors; (4) Irrigation pathway collective doses may go down about 50 percent due to changes in food productivity and production within the 50-mile radius of SRS; (5) Air pathway doses to the maximally exposed individual may go down about 10 percent due to the changes in food productivity in the SRS area and to the changes in element-specific transfer factors; and (6) Air pathway collective doses may go down about 30 percent mainly due to the decrease in the inhalation rate assumed for the average individual.
Robust estimation of fetal heart rate from US Doppler signals
NASA Astrophysics Data System (ADS)
Voicu, Iulian; Girault, Jean-Marc; Roussel, Catherine; Decock, Aliette; Kouame, Denis
2010-01-01
Introduction: In utero monitoring of fetal well-being or distress is today an open challenge, due to the high number of clinical parameters to be considered. Automatic monitoring of fetal activity, dedicated to quantifying fetal well-being, becomes necessary. For this purpose, and with a view to supplying an alternative to the Manning test, we used an ultrasound multitransducer, multigate Doppler system. One important issue (and the first step in our investigation) is the accurate estimation of fetal heart rate (FHR). An estimate of the FHR is obtained by evaluating the autocorrelation function of the Doppler signals for both distressed and healthy fetuses. However, this estimator is not sufficiently robust, since about 20% of FHR values are not detected in comparison to a reference system. These non-detections are principally due to the fact that the Doppler signal generated by fetal movement is strongly disturbed by the presence of several other Doppler sources (the mother's movements, pseudo-breathing, etc.). By modifying the existing autocorrelation method and by proposing new time- and frequency-domain estimators used in the audio domain, we reduce the probability of non-detection of the fetal heart rate to 5%. These results are very encouraging, and they enable us to plan the use of automatic classification techniques to discriminate between healthy and distressed fetuses.
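The autocorrelation step described here — picking the lag with the strongest self-similarity inside the physiologically plausible range — can be sketched as follows. The synthetic test signal and all settings are illustrative; a real system operates on demodulated multigate Doppler channels.

```python
import math

# Autocorrelation-based heart-rate estimation on a synthetic Doppler
# envelope. Signal and settings are illustrative only.

def estimate_rate_bpm(signal, fs, min_bpm=60.0, max_bpm=240.0):
    """Pick the autocorrelation peak inside the physiological lag range."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]

    def acf(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))

    lo = int(fs * 60.0 / max_bpm)   # shortest plausible period, in samples
    hi = int(fs * 60.0 / min_bpm)   # longest plausible period
    best_lag = max(range(lo, hi + 1), key=acf)
    return 60.0 * fs / best_lag

fs = 100.0                          # envelope sampling rate, Hz
f_heart = 2.2                       # 2.2 Hz, i.e. 132 beats per minute
sig = [math.sin(2.0 * math.pi * f_heart * t / fs) for t in range(600)]
bpm = estimate_rate_bpm(sig, fs)
```

Restricting the lag search to the physiological band is what keeps harmonics and slow interferers (maternal movement, pseudo-breathing) from capturing the peak; the robustness improvements in the paper go further than this basic scheme.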
Mapping forest canopy fuels in Yellowstone National Park using lidar and hyperspectral data
NASA Astrophysics Data System (ADS)
Halligan, Kerry Quinn
The severity and size of wildland fires in the forested western U.S. have increased in recent years despite improvements in fire suppression efficiency. This, along with increased density of homes in the wildland-urban interface, has resulted in high costs for fire management and increased risks to human health, safety and property. Crown fires, in comparison to surface fires, pose an especially high risk due to their intensity and high rate of spread. Crown fire models require a range of quantitative fuel parameters which can be difficult and costly to obtain, but advances in lidar and hyperspectral sensor technologies hold promise for delivering these inputs. Further research is needed, however, to assess the strengths and limitations of these technologies and the most appropriate analysis methodologies for estimating crown fuel parameters from these data. This dissertation focuses on retrieving critical crown fuel parameters, including canopy height, canopy bulk density and proportion of dead canopy fuel, from airborne lidar and hyperspectral data. Remote sensing data were used in conjunction with detailed field data on forest parameters and surface reflectance measurements. A new method was developed for retrieving Digital Surface Models (DSM) and Digital Canopy Models (DCM) from first-return lidar data. Validation data on individual tree heights demonstrated the high accuracy (r2 = 0.95) of the DCMs developed via this new algorithm. Lidar-derived DCMs were used to estimate critical crown fire parameters including available canopy fuel, canopy height and canopy bulk density with linear regression model r2 values ranging from 0.75 to 0.85. Hyperspectral data were used in conjunction with Spectral Mixture Analysis (SMA) to assess fuel quality in the form of live versus dead canopy proportions.
Severity and stage of insect-caused forest mortality were estimated using the fractional abundance of green vegetation, non-photosynthetic vegetation and shade obtained from SMA. Proportion of insect attack was estimated with a linear model producing an r2 of 0.6 using SMA and bark endmembers from image and reference libraries. Fraction of red attack, with a possible link to increased crown fire risk, was estimated with an r2 of 0.45.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
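The hybrid strategy — a genetic algorithm for global exploration followed by a local gradient-based refinement, with a penalty term carrying the prior information on the least sensitive parameter — can be sketched on a toy two-parameter problem. The "head" model, the penalty weight, and all settings below are illustrative; a finite-difference descent stands in for the truncated-Newton step.

```python
import random

# Toy hybrid of a genetic algorithm (global search) with a local
# gradient-based refinement. All model details are illustrative.

def objective(p, heads_obs, prior):
    """Squared misfit to observed heads, plus a penalty tying the less
    sensitive parameter p[1] to its prior value."""
    heads_sim = [p[0] * x + p[1] for x in range(len(heads_obs))]
    misfit = sum((hs - ho) ** 2 for hs, ho in zip(heads_sim, heads_obs))
    return misfit + 10.0 * (p[1] - prior) ** 2

def ga_then_local(heads_obs, prior, rng, gens=40, pop=20):
    population = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop)]
    for _ in range(gens):                 # GA: truncation selection + mutation
        population.sort(key=lambda p: objective(p, heads_obs, prior))
        parents = population[: pop // 2]
        children = [[g + rng.gauss(0.0, 0.3) for g in rng.choice(parents)]
                    for _ in range(pop - len(parents))]
        population = parents + children
    best = min(population, key=lambda p: objective(p, heads_obs, prior))
    for _ in range(200):                  # local finite-difference descent
        f0 = objective(best, heads_obs, prior)
        grad = []
        for i in range(2):
            q = list(best)
            q[i] += 1e-5
            grad.append((objective(q, heads_obs, prior) - f0) / 1e-5)
        best = [b - 1e-3 * g for b, g in zip(best, grad)]
    return best

rng = random.Random(0)
true_p = [2.0, 1.0]
obs = [true_p[0] * x + true_p[1] for x in range(10)]
est = ga_then_local(obs, prior=1.0, rng=rng)
```

The division of labor mirrors the paper's design: the GA supplies a starting point good enough that the local search converges to the global (not a local) optimum, while the prior term pins down the direction the data constrain weakly.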
Brandsch, Rainer
2017-10-01
Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by first introducing upper-limit concepts, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as other model inputs. With respect to a functional barrier, the most important parameters, among others, are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented through three case studies: (1) sensitivity analysis, (2) functional barrier efficiency, and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
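The Monte Carlo scheme described here — drawing the diffusion coefficient and barrier thickness from assumed input distributions and reporting percentiles of the resulting migration quantity — can be sketched as follows. The distributions, the classical diffusion lag-time formula used as the output quantity, and all numbers are illustrative.

```python
import random

# Monte Carlo sketch of probabilistic migration modelling for a functional
# barrier: sample diffusion coefficient and thickness, then report
# percentiles of a simple lag-time proxy. Numbers are illustrative.

def lag_time_days(D_cm2_s, thickness_cm):
    """Classical diffusion lag time t_lag = L**2 / (6 D), converted to days."""
    return thickness_cm ** 2 / (6.0 * D_cm2_s) / 86400.0

def monte_carlo_lag(n, rng):
    samples = []
    for _ in range(n):
        D = 10.0 ** rng.gauss(-12.0, 0.5)      # log-normal D around 1e-12 cm^2/s
        L = max(rng.gauss(0.01, 0.002), 1e-4)  # barrier thickness ~100 um
        samples.append(lag_time_days(D, L))
    samples.sort()
    return samples[int(0.05 * n)], samples[n // 2], samples[int(0.95 * n)]

rng = random.Random(1)
p5, p50, p95 = monte_carlo_lag(5000, rng)      # 5th/50th/95th percentiles, days
```

Reporting the percentile spread rather than a single worst-case number is exactly the shift from upper-limit concepts to probabilistic modelling that the abstract describes: the 95th percentile supports a conservative safety judgment without the gross overestimation of a deterministic worst case.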
Guaranteed convergence of the Hough transform
NASA Astrophysics Data System (ADS)
Soffer, Menashe; Kiryati, Nahum
1995-01-01
The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into the problem of finding the global maximum of a two-dimensional function over a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function over a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge-point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
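A minimal straight-line Hough transform with the normal (rho, theta) parameterization makes the quantization question concrete. The accumulator resolution below is an arbitrary choice; the paper's contribution is precisely the conditions under which such a grid is fine enough not to miss the true maximum.

```python
import math

# Minimal straight-line Hough transform, normal (rho, theta) form.
# Grid resolutions here are arbitrary illustrative choices.

def hough_peak(points, n_theta=180, n_rho=200, rho_max=10.0):
    """Vote over a (theta, rho) grid and return the winning line."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1) + 0.5)
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    _, ti, ri = max((acc[t][r], t, r)
                    for t in range(n_theta) for r in range(n_rho))
    return math.pi * ti / n_theta, (2.0 * rho_max) * ri / (n_rho - 1) - rho_max

# Collinear points on the line y = 2, i.e. theta = pi/2, rho = 2.
pts = [(float(x), 2.0) for x in range(-5, 6)]
theta_hat, rho_hat = hough_peak(pts)
```

With noisy edge points, votes from a single line smear across neighboring cells; if the grid is too fine relative to that smear the peak fragments, and if it is too coarse distinct structures merge. This trade-off is what the paper's quantization bounds formalize.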
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions; moreover, Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions; (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits; and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures.
The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
Novel Estimation of Pilot Performance Characteristics
NASA Technical Reports Server (NTRS)
Bachelder, Edward N.; Aponso, Bimal
2017-01-01
Two mechanisms internal to the pilot that affect performance during a tracking task are: 1) pilot equalization (i.e., lead/lag); and 2) pilot gain (i.e., sensitivity to the error signal). For some applications McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing that is required due to equalization difficulty, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability and performance and reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise measurement in real time of key pilot control parameters. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps toward developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in being able to precisely predict the actual pilot settings and workload, as well as the observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.
Ayalew, Wondossen; Aliy, Mohammed; Negussie, Enyew
2017-11-01
This study estimated genetic parameters for productive and reproductive traits. The data included production and reproduction records of animals that calved between 1979 and 2013. The genetic parameters were estimated using the multivariate mixed models (DMU) package, fitting univariate and multivariate mixed models with the average information restricted maximum likelihood algorithm. The estimates of heritability for milk production traits from the first three lactation records were 0.03±0.03 for lactation length (LL), 0.17±0.04 for lactation milk yield (LMY), and 0.15±0.04 for 305-day milk yield (305-d MY). For reproductive traits the heritability estimates were 0.09±0.03 for days open (DO), 0.11±0.04 for calving interval (CI), and 0.47±0.06 for age at first calving (AFC). The repeatability estimates for production traits were 0.12±0.02 for LL, 0.39±0.02 for LMY, and 0.25±0.02 for 305-d MY. For reproductive traits the estimates of repeatability were 0.19±0.02 for DO and 0.23±0.02 for CI. The phenotypic correlations between production and reproduction traits ranged from 0.08±0.04 for LL and AFC to 0.42±0.02 for LL and DO. The genetic correlations among production traits were generally high (>0.7), and between reproductive traits the estimates ranged from 0.06±0.13 for AFC and DO to 0.99±0.01 between CI and DO. Genetic correlations of productive traits with reproductive traits ranged from -0.02 to 0.99. The high heritability estimate observed for AFC indicates that reasonable genetic improvement for this trait might be possible through selection. The h2 and r estimates for reproductive traits were slightly different between single-trait and multi-trait analyses of reproductive traits with production traits. As the single-trait method is biased due to selection on milk yield, a multi-trait evaluation of fertility with milk yield is recommended.
Estimates of the atmospheric parameters of M-type stars: a machine-learning perspective
NASA Astrophysics Data System (ADS)
Sarro, L. M.; Ordieres-Meré, J.; Bello-García, A.; González-Marcos, A.; Solano, E.
2018-05-01
Estimating the atmospheric parameters of M-type stars has been a difficult task due to the lack of simple diagnostics in the stellar spectra. We aim at uncovering good sets of predictive features of stellar atmospheric parameters (Teff, log(g), [M/H]) in spectra of M-type stars. We define two types of potential features (equivalent widths and integrated flux ratios) able to explain the atmospheric physical parameters. We search the space of feature sets using a genetic algorithm that evaluates solutions by their prediction performance in the framework of the BT-Settl library of stellar spectra. Thereafter, we construct eight regression models using different machine-learning techniques and compare their performances with those obtained using the classical χ2 approach and independent component analysis (ICA) coefficients. Finally, we validate the various alternatives using two sets of real spectra from the NASA Infrared Telescope Facility (IRTF) and Dwarf Archives collections. We find that the cross-validation errors are poor measures of the performance of regression models in the context of physical parameter prediction in M-type stars. For R ≈ 2000 spectra with signal-to-noise ratios typical of the IRTF and Dwarf Archives collections, feature selection with genetic algorithms or alternative techniques produces only marginal advantages with respect to representation spaces that are unconstrained in wavelength (full spectrum or ICA). We make available the atmospheric parameters for the two collections of observed spectra as online material.
Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1991-01-01
This document is a revision of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling of the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised to reflect the computer code more faithfully.
A study into the loss of lock of the space telescope fine guidance sensor
NASA Technical Reports Server (NTRS)
Polites, M. E.
1983-01-01
The results of a study into the loss of lock phenomenon associated with the Space Telescope Fine Guidance Sensor (FGS) are documented. The primary cause of loss of lock has been found to be a combination of cosmic ray spikes and photon noise due to a 14.5 Mv star. The probability of maintaining lock versus time is estimated both for the baseline FGS design and with parameter changes in the FGS firmware which will improve the probability of maintaining lock. The parameters varied are changeable in-flight from the ground and hence do not impact the design of the FGS hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larraga-Gutierrez, J. M.; Ballesteros-Zebadua, P.; Garcia-Garduno, O. A.
2008-08-11
Radiation transmission, leakage, and beam penumbra are essential dosimetric parameters related to the commissioning of a multileaf collimation system. This work shows a comparative analysis of commonly used film detectors: X-OMAT V2 and EDR2 radiographic films, and GafChromic® EBT radiochromic film. The results show that X-OMAT V2 overestimates radiation leakage and the 80-20% beam penumbra. However, according to the reference values reported by the manufacturer for these dosimetric parameters, all three films are adequate for MLC dosimetric characterization, but special care must be taken when X-OMAT V2 film is used, due to its low-energy photon dependence.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Wang, Kon-Sheng Charles Wang
1996-01-01
The lateral-directional stability and control derivatives of the X-29A number 2 are extracted from flight data over an angle-of-attack range of 4 degrees to 53 degrees using a parameter identification algorithm. The algorithm uses the linearized aircraft equations of motion and a maximum likelihood estimator in the presence of state and measurement noise. State noise is used to model the uncommanded forcing function caused by unsteady aerodynamics over the aircraft at angles of attack above 15 degrees. The results supported the flight-envelope-expansion phase of the X-29A number 2 by helping to update the aerodynamic mathematical model, to improve the real-time simulator, and to revise flight control system laws. Effects of the aircraft high gain flight control system on maneuver quality and the estimated derivatives are also discussed. The derivatives are plotted as functions of angle of attack and compared with the predicted aerodynamic database. Agreement between predicted and flight values is quite good for some derivatives such as the lateral force due to sideslip, the lateral force due to rudder deflection, and the rolling moment due to roll rate. The results also show significant differences in several important derivatives such as the rolling moment due to sideslip, the yawing moment due to sideslip, the yawing moment due to aileron deflection, and the yawing moment due to rudder deflection.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to select the model tuning parameters that minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
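The under-determined tuner-selection step can be illustrated with a toy linear problem. The measurement matrix, noise levels, and brute-force subset search below are illustrative stand-ins, not the engine model or the paper's iterative multi-variable routine:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy under-determined setup: n = 6 health parameters, m = 4 sensors.
n, m = 6, 4
H = rng.normal(size=(m, n))   # hypothetical sensor sensitivity matrix
R = 0.1 * np.eye(m)           # sensor noise covariance
# Prior covariance of the health parameters is taken as the identity.

def total_mse(idx):
    """Theoretical mean-squared error over ALL n parameters when only
    the subset `idx` is used as tuners (others stay at their priors)."""
    Hs = H[:, idx]
    S = Hs @ Hs.T + R                     # innovation covariance (prior = I)
    K = Hs.T @ np.linalg.inv(S)           # MMSE gain for the subset
    P_sub = np.eye(len(idx)) - K @ Hs     # posterior covariance of the subset
    # Unestimated parameters keep their prior variance of 1 each.
    return float(np.trace(P_sub)) + (n - len(idx))

# Exhaustive search over all m-element tuner subsets; brute force
# suffices at this size.
subsets = list(combinations(range(n), m))
best = min(subsets, key=lambda s: total_mse(list(s)))
print("best tuner subset:", best, "MSE:", round(total_mse(list(best)), 3))
```

The subset minimizing the theoretical MSE plays the role of the model tuning parameter vector; the selection is done once, offline, so it adds no burden to the onboard filter.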
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in data sets where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, or than when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website. PMID:22711950
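A minimal sketch of why shrinking noisy per-variable variances toward a pooled value helps when p >> n. This is a fixed-weight toy with simulated data, not the paper's adaptive, cluster-based joint regularization:

```python
import numpy as np

rng = np.random.default_rng(1)

# p variables, tiny sample size n (p >> n): per-variable variance
# estimates are noisy, which is the setting the paper targets.
p, n = 2000, 4
true_sd = rng.uniform(0.5, 2.0, size=p)
X = rng.normal(0.0, true_sd[:, None], size=(p, n))

s2 = X.var(axis=1, ddof=1)        # unstable per-variable sample variances
s2_pool = s2.mean()               # pooled "common value" target

# Fixed-weight shrinkage toward the pooled variance -- a crude stand-in
# for adaptive regularization; the weight w = 0.5 is arbitrary.
w = 0.5
s2_shrunk = w * s2_pool + (1.0 - w) * s2

mse_raw = np.mean((s2 - true_sd**2) ** 2)
mse_shrunk = np.mean((s2_shrunk - true_sd**2) ** 2)
print(f"MSE raw: {mse_raw:.2f}  MSE shrunk: {mse_shrunk:.2f}")
```

With only 3 degrees of freedom per variable, trading a little bias for a large variance reduction lowers the overall estimation error, which is what makes the regularized t-like statistics more powerful.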
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; Lu, Z; MacMahon, H
Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE’s default chest settings (Factory3) and reprocessed by varying the “Edge” and “Tissue Contrast” processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The “Edge” and “Tissue Contrast” parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for “Edge” = 4 and “Tissue Contrast” = −0.15. In general, detectability tended to decrease as “Edge” was increased and as “Tissue Contrast” was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. 
While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
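The detectability calculation can be sketched as follows, assuming white-noise backgrounds and a Gaussian blob as a stand-in for the averaged nodule signal (the study used 180 real lung background samples and measured signals):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 32  # image patch size in pixels; small for illustration

def laguerre_gauss_channels(n_ch=10, a=8.0):
    """2-D rotationally symmetric Laguerre-Gauss channel functions."""
    y, x = np.mgrid[:N, :N] - (N - 1) / 2
    r2 = x**2 + y**2
    g = 2 * np.pi * r2 / a**2
    chans = []
    for j in range(n_ch):
        # Laguerre polynomial L_j via the standard three-term recurrence
        L_prev, L = np.ones_like(g), 1 - g
        for k in range(1, j):
            L_prev, L = L, ((2 * k + 1 - g) * L - k * L_prev) / (k + 1)
        Lj = L_prev if j == 0 else L
        u = np.sqrt(2) / a * np.exp(-np.pi * r2 / a**2) * Lj
        chans.append(u.ravel())
    return np.array(chans)  # shape (n_ch, N*N)

U = laguerre_gauss_channels()

# Noise-free signal: a Gaussian blob standing in for the averaged,
# radially symmetric nodule signal.
y, x = np.mgrid[:N, :N] - (N - 1) / 2
signal = np.exp(-(x**2 + y**2) / (2 * 3.0**2)).ravel()

# Background samples: white noise here; the study used lung regions.
bg = rng.normal(0.0, 1.0, size=(180, N * N))

v_sig = U @ signal                 # channelized signal
v_bg = bg @ U.T                    # channelized backgrounds
C = np.cov(v_bg, rowvar=False)     # 10x10 channel covariance
dprime = float(np.sqrt(v_sig @ np.linalg.solve(C, v_sig)))
print(f"CHO detectability d' = {dprime:.2f}")
```

Channelizing reduces the covariance estimation problem from N² pixels to 10 channels, which is why a CHO can be estimated from only 180 background samples.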
Integration of manatee life-history data and population modeling
Eberhardt, L.L.; O'Shea, Thomas J.; Ackerman, B.B.; Percival, H. Franklin
1995-01-01
Aerial counts and the number of deaths have been a major focus of attention in attempts to understand the population status of the Florida manatee (Trichechus manatus latirostris). Uncertainties associated with these data have made interpretation difficult. However, knowledge of manatee life-history attributes has increased and now permits the development of a population model. We describe a provisional model based on the classical approach of Lotka. Parameters in the model are based on data from other papers in this volume and draw primarily on observations from the Crystal River, Blue Spring, and Atlantic Coast areas. The model estimates λ (the finite rate of increase) at each study area, and application of the delta method provides estimates of variance components and partial derivatives of λ with respect to key input parameters (reproduction, adult survival, and early survival). In some study areas, only approximations of some parameters are available. Estimates of λ and coefficients of variation (in parentheses) of manatees were 1.07 (0.009) in the Crystal River, 1.06 (0.012) at Blue Spring, and 1.01 (0.012) on the Atlantic Coast. Changing adult survival has a major effect on λ; early-age survival has the smallest effect. Bootstrap comparisons of population growth estimates from trend counts in the Crystal River and at Blue Spring with the reproduction and survival data suggest that the higher observed rates from counts are probably not due to chance. Bootstrapping for variance estimates based on reproduction and survival data from manatees at Blue Spring and in the Crystal River provided estimates of λ, adult survival, and rates of reproduction that were similar to those obtained by other methods. Our estimates are preliminary and suggest improvements for future data collection and analysis. 
However, results support efforts to reduce mortality as the most effective means to promote the increased growth necessary for the eventual recovery of the Florida manatee population.
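Lotka's classical approach reduces to solving the Euler-Lotka characteristic equation for λ. A sketch with hypothetical manatee-like vital rates (these numbers are invented, not the paper's data):

```python
import numpy as np

# Hypothetical schedules: annual adult survival s_a, first-year
# survival s_0, age at first reproduction alpha, and per-capita
# female calf production m. All values are illustrative.
s_a, s_0, alpha, m, max_age = 0.96, 0.60, 5, 0.20, 60

# Survivorship to age x and age-specific fecundity
l = np.array([s_0 * s_a ** max(x - 1, 0) for x in range(1, max_age + 1)])
mx = np.array([m if x >= alpha else 0.0 for x in range(1, max_age + 1)])

def euler_lotka(lam):
    """Characteristic equation: sum(lam**-x * l_x * m_x) - 1 = 0."""
    x = np.arange(1, max_age + 1, dtype=float)
    return np.sum(lam ** (-x) * l * mx) - 1.0

# Bisection for the finite rate of increase lambda; the function is
# strictly decreasing in lam, so the bracket [0.8, 1.3] suffices here.
lo, hi = 0.8, 1.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if euler_lotka(mid) > 0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(f"lambda = {lam:.3f}")
```

Perturbing s_a, s_0, or m and re-solving gives the partial derivatives of λ used in the delta-method variance calculation, and resampling the vital rates gives the bootstrap version.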
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
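The core idea of combining carcass counts with a density index can be sketched in a simplified form, assuming a known detection probability and a rate proportional to activity; the full mixture model estimates the detection process jointly. All numbers are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated turbines: the collision rate is proportional to an acoustic
# bat-activity index; carcass searches observe Poisson counts thinned
# by an imperfect detection probability (carcass persistence times
# searcher efficiency).
n_turbines = 2000
activity = rng.gamma(2.0, 2.0, size=n_turbines)   # density index
c_true, p_detect = 0.05, 0.4
counts = rng.poisson(c_true * activity * p_detect)

# With p_detect known, the Poisson MLE of c in rate = c * activity is
# available in closed form: c_hat = sum(counts) / (p_detect * sum(activity))
c_hat = counts.sum() / (p_detect * activity.sum())
print(f"estimated c = {c_hat:.3f} (true {c_true})")

# The fitted relation then predicts collision rates from the density
# index alone, without further carcass searches.
predicted_rate = c_hat * activity
```

This is the property the abstract highlights: once the activity-to-collision relation is calibrated, the density index alone can drive turbine-specific predictions and curtailment decisions.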
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.
2015-07-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced similar diurnal cycles of O3 dry deposition velocity (Vd(O3)) to the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM method being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions only caused an uncertainty of a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
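For reference, the neutral-stability form of the conventional AGM flux estimate (the baseline the MGM improves on) can be sketched as follows; all values are illustrative, not the Harvard Forest data:

```python
import math

# Neutral-stability aerodynamic gradient method (AGM): flux from an O3
# concentration difference between two above-canopy levels.
kappa = 0.4          # von Karman constant
u_star = 0.5         # friction velocity, m/s (assumed)
d = 20.0             # displacement height, m (assumed)
z1, z2 = 24.0, 29.0  # measurement heights, m
c1, c2 = 40.0, 41.5  # O3 in ppb; higher aloft implies a downward flux

# F = -kappa * u_star * (c2 - c1) / ln((z2 - d)/(z1 - d))
flux = -kappa * u_star * (c2 - c1) / math.log((z2 - d) / (z1 - d))
vd = -flux / c1      # deposition velocity referenced to the lower level
print(f"flux = {flux:.3f} ppb m/s, Vd = {vd * 100:.2f} cm/s")
```

The 1.5 ppb difference here is the whole information content of the method; the MGM's above/below-canopy pair enlarges that difference, which is why it is less sensitive to measurement noise (though more sensitive to in-canopy parameters such as the leaf area density profile).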
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Mapping Natech risk due to earthquakes using RAPID-N
NASA Astrophysics Data System (ADS)
Girgin, Serkan; Krausmann, Elisabeth
2013-04-01
Natural hazard-triggered technological accidents (so-called Natech accidents) at hazardous installations are an emerging risk with possibly serious consequences due to the potential for release of hazardous materials, fires or explosions. For the reduction of Natech risk, one of the highest priority needs is the identification of Natech-prone areas and the systematic assessment of Natech risks. With hardly any Natech risk maps existing within the EU, the European Commission's Joint Research Centre has developed a Natech risk analysis and mapping tool called RAPID-N, which estimates the overall risk of natural-hazard impact to industrial installations and its possible consequences. The results are presented as risk summary reports and interactive risk maps which can be used for decision making. Currently, RAPID-N focuses on Natech risk due to earthquakes at industrial installations. However, it will be extended to also analyse and map Natech risk due to floods in the near future. The RAPID-N methodology is based on the estimation of on-site natural hazard parameters, use of fragility curves to determine damage probabilities of plant units for various damage states, and the calculation of spatial extent, severity, and probability of Natech events potentially triggered by the natural hazard. The methodology was implemented as a web-based risk assessment and mapping software tool which allows easy data entry, rapid local or regional risk assessment and mapping. RAPID-N features an innovative property estimation framework to calculate on-site natural hazard parameters, industrial plant and plant unit characteristics, and hazardous substance properties. Custom damage states and fragility curves can be defined for different types of plant units. Conditional relationships can be specified between damage states and Natech risk states, which describe probable Natech event scenarios. Natech consequences are assessed using a custom implementation of U.S. 
EPA's Risk Management Program (RMP) Guidance for Offsite Consequence Analysis methodology. This custom implementation is based on the property estimation framework and allows the easy modification of model parameters and the substitution of equations with alternatives. RAPID-N can be applied at different stages of the Natech risk management process: It allows on the one hand the analysis of hypothetical Natech scenarios to prevent or prepare for a Natech accident by supporting land-use and emergency planning. On the other hand, once a natural disaster occurs RAPID-N can be used for rapidly locating facilities with potential Natech accident damage based on actual natural-hazard information. This provides a means to warn the population in the vicinity of the facilities in a timely manner. This presentation will introduce the specific features of RAPID-N and show the use of the tool by application to a case-study area.
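The fragility-curve step can be sketched with the standard lognormal form used in seismic risk analysis; the median and dispersion below are invented, not RAPID-N's values:

```python
import math

# Lognormal fragility curve: P(damage state reached | PGA) is the
# standard normal CDF of ln(PGA/median)/beta, where `median` is the
# PGA at 50% damage probability and `beta` the lognormal dispersion.
def damage_probability(pga, median=0.4, beta=0.5):
    z = math.log(pga / median) / beta
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for pga in (0.1, 0.4, 0.8):  # peak ground acceleration in g
    print(f"PGA = {pga:g} g -> P(damage) = {damage_probability(pga):.2f}")
```

Per plant unit, RAPID-N-style tools evaluate one such curve per damage state and then map damage states to Natech event scenarios via the conditional relationships described above.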
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
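Step (1) of the analysis procedure can be illustrated with a toy Jacobian, showing how a single added advective-transport observation can break near-perfect correlation between two flow parameters. The sensitivity values are invented for illustration:

```python
import numpy as np

# Rows are observations, columns are two flow parameters. The head
# sensitivities are chosen nearly proportional across columns, mimicking
# the common situation where heads alone cannot separate the parameters.
J_heads = np.array([[1.0, 0.50],
                    [2.0, 1.01],
                    [1.5, 0.76]])
J_advect = np.array([[0.8, -0.4]])   # one advective-transport observation

def param_correlation(J):
    """Correlation of the two parameter estimates, from the (unit-weight)
    linearized parameter covariance (J^T J)^-1."""
    C = np.linalg.inv(J.T @ J)
    return C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

print("heads only:        ", param_correlation(J_heads))
print("heads + transport: ", param_correlation(np.vstack([J_heads, J_advect])))
```

A correlation magnitude near 1.0 means the regression can only estimate a combination of the parameters; the transport observation, with an opposite-signed sensitivity, pulls the correlation away from 1 so that unique values become estimable.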
Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission.
Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G; Neumann, Gregory A; Smith, David E; Zuber, Maria T
2018-01-18
The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (−6.6 ± 7.2) × 10^−5. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (−1.6 ± 1.8) × 10^−5 and the Sun's gravitational oblateness, [Formula: see text] = (2.246 ± 0.022) × 10^−7. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, [Formula: see text] = (−6.13 ± 1.47) × 10^−14, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain [Formula: see text] to be <4 × 10^−14 per year.
Li, Ao; Liu, Zongzhi; Lezon-Geyda, Kimberly; Sarkar, Sudipa; Lannin, Donald; Schulz, Vincent; Krop, Ian; Winer, Eric; Harris, Lyndsay; Tuck, David
2011-01-01
There is an increasing interest in using single nucleotide polymorphism (SNP) genotyping arrays for profiling chromosomal rearrangements in tumors, as they allow simultaneous detection of copy number and loss of heterozygosity with high resolution. Critical issues such as signal baseline shift due to aneuploidy, normal cell contamination, and the presence of GC content bias have been reported to dramatically alter SNP array signals and complicate accurate identification of aberrations in cancer genomes. To address these issues, we propose a novel Global Parameter Hidden Markov Model (GPHMM) to unravel tangled genotyping data generated from tumor samples. In contrast to other HMM methods, a distinct feature of GPHMM is that the issues mentioned above are quantitatively modeled by global parameters and integrated within the statistical framework. We developed an efficient EM algorithm for parameter estimation. We evaluated performance on three data sets and show that GPHMM can correctly identify chromosomal aberrations in tumor samples containing as few as 10% cancer cells. Furthermore, we demonstrated that the estimation of global parameters in GPHMM provides information about the biological characteristics of tumor samples and the quality of genotyping signal from SNP array experiments, which is helpful for data quality control and outlier detection in cohort studies. PMID:21398628
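A stripped-down illustration of the HMM decoding underlying such copy-number callers, with a single global baseline-shift parameter. GPHMM itself also models normal-cell contamination and GC bias, handles genotype states, and estimates the global parameters by EM; here the baseline is simply given and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy HMM along a chromosome: hidden states (loss, normal, gain) emit
# Gaussian log-R-ratio-like values shifted by a global baseline b.
means = np.array([-0.5, 0.0, 0.35])
b, sigma = 0.1, 0.15
T = 300
states = np.ones(T, dtype=int)
states[100:150] = 2                   # a gain segment
obs = rng.normal(means[states] + b, sigma)

# Sticky transitions: stay with prob. 0.99, switch with prob. 0.005 each
logA = np.log(np.full((3, 3), 0.005) + np.eye(3) * 0.985)

def viterbi(y, baseline):
    # State-independent normalizing constants are dropped from logB.
    logB = -0.5 * ((y[:, None] - (means + baseline)) / sigma) ** 2
    d = logB[0].copy()
    back = np.zeros((T, 3), dtype=int)
    for t in range(1, T):
        scores = d[:, None] + logA       # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        d = scores.max(axis=0) + logB[t]
    path = [int(d.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return np.array(path[::-1])

decoded = viterbi(obs, 0.1)
accuracy = (decoded == states).mean()
print(f"accuracy with correct baseline: {accuracy:.2f}")
```

Decoding with a wrong baseline systematically mislabels states, which is exactly why aneuploidy-induced baseline shift must be estimated as a global parameter rather than ignored.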
Concordance cosmology without dark energy
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Dobos, László; Beck, Róbert; Szapudi, István; Csabai, István
2017-07-01
According to the separate universe conjecture, spherically symmetric sub-regions in an isotropic universe behave like mini-universes with their own cosmological parameters. This is an excellent approximation in both Newtonian and general relativistic theories. We estimate local expansion rates for a large number of such regions, and use a scale parameter calculated from the volume-averaged increments of local scale parameters at each time step in an otherwise standard cosmological N-body simulation. The particle mass, corresponding to a coarse graining scale, is an adjustable parameter. This mean field approximation neglects tidal forces and boundary effects, but it is the first step towards a non-perturbative statistical estimation of the effect of non-linear evolution of structure on the expansion rate. Using our algorithm, a simulation with an initial Ωm = 1 Einstein-de Sitter setting closely tracks the expansion and structure growth history of the Λ cold dark matter (ΛCDM) cosmology. Due to small but characteristic differences, our model can be distinguished from the ΛCDM model by future precision observations. Moreover, our model can resolve the emerging tension between local Hubble constant measurements and the Planck best-fitting cosmology. Further improvements to the simulation are necessary to investigate light propagation and confirm full consistency with cosmic microwave background observations.
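The averaging step can be caricatured outside an N-body code: evolve many mini-universes, each with its own Ωm, and average their scale factors. This is a deliberately crude toy under stated assumptions (matter plus curvature only, common initial expansion rate, and overdense regions simply frozen at turnaround rather than collapsed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Local Omega_m values scatter around 1 (Einstein-de Sitter on average).
n = 2000
om = np.clip(rng.normal(1.0, 0.3, size=n), 0.05, 2.0)
a_i = 1.0                       # common initial scale factor
a = np.full(n, a_i)             # per-region scale factors
a_eds = a_i                     # reference pure-EdS scale factor
dt = 1e-3
for _ in range(5000):
    # Friedmann equation (matter + curvature), H normalized to 1 at a_i;
    # the clip freezes overdense regions at turnaround (toy choice).
    H = np.sqrt(np.maximum(om * (a_i / a) ** 3 + (1 - om) * (a_i / a) ** 2, 0.0))
    a += a * H * dt
    a_eds += a_eds * np.sqrt((a_i / a_eds) ** 3) * dt

print("averaged:", a.mean(), " pure EdS:", a_eds)
```

Underdense regions behave like open mini-universes and expand faster, so the volume-averaged scale factor drifts away from the pure EdS history; the paper performs the analogous averaging self-consistently inside the simulation, with the coarse-graining scale set by the particle mass.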
Costs and benefits of direct-to-consumer advertising: the case of depression.
Block, Adam E
2007-01-01
Direct-to-consumer advertising (DTCA) is legal in the US and New Zealand, but illegal in the rest of the world. Little or no research exists on the social welfare implications of DTCA. To quantify the total costs and benefits associated with both appropriate and inappropriate care due to DTCA, for the case of depression. A cost-benefit model was developed using parameter estimates from available survey, epidemiological and experimental data. The model estimates the total benefits and costs (year 2002 values) of new appropriate and inappropriate care stimulated by DTCA for depression. Uncertainty in model parameters is addressed with sensitivity analyses. This study provides evidence that 94% of new antidepressant use due to DTCA is from non-depressed individuals. However, the average health benefit to each new depressed user is 63-fold greater than the cost per treatment, creating a positive overall social welfare effect; a net benefit of >72 million US dollars. This analysis suggests that DTCA may lead to antidepressant treatment in 15-fold as many non-depressed people as depressed people. However, the costs of treating non-depressed people may be vastly outweighed by the much larger benefit accruing to treated depressed individuals. The cost-benefit ratio can be improved through better targeting of advertisements and higher quality treatment of depression.
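The accounting behind the net-benefit figure can be sketched with placeholder numbers. Only the 94% non-depressed share and the 63-fold benefit multiple come from the abstract; the user count and per-treatment cost below are hypothetical:

```python
# Back-of-envelope DTCA cost-benefit accounting (illustrative numbers).
new_users = 1_000_000            # hypothetical new users due to DTCA
share_nondepressed = 0.94        # from the abstract
cost_per_treatment = 500         # USD per treated person, hypothetical
benefit_multiple = 63            # benefit per depressed user = 63x cost

depressed = new_users * (1 - share_nondepressed)
total_cost = new_users * cost_per_treatment              # everyone treated
total_benefit = depressed * cost_per_treatment * benefit_multiple
net = total_benefit - total_cost
print(f"net social benefit: ${net:,.0f}")
```

The arithmetic makes the abstract's point concrete: even if roughly 15 non-depressed people are treated for every depressed one, a 63-fold per-person benefit can still leave the net effect positive.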
Constraining uncertainties in water supply reliability in a tropical data scarce basin
NASA Astrophysics Data System (ADS)
Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte
2015-04-01
Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability is often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela with an area of approximately 2100 km² was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability allowing better planning of irrigated agriculture and dependable water allocation decisions.
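A Monte Carlo sketch of reliability under parameter uncertainty, using Fu's form of the Budyko curve with an uncertain shape parameter. All magnitudes and distributions are illustrative, not the case-study basin's:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1000 Monte Carlo realizations x 12 months of precipitation, plus an
# uncertain Budyko (Fu) shape parameter w drawn per realization.
P = rng.gamma(4.0, 50.0, size=(1000, 12))     # monthly precipitation, mm
PET = 120.0                                   # potential ET, mm/month
demand = 60.0                                 # water demand, mm/month
w = rng.uniform(1.5, 3.0, size=(1000, 1))     # uncertain Fu parameter

phi = PET / np.maximum(P, 1e-9)               # aridity index
# Fu's form of the Budyko curve: E/P = 1 + phi - (1 + phi**w)**(1/w)
evap_ratio = 1 + phi - (1 + phi**w) ** (1 / w)
runoff = np.maximum(P * (1 - evap_ratio), 0.0)

# Reliability per calendar month: fraction of realizations meeting demand
reliability = (runoff >= demand).mean(axis=0)
print(np.round(reliability, 2))
```

Because reliability is computed across parameter realizations rather than from one best-fit parameter, the result carries the epistemic uncertainty forward into the planning quantity itself.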
Impacts of irrigation on groundwater depletion in the North China Plain
NASA Astrophysics Data System (ADS)
Ge, Yuqi; Lei, Huimin
2017-04-01
Groundwater is an essential water supply for agriculture in the North China Plain (NCP), one of the most important food production areas in China. In the past decades, excessive groundwater-fed irrigation in this area has caused a sharp decline in the groundwater table. However, accurate monitoring of the net groundwater exploitation is still difficult, mainly due to the lack of a complete groundwater exploitation monitoring network. This hinders an accurate evaluation of the effects of agricultural management on the shallow groundwater table. In this study, we use an existing method to estimate the net irrigation amount at the county level, and evaluate the effects of current agricultural management on groundwater depletion. We apply this method in five typical counties in the NCP to estimate the annual net irrigation amount from 2002 to 2015, based on meteorological data (2002-2015) and remote sensing ET data (2002-2015). First, an agro-hydrological model (Soil-Water-Atmosphere-Plant, SWAP) is calibrated and validated at the field scale based on measured data from flux towers. Second, the model is established at the regional scale by spatial discretization. Third, we use an optimization tool (Parameter ESTimation, PEST) to optimize the irrigation parameter in SWAP so that the evapotranspiration (ET) simulated by SWAP is closest to the remote sensing ET; the irrigation amount simulated with the optimized parameter is taken as the estimated net irrigation amount. Finally, the contribution of agricultural management to the observed groundwater depletion is assessed by calculating the groundwater balance, which considers the estimated net irrigation amount, observed lateral groundwater flow, rainfall recharge, deep seepage, evaporation from phreatic water, and domestic water use. The study is expected to give a scientific basis for alleviating the over-exploitation of groundwater resources in the area.
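The final groundwater-balance step is simple bookkeeping; a sketch with invented terms (abstractions carry a negative sign, and the specific yield converts storage change to a water-table change):

```python
# Annual groundwater storage balance, all terms in mm of water
# (every number here is an illustrative placeholder).
specific_yield = 0.05
terms_mm = {
    "rainfall recharge": 90.0,
    "lateral inflow": 10.0,
    "deep seepage (irrigation return)": 25.0,
    "net irrigation pumping": -180.0,
    "domestic use": -20.0,
    "phreatic evaporation": -5.0,
}
storage_change_mm = sum(terms_mm.values())
# Convert the storage change to a water-table change via specific yield
head_change_m = storage_change_mm / 1000.0 / specific_yield
print(f"water-table change: {head_change_m:.2f} m/yr")
```

With these placeholder numbers the budget closes at a decline of 1.6 m per year, and the pumping term's dominance shows why an accurate net irrigation estimate is the key unknown.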
NASA Astrophysics Data System (ADS)
Kromskii, S. D.; Pavlenko, O. V.; Gabsatarova, I. P.
2018-03-01
Based on records from the Anapa (ANN) seismic station of 40 earthquakes (MW > 3.9) that occurred within 300 km of the station from 2002 to the present, the source parameters and the quality factor Q(f) of the Earth's crust and upper mantle are estimated for S-waves in the 1-8 Hz frequency band. Regional coda analysis techniques are employed, which allow separating the effects associated with the seismic source (source effects) from those of the propagation path (path effects). The Q-factor estimates are obtained in the form Q(f) = 90 × f^0.7 for epicentral distances r < 120 km and Q(f) = 90 × f^1.0 for r > 120 km. The established Q(f) and source parameters are close to the estimates for Central Japan, which is probably due to the similar tectonic structure of the regions. The shapes of the source spectra are found to be independent of earthquake magnitude in the range 3.9-5.6; however, the radiation of high-frequency components (f > 4-5 Hz) is enhanced with source depth (down to h ≈ 60 km). The Q(f) estimates determined from the records of the Sochi, Anapa, and Kislovodsk seismic stations allowed a more accurate determination of the seismic moments and magnitudes of Caucasian earthquakes. The studies will be continued to obtain Q(f) estimates, geometrical spreading functions, and the frequency-dependent amplification of seismic waves in the Earth's crust in other regions of the Northern Caucasus.
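The reported attenuation relations are simple enough to evaluate directly. The sketch below encodes the two distance-dependent forms of Q(f) and the standard anelastic amplitude-decay factor exp(-πfr/(Q(f)v)); the shear-wave speed is an assumed value, not taken from the abstract.

```python
import math

def q_factor(f_hz, r_km):
    """Q(f) reported for the region: 90 * f^0.7 for r < 120 km,
    90 * f^1.0 for r > 120 km."""
    return 90.0 * f_hz ** (0.7 if r_km < 120 else 1.0)

def anelastic_decay(f_hz, r_km, v_km_s=3.5):
    """Anelastic S-wave amplitude decay exp(-pi f r / (Q(f) v));
    v_km_s is an assumed shear-wave speed for illustration."""
    return math.exp(-math.pi * f_hz * r_km / (q_factor(f_hz, r_km) * v_km_s))
```

For example, the decay at 4 Hz is noticeably stronger at 200 km (where Q grows as f^1.0) than at 50 km.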
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.
2013-01-01
The occurrence of ice accretion within commercial high-bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have occurred at high altitude that were attributed to ice crystal ingestion, partial melting, and ice accretion on the compression system components, resulting in one or more of the following: degraded engine performance, engine rollback, compressor surge and stall, and combustor flameout. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool couples an engine system thermodynamic cycle code with a compressor flow analysis code and an ice particle melt code capable of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Simplifying assumptions are made about the complex physics involved in engine icing; specifically, the code does not directly estimate ice accretion and has no models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature must be near or below freezing, and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine, including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and the time spent at that operating point in the flight trajectory. The computational tool can be used to assess the susceptibility of specific turbine engines to ice accretion in an ice crystal environment.
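The two suggested accretion criteria amount to a simple per-location test. A minimal sketch, with the "near freezing" margin (here 2 °C) chosen purely for illustration:

```python
def icing_risk(wet_bulb_c, melt_ratio):
    """Evaluate the two suggested accretion criteria at one blade-row
    location: wet-bulb temperature near or below freezing AND melt ratio
    above 10%. The 2 degC 'near freezing' margin is an assumed value."""
    near_or_below_freezing = wet_bulb_c <= 2.0
    return near_or_below_freezing and melt_ratio > 0.10
```

In the actual tool this test is applied at every blade row and every operating point along the flight trajectory.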
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include: (1) a Wittrick-Williams based root-solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to estimate the kinetic parameters directly from the measured dynamic data within a unified framework, and such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to propagate spatially or temporally, resulting in biased kinetic parameters and thus limiting the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and errors in erroneously modelled regions can propagate spatially. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames can propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework.
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model exhibits sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can differ, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked to the observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model's sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model spans characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slowly varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and the estimated parameters thus make the model more consistent with the observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
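The underdetermined estimation problem described above rests on a standard linear-Gaussian maximum a posteriori (MAP) estimate. The sketch below shows only that backbone, on invented matrices with more health parameters than sensors; it does not reproduce the paper's reduced-order tuner selection procedure.

```python
import numpy as np

# Illustrative underdetermined setup: 6 health parameters, 3 sensors.
rng = np.random.default_rng(0)
n_health, n_sense = 6, 3
H = rng.standard_normal((n_sense, n_health))   # sensed-output influence (invented)
P = np.eye(n_health) * 0.04                    # prior health-parameter covariance
R = np.eye(n_sense) * 0.01                     # measurement noise covariance

x_true = rng.standard_normal(n_health) * 0.2
y = H @ x_true + rng.standard_normal(n_sense) * 0.1

# Linear-Gaussian MAP estimate: x_hat = P H^T (H P H^T + R)^(-1) y
x_hat = P @ H.T @ np.linalg.solve(H @ P @ H.T + R, y)
```

The paper's contribution is in choosing a reduced-order tuner vector so that this kind of estimate has minimal theoretical mean squared error; the equation above is just the estimator it builds on.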
Estimating the Maximum Magnitude of Induced Earthquakes With Dynamic Rupture Simulations
NASA Astrophysics Data System (ADS)
Gilmour, E.; Daub, E. G.
2017-12-01
Seismicity in Oklahoma has been sharply increasing as a result of wastewater injection. The earthquakes, thought to be induced by changes in pore pressure due to fluid injection, nucleate along existing faults. Induced earthquakes currently dominate central and eastern United States seismicity (Keranen et al. 2016). Because induced earthquakes have been occurring in the central US for only a short time, too few have been observed in this region to know their maximum magnitude, and this lack of knowledge means that large uncertainties exist in the seismic hazard for the central United States. While induced earthquakes follow the Gutenberg-Richter relation (van der Elst et al. 2016), it is unclear whether there are limits to their magnitudes. An estimate of the maximum magnitude of induced earthquakes is therefore crucial for understanding their impact on seismic hazard. While other estimates of the maximum magnitude exist, they are observational or statistical and cannot account for the possibility of larger events that have not yet been observed. Here, we take a physical approach to studying the maximum magnitude based on dynamic rupture simulations. We run a suite of two-dimensional rupture simulations to physically determine how ruptures propagate, using the known principal stress orientation and rupture locations. We vary the remaining unknown parameters of the rupture simulations to obtain a large number of results reflecting different possible parameter sets, and use these results to train a neural network as a surrogate for the rupture simulations. Using a Markov chain Monte Carlo method to explore different combinations of parameters, the trained neural network is then used to create synthetic magnitude-frequency distributions for comparison with the real earthquake catalog. This method allows us to find sets of parameters that are consistent with the earthquakes observed in Oklahoma and to determine which parameters affect rupture propagation. Our results show that the stress orientation and magnitude, pore pressure, and friction properties combine to determine the final magnitude of the simulated event.
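The catalog-matching step relies on Markov chain Monte Carlo. As a much-reduced sketch of that machinery, the example below fits a Gutenberg-Richter b-value to a synthetic catalog with a random-walk Metropolis sampler; the paper's actual MCMC explores rupture-simulation parameters through a neural-network surrogate, and all numbers here are invented.

```python
import math, random

random.seed(1)
m_c = 3.0                        # completeness magnitude (assumed)
beta_true = math.log(10)         # corresponds to b = 1
catalog = [m_c + random.expovariate(beta_true) for _ in range(2000)]

n = len(catalog)
s = sum(m - m_c for m in catalog)

def log_like(b):
    """Exponential (Gutenberg-Richter) log-likelihood above m_c."""
    beta = b * math.log(10)
    return float("-inf") if beta <= 0 else n * math.log(beta) - beta * s

b, ll, samples = 0.7, log_like(0.7), []
for _ in range(3000):
    prop = b + random.gauss(0.0, 0.05)       # random-walk proposal
    ll_prop = log_like(prop)
    if math.log(random.random()) < ll_prop - ll:  # Metropolis accept/reject
        b, ll = prop, ll_prop
    samples.append(b)

b_post = sum(samples[1000:]) / len(samples[1000:])  # posterior mean after burn-in
```

The recovered b-value clusters near the true value of 1, illustrating how MCMC can match a synthetic magnitude-frequency model to an observed catalog.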
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ0, from measurements made with wide-beam antennas. The use of wide-beam antennas yields an inaccurate estimate of σ0 if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure is proposed that estimates the error introduced by the narrow-beam approximation and uses it to obtain a more accurate estimate of σ0. An exponential model is assumed for the variation of σ0 with incidence angle, and the model parameters are estimated from the measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ0 obtained with wide-beam antennas and to be insensitive to the assumed σ0 model.
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective in reestablishing ventricular function without blood contact. Due to energy loss within the driveline of the direct cardiac compression device, an accurate value of the assist pressure acting on the heart surface must be acquired. To avoid the myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed, and an experimental device is designed to measure the sample data for fitting the estimation models. Judging the goodness of fit numerically and graphically, the polynomial model behaves best among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped-parameter model is utilized to calculate the pre-support and post-support left ventricular pressure. Furthermore, when the driving pressure is adjusted beyond the range of the sample data, the estimated assist pressure retains a similar waveform and the post-support left ventricular pressure approaches the value of a healthy adult heart, indicating the good generalization ability of the noninvasive estimation method.
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
Stress Rupture Life Reliability Measures for Composite Overwrapped Pressure Vessels
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Thesken, John C.; Phoenix, S. Leigh; Grimes-Ledesma, Lorie
2007-01-01
Composite Overwrapped Pressure Vessels (COPVs) are often used for storing pressurant gases onboard spacecraft. Kevlar (DuPont), glass, carbon, and other more recent fibers have all been used as overwraps. Because the overwraps are subjected to sustained loads for an extended period during a mission, stress rupture failure is a major concern, and it is therefore important to ascertain the reliability of these vessels by analysis, since the testing of each flight design cannot be completed on a practical time scale. The present paper examines a Weibull-statistics-based stress rupture model and considers the various uncertainties associated with the model parameters. The paper also examines several reliability estimate measures that would be of use for recertification and for qualifying the flight worthiness of these vessels. Specifically, deterministic values for a point estimate, a mean estimate, and 90/95 percent confidence estimates of the reliability are all examined for a typical flight-quality vessel under constant stress. The mean and 90/95 percent confidence estimates are computed using Monte Carlo simulation, assuming distribution statistics of the model parameters based on simulation and on the available data, especially the sample sizes represented in the data. The data for the stress rupture model are obtained from the Lawrence Livermore National Laboratory (LLNL) stress rupture testing program, carried out over the past 35 years. Deterministic as well as probabilistic sensitivities are examined.
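The reliability measures discussed can be illustrated with a toy Weibull lifetime model: parameter uncertainty is sampled by Monte Carlo, and reliability at a mission time is summarized by a point estimate, a mean, and an approximate lower confidence bound. The distributions and constants below are invented for illustration, not the LLNL-derived ones.

```python
import math, random

random.seed(0)
t_mission = 10.0   # years at sustained stress (invented)

def reliability(t, shape, scale):
    """Weibull survival probability at time t."""
    return math.exp(-((t / scale) ** shape))

# Point estimate at nominal (invented) parameter values.
point = reliability(t_mission, shape=1.2, scale=80.0)

# Monte Carlo over parameter uncertainty (invented normal distributions).
draws = sorted(
    reliability(t_mission,
                shape=random.gauss(1.2, 0.1),
                scale=random.gauss(80.0, 10.0))
    for _ in range(5000)
)
mean_rel = sum(draws) / len(draws)
lower_95 = draws[int(0.05 * len(draws))]   # approximate 95% lower bound
```

The real analysis additionally conditions the parameter distributions on the stress-rupture test data and sample sizes, which is what drives the 90/95 confidence estimates in the paper.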
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function, so the same-shaped symmetric sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with two available methods. Comparison with conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
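The fixed-slope idea can be sketched with a toy three-alternative forced-choice fit: the guessing rate is 1/3, the slope is assumed to have come from the pooled group fit, and each judge is fitted on the threshold (intercept) alone by a one-dimensional search. Data, slope, and grid are all illustrative, not from the study.

```python
import math

def p_correct(conc, thresh, slope):
    """3-AFC psychometric function with guessing rate 1/3."""
    return 1/3 + (2/3) / (1 + math.exp(-slope * (conc - thresh)))

def nll(data, thresh, slope):
    """Binomial negative log-likelihood of (concentration, trials, correct)."""
    return -sum(
        k * math.log(p_correct(x, thresh, slope))
        + (n - k) * math.log(1 - p_correct(x, thresh, slope))
        for x, n, k in data
    )

# One judge's sparse data, as in ASTM E679: (concentration step, trials, correct)
judge = [(1, 3, 1), (2, 3, 2), (3, 3, 3), (4, 3, 3)]
group_slope = 2.0   # assumed to come from the pooled group fit

# With the slope fixed, fitting the judge reduces to a 1-D threshold search.
thresholds = [i * 0.01 for i in range(0, 501)]
best = min(thresholds, key=lambda t: nll(judge, t, group_slope))
```

Fixing the slope is what makes the fit feasible with so few replications: only one free parameter remains per judge.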
Estimating the cost-effectiveness of vaccination against herpes zoster in England and Wales.
van Hoek, A J; Gay, N; Melegaro, A; Opstelten, W; Edmunds, W J
2009-02-25
A live-attenuated vaccine against herpes zoster (HZ) has been approved for use, on the basis of a large-scale clinical trial suggesting that the vaccine is safe and efficacious. This study uses a Markov cohort model to estimate whether routine vaccination of the elderly (60+) would be cost-effective compared with other uses of health care resources. Vaccine efficacy parameters are estimated by fitting a model to clinical trial data. Estimates of QALY losses due to acute HZ and post-herpetic neuralgia (PHN) were derived by fitting models to data on the duration of pain by severity and the quality-of-life detriment associated with different severity categories, as reported in a number of different studies. Other parameters (such as cost and incidence estimates) were based on the literature or UK data sources. The results suggest that vaccination of 65 year olds is likely to be cost-effective (base-case ICER = £20,400 per QALY gained). If the vaccine also offers protection against the severity of disease or the likelihood of developing PHN (as suggested by the clinical trial), then vaccination of all elderly age groups is highly likely to be deemed cost-effective. Vaccination at either 65 or 70 years (depending on assumptions about the vaccine action) is most cost-effective. Including a booster dose at a later age is unlikely to be cost-effective.
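The core cost-effectiveness arithmetic reduces to an incremental cost per QALY gained. A stylized sketch with entirely invented numbers (not the study's parameters or results):

```python
# Stylized ICER arithmetic for a vaccinate-vs-don't comparison.
cohort = 100_000
hz_risk, vacc_efficacy = 0.10, 0.55       # lifetime HZ risk, assumed efficacy
qaly_loss_per_case = 0.15                 # assumed QALY loss per HZ case
cost_per_case = 300.0                     # assumed treatment cost per case
vacc_cost = 90.0                          # assumed cost per vaccinee

cases_no_vacc = cohort * hz_risk
cases_vacc = cohort * hz_risk * (1 - vacc_efficacy)
cases_averted = cases_no_vacc - cases_vacc

# Incremental cost = vaccination spend minus treatment costs avoided.
delta_cost = cohort * vacc_cost - cases_averted * cost_per_case
delta_qaly = cases_averted * qaly_loss_per_case
icer = delta_cost / delta_qaly            # cost per QALY gained
```

The study's Markov cohort model does this accounting over yearly cycles with age-specific incidence, waning efficacy, and discounting, but the ICER at the end is the same ratio of incremental cost to incremental QALYs.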
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate and then dividing the modal PSD into separate regions, the left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm, and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectrum of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm, and together they were employed in a series of stages to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios of a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and its estimates were found to be more accurate and more reliable, even for modes whose PSDs were distorted or altered by driving frequencies.
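The side-spectrum curve-fitting idea can be illustrated on an analytical single-mode PSD: damping is re-estimated by least squares over the right side (f ≥ fn) only, where in practice a driving-frequency spike (omitted here) would otherwise bias a full-spectrum fit. The natural frequency, damping ratio, grids, and the simple grid search all stand in for the paper's pattern-search optimization.

```python
import math

fn, zeta_true = 2.0, 0.02   # natural frequency (Hz) and damping ratio (invented)

def modal_psd(f, zeta):
    """Analytical SDOF modal PSD shape (unit forcing)."""
    wn, w = 2 * math.pi * fn, 2 * math.pi * f
    return 1.0 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)

# "Measured" right-side spectrum only: frequencies at and above fn.
freqs = [fn + i * 0.002 for i in range(200)]
target = [modal_psd(f, zeta_true) for f in freqs]

def sse(zeta):
    """Sum of squared errors between candidate fit and side spectrum."""
    return sum((modal_psd(f, zeta) - t) ** 2 for f, t in zip(freqs, target))

# Grid search over candidate damping ratios (stand-in for pattern search).
cands = [0.005 + i * 0.001 for i in range(50)]
zeta_hat = min(cands, key=sse)
```

Restricting the fit to one side of the resonance is exactly what lets the method discard the half of the spectrum contaminated by a driving frequency.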
The economic impact of peste des petits ruminants in India.
Bardhan, D; Kumar, S; Anandsekaran, G; Chaudhury, J K; Meraj, M; Singh, R K; Verma, M R; Kumar, D; Kumar P T, N; Ahmed Lone, S; Mishra, V; Mohanty, B S; Korade, N; De, U K
2017-04-01
Peste des petits ruminants (PPR) is an economically important livestock disease which affects a vast section of the small ruminant population in India. However, data on the incidence of PPR are limited, and scant literature is available on the economic losses caused by the disease. In the present study, a structured sampling design was adopted, covering the major agro-climatic regions of the country, to ascertain the morbidity and mortality rates of PPR. Available estimates of the economic losses in India due to various livestock diseases are based on single values of various epidemiological and economic parameters; here, stochastic modelling was used to estimate the economic impact of PPR. The overall annual morbidity and mortality rates of PPR for small ruminants in India are estimated from the sample as 8% and 3.45%, respectively. The authors have analysed variations in these rates across species, age group, sex, season and region. The expected annual economic loss due to PPR in India ranges from as little as US $2 million to $18 million and may go up to US $1.5 billion; the most likely range of expected economic losses is between US $653 million and $669 million. This study thus reveals significant losses due to the incidence of PPR in small ruminants in India.
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings
NASA Astrophysics Data System (ADS)
Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.
2014-12-01
River discharge is a crucial variable for hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, the discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty, and to propagate this uncertainty to discharge time series. The three main steps of this approach are: (1) hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation, and specification of prior distributions for the rating curve parameters; (2) rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of the available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, or sudden changes in the geometry of the section).
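Step (2) can be sketched in its simplest non-Bayesian form: a one-control power-law rating curve Q = a(h − b)^c fitted to a few gaugings by log-linear least squares, with the cease-to-flow stage b fixed from hydraulic analysis. The gaugings and b are invented, and the full method additionally infers posterior distributions and propagates gauging and stage uncertainty.

```python
import math

# Invented gaugings: (stage in m, discharge in m3/s)
gaugings = [(0.8, 3.96), (1.2, 10.14), (1.7, 20.56), (2.3, 36.38)]
b = 0.3   # cease-to-flow stage, assumed fixed from hydraulic analysis

# Log-linear least squares: ln Q = ln a + c * ln(h - b)
xs = [math.log(h - b) for h, _ in gaugings]
ys = [math.log(q) for _, q in gaugings]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
c = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - c * xbar)

def rating(h):
    """Fitted rating curve: discharge as a function of stage."""
    return a * (h - b) ** c
```

The Bayesian version replaces the least-squares fit with posterior inference over (a, b, c) under hydraulically informed priors, which is what yields credible intervals on the discharge series.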
Estimating Soil Moisture Using Polsar Data: a Machine Learning Approach
NASA Astrophysics Data System (ADS)
Khedri, E.; Hasanlou, M.; Tabatabaeenejad, A.
2017-09-01
Soil moisture is an important parameter that affects several environmental processes and plays an important role in numerous fields, including agriculture, hydrology, aerology, flood prediction, and drought assessment. However, field procedures for measuring soil moisture are not feasible over vast agricultural territories, owing to the difficulty and high cost of such measurements and to the spatial and local variability of soil moisture. Polarimetric synthetic aperture radar (PolSAR) imaging is a powerful tool for estimating soil moisture, providing a wide field of view and high spatial resolution. In this study, a support vector regression (SVR) model is proposed for estimating soil moisture from AIRSAR data acquired in 2003 in the C, L, and P channels. Sequential forward selection (SFS) and sequential backward selection (SBS) are evaluated for selecting suitable features of the polarimetric image dataset for efficient modelling, and the results are compared with in-situ data. The output results show that the SBS-SVR method achieves higher modelling accuracy than the SFS-SVR model, with an R2 of 97% and an RMSE below 0.00041 m3/m3 for the P, L, and C channels, better than the other feature selection algorithms.
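A sequential-backward-selection wrapper around an SVR regressor can be sketched with scikit-learn on synthetic data; the real study uses AIRSAR C/L/P-band polarimetric features with in-situ moisture measurements, not this toy set, and its SBS/SVR settings are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic "polarimetric channel" features: only the first two carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 5))
y = 25.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * rng.standard_normal(120)

def cv_r2(features):
    """Cross-validated R^2 of an SVR trained on the given feature subset."""
    return cross_val_score(SVR(C=100.0), X[:, features], y,
                           cv=4, scoring="r2").mean()

# Sequential backward selection: greedily drop the feature whose removal
# helps (or at least does not hurt) the cross-validated score.
selected = list(range(X.shape[1]))
while len(selected) > 1:
    base = cv_r2(selected)
    drops = {f: cv_r2([g for g in selected if g != f]) for f in selected}
    best_drop, best_score = max(drops.items(), key=lambda kv: kv[1])
    if best_score >= base:
        selected.remove(best_drop)
    else:
        break
```

Backward selection starts from the full feature set, which is one plausible reason it can outperform forward selection when informative features interact.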
Estimating network effect in geocenter motion: Applications
NASA Astrophysics Data System (ADS)
Zannat, Umma Jamila; Tregoning, Paul
2017-10-01
The network effect is the error associated with the subsampling of the Earth's surface by space geodetic networks. It is an obstacle to the precise measurement of geocenter motion, that is, the relative motion between the center of mass of the Earth system and the center of figure of the Earth's surface. In a complementary paper, we proposed a theoretical approach to estimate the magnitude of this effect from the displacement fields predicted by geophysical models. Here we evaluate the effectiveness of our estimate for two illustrative physical processes: coseismic displacements, which induce instantaneous changes in the Helmert parameters, and elastic deformation due to surface water movements, which causes secular drifts in those parameters. For the first, we consider simplified models of the 2004 Sumatra-Andaman and the 2011 Tōhoku-Oki earthquakes, and for the second, we use observations from the Gravity Recovery and Climate Experiment, complemented by an ocean model. In both case studies, we find that the magnitude of the network effect, even for a large global network, is often as large as the magnitude of the changes in the Helmert parameters themselves. However, we also show that our proposed modification of the center-of-network frame, which weights stations in proportion to the area of the Earth's surface they represent, can significantly reduce the network effect in most cases.
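The area-weighted center-of-network idea can be sketched as a weighted mean of station displacement vectors (a schematic of the translational part only, with hypothetical names; the paper's full Helmert-parameter estimation also involves rotations and scale):

```python
import numpy as np

def network_translation(displacements, weights=None):
    """Translation of the center-of-network frame: the (weighted) mean of
    station displacement vectors, shape (n_stations, 3). Uniform weights
    give the ordinary network mean; weights proportional to the surface
    area each station represents approximate the center-of-figure
    translation and reduce the network effect."""
    d = np.asarray(displacements, dtype=float)
    w = np.ones(len(d)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize so the result is a mean
    return w @ d
```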
Rodríguez, Javier; Navallas, Javier; Gila, Luis; Dimitrova, Nonna Alexandrovna; Malanda, Armando
2011-04-30
In situ recording of the intracellular action potential (IAP) of human muscle fibres is not yet possible, and consequently, knowledge concerning certain IAP characteristics is still limited. According to the core-conductor theory, close to a fibre, a single fibre action potential (SFAP) can be assumed to be proportional to the second derivative of the IAP. Thus, we might expect to be able to derive some characteristics of the IAP, such as the duration of its spike, from the SFAP waveform. However, SFAP properties depend not only on the IAP shape but also on the fibre-to-electrode (radial) distance and other physiological properties of the fibre. In this paper we first propose an SFAP parameter (the negative phase duration, NPD) appropriate for estimating the IAP spike duration and, second, show that this parameter is largely independent of changes in radial distance and muscle fibre propagation velocity. Estimation of the IAP spike duration from a direct measurement taken from the SFAP waveform provides a possible way to enhance the accuracy of SFAP models. Because IAP spike duration is known to be sensitive to the effects of fatigue and calcium accumulation, the proposed SFAP parameter, the NPD, has potential value in electrodiagnosis and as an indicator of IAP profile changes due to peripheral fatigue. Copyright © 2011 Elsevier B.V. All rights reserved.
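A direct measurement of a negative phase duration from a sampled waveform might look like the sketch below, which takes the longest contiguous sub-baseline run as the negative phase (the authors' exact baseline and phase definition may differ):

```python
import numpy as np

def negative_phase_duration(sfap, dt):
    """Duration (in the units of dt) of the longest contiguous run of
    samples below the zero baseline in an SFAP waveform."""
    below = np.asarray(sfap) < 0.0
    best = run = 0
    for b in below:
        run = run + 1 if b else 0      # extend or reset the current run
        best = max(best, run)
    return best * dt
```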
Ha, Hojin; Hwang, Dongha; Kim, Guk Bae; Kweon, Jihoon; Lee, Sang Joon; Baek, Jehyun; Kim, Young-Hak; Kim, Namkug; Yang, Dong Hyun
2016-07-01
Quantifying turbulence velocity fluctuation is important because it indicates the fluid energy dissipation of the blood flow, which is closely related to the pressure drop along the blood vessel. This study aims to evaluate the effects of scan parameters and the target vessel size of 4D phase-contrast (PC)-MRI on quantification of turbulent kinetic energy (TKE). Comprehensive 4D PC-MRI measurements with various velocity-encoding (VENC), echo time (TE), and voxel size values were carried out to estimate TKE distribution in stenotic flow. The total TKE (TKEsum), maximum TKE (TKEmax), and background noise level (TKEnoise) were compared for each scan parameter. The feasibility of TKE estimation in small vessels was also investigated. Results show that the optimum VENC for stenotic flow with a peak velocity of 125 cm/s was 70 cm/s. Higher VENC values overestimated the TKEsum by up to six-fold due to increased TKEnoise, whereas lower VENC values (30 cm/s) underestimated it by 57.1%. TE and voxel size did not significantly influence the TKEsum and TKEnoise, although the TKEmax significantly increased as the voxel size increased. TKE quantification in small vessels (3-5 mm diameter) was feasible unless high-velocity turbulence caused severe phase dispersion in the reference image. Copyright © 2016 Elsevier Inc. All rights reserved.
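A common way to obtain TKE from PC-MRI, and a plausible reading of the quantities compared above, is the magnitude-ratio estimator of the intravoxel velocity standard deviation; the sketch below follows that standard formulation (the study's exact pipeline may differ, and the numbers here are illustrative):

```python
import numpy as np

def intravoxel_sigma(mag_ratio, venc):
    """Intravoxel velocity standard deviation (m/s) from the magnitude
    ratio |S(kv)| / |S(0)| of a velocity-encoded segment to a reference
    segment, assuming a Gaussian intravoxel velocity distribution."""
    kv = np.pi / venc                  # velocity-encoding sensitivity (s/m)
    return np.sqrt(2.0 * np.log(1.0 / mag_ratio)) / kv

def tke_per_volume(sigmas, rho=1060.0):
    """TKE per unit volume (J/m^3) from the three directional intravoxel
    standard deviations; rho is blood density in kg/m^3."""
    s = np.asarray(sigmas, dtype=float)
    return 0.5 * rho * np.sum(s ** 2)
```

The VENC trade-off in the abstract is visible here: a VENC far above the fluctuation scale makes `mag_ratio` close to 1, so noise in the magnitudes dominates the logarithm.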
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in the process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to a history of sufficient length in the past. Apart from the inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks to dynamically adapt the parameters of the stochastic model. Hence we obtain a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The backpropagation used in training the weights is limited to a certain memory length (in the examples we consider, 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
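To make the idea of continuously adapted parameters concrete, here is a deliberately simple stand-in for the paper's neural calibrator: per-sample gradient updates of the drift and log-volatility of a Gaussian log-return model (all names, rates, and the model itself are illustrative, not the authors' network):

```python
import numpy as np

def online_calibrate(log_returns, lr=0.05, mu0=0.0, sigma0=0.02):
    """Online calibration of a Gaussian log-return model: after each
    observation r, step the drift mu and the log-volatility down the
    per-sample negative log-likelihood gradient. The learning rate acts
    as a finite memory, loosely analogous to the paper's 10-day window."""
    mu, log_s = mu0, np.log(sigma0)
    path = np.empty((len(log_returns), 2))
    for i, r in enumerate(log_returns):
        s2 = np.exp(2.0 * log_s)
        e = r - mu                             # residual before updating
        mu += lr * e                           # drift tracks recent returns
        log_s -= lr * (1.0 - e ** 2 / s2)      # NLL gradient w.r.t. log sigma
        path[i] = (mu, np.exp(log_s))
    return path
```

The result is a stochastic process whose parameters are themselves time series, which is the structural point of the paper; the neural network replaces these fixed gradient rules with learned update dynamics.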
Flood characteristics of urban watersheds in the United States
Sauer, Vernon B.; Thomas, W.O.; Stricker, V.A.; Wilson, K.V.
1983-01-01
A nationwide study of flood magnitude and frequency in urban areas was made for the purpose of reviewing available literature, compiling an urban flood data base, and developing methods of estimating urban floodflow characteristics in ungaged areas. The literature review contains synopses of 128 recent publications related to urban floodflow. A data base of 269 gaged basins in 56 cities and 31 States, including Hawaii, contains a wide variety of topographic and climatic characteristics, land-use variables, indices of urbanization, and flood-frequency estimates. Three sets of regression equations were developed to estimate flood discharges for ungaged sites for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years. Two sets of regression equations are based on seven independent parameters and the third is based on three independent parameters. The only difference in the two sets of seven-parameter equations is the use of basin lag time in one and lake and reservoir storage in the other. Of primary importance in these equations is an independent estimate of the equivalent rural discharge for the ungaged basin. The equations adjust the equivalent rural discharge to an urban condition. The primary adjustment factor, or index of urbanization, is the basin development factor, a measure of the extent of development of the drainage system in the basin. This measure includes evaluations of storm drains (sewers), channel improvements, and curb-and-gutter streets. The basin development factor is statistically very significant and offers a simple and effective way of accounting for drainage development and runoff response in urban areas. Percentage of impervious area is also included in the seven-parameter equations as an additional measure of urbanization and apparently accounts for increased runoff volumes. 
This factor is not highly significant for large floods, which supports the generally held concept that imperviousness is not a dominant factor when soils become more saturated during large storms. Other parameters in the seven-parameter equations include drainage area size, channel slope, rainfall intensity, lake and reservoir storage, and basin lag time. These factors are all statistically significant and provide logical indices of basin conditions. The three-parameter equations include only the three most significant parameters: rural discharge, basin-development factor, and drainage area size. All three sets of regression equations provide unbiased estimates of urban flood frequency. The seven-parameter regression equations without basin lag time have average standard errors of regression varying from ±37 percent for the 5-year flood to ±44 percent for the 100-year flood and ±49 percent for the 500-year flood. The other two sets of regression equations have similar accuracy. Several tests for bias, sensitivity, and hydrologic consistency are included which support the conclusion that the equations are useful throughout the United States. All estimating equations were developed from data collected on drainage basins where temporary in-channel storage, due to highway embankments, was not significant. Consequently, estimates made with these equations do not account for the reducing effect of this temporary detention storage.
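The structure of the three-parameter adjustment described above can be sketched as follows; the functional form mirrors the report's description (equivalent rural discharge adjusted by basin development factor and drainage area), but the coefficients below are placeholders, not the published regression values, which differ by recurrence interval:

```python
def urban_peak_discharge(rq, bdf, area, a=0.5, b=0.29, c=0.15, d=0.76):
    """Illustrative three-parameter urban adjustment of the form
    UQ = a * area^b * (13 - BDF)^-c * RQ^d, where RQ is the equivalent
    rural discharge, BDF the basin development factor (0-12 scale), and
    area the drainage area. Coefficients a-d are placeholders only."""
    return a * area ** b * (13.0 - bdf) ** (-c) * rq ** d
```

The `(13 - BDF)` term captures the key behavior: a fully developed basin (BDF near 12) yields a larger peak discharge than an undeveloped one (BDF of 0) for the same rural discharge.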
Simulated performance of an order statistic threshold strategy for detection of narrowband signals
NASA Technical Reports Server (NTRS)
Satorius, E.; Brady, R.; Deich, W.; Gulkis, S.; Olsen, E.
1988-01-01
The application of order statistics to signal detection is becoming an increasingly active area of research. This is due to the inherent robustness of rank estimators in the presence of large outliers that would significantly degrade more conventional mean-level-based detection systems. A detection strategy is presented in which the threshold estimate is obtained using order statistics. The performance of this algorithm in the presence of simulated interference and broadband noise is evaluated. In this way, the robustness of the proposed strategy in the presence of the interference can be fully assessed as a function of the interference, noise, and detector parameters.
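A minimal sketch of an order-statistic threshold, consistent with (but not necessarily identical to) the strategy described above, with illustrative parameter values:

```python
import numpy as np

def os_threshold(window, k_frac=0.75, scale=2.0):
    """Detection threshold from an order statistic of a reference window:
    the k-th smallest sample estimates the noise level and is scaled to
    set the false-alarm rate. A few large outliers (strong interferers)
    land above the k-th order statistic, so they barely move the
    threshold, unlike a mean-level estimate."""
    w = np.sort(np.asarray(window, dtype=float))
    k = int(k_frac * (len(w) - 1))
    return scale * w[k]
```

With one huge interferer in a window of unit-level noise, this threshold is unchanged, whereas a mean-level threshold is pulled up by an order of magnitude, masking weak signals.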
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.
2017-12-01
Precise estimation of groundwater level fluctuation is studied by considering delayed recharge flux (DRF) and unsaturated zone drainage (UZD). Both DRF and UZD arise from gravitational flow impeded in the unsaturated zone, which may non-negligibly affect groundwater level changes. In the validation, a previous model that does not consider unsaturated flow is benchmarked, with the observed groundwater level and precipitation data divided into three periods based on the climatic condition. The estimation capability of the new model is superior to that of the benchmarked model, as indicated by a significantly improved representation of the groundwater level with physically interpretable model parameters.
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
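The sigma-point propagation step shared by the UKF and the unscented information filter can be sketched generically as a scaled unscented transform (this is textbook machinery with standard parameter names, not the authors' tractography implementation; the information-filter variant then carries the inverse covariance):

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    2n+1 deterministically chosen sigma points with scaled weights.
    Returns the transformed mean and covariance."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)    # matrix square root columns
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)  # covariance weight correction
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear map the transform is exact, which makes a convenient sanity check; the filters apply it in turn to the state-transition and measurement models of the fiber.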