A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and, ultimately, to facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time-series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher-quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to the modeling of gene circuits, our results suggest that approaches tailored to exploit domain-specific information may be key to reverse engineering complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
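The decomposition idea lends itself to a compact illustration. The sketch below is a simplification under assumed Hill-type kinetics with hypothetical data and parameter names (not the authors' released software): it decouples a two-gene system by replacing the regulator's trajectory with an interpolant of its measured mean, so each gene's rate equation can be integrated and fitted independently.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d
from scipy.optimize import least_squares

# Measured mean trajectories of two gene products (stand-ins for data).
t_obs = np.linspace(0, 10, 21)
x1_obs = 2.0 * (1 - np.exp(-0.5 * t_obs))
x2_obs = 1.5 * (1 - np.exp(-0.3 * t_obs))

# Interpolant for the regulator: this is what decouples the equations.
x2_of_t = interp1d(t_obs, x2_obs, kind="cubic", fill_value="extrapolate")

def rhs_gene1(t, x1, k_syn, K, k_deg):
    """Rate equation for gene 1 alone: activation by gene 2 (Hill n=2)."""
    x2 = x2_of_t(t)
    return k_syn * x2**2 / (K**2 + x2**2) - k_deg * x1

def residuals(theta):
    k_syn, K, k_deg = theta
    sol = solve_ivp(rhs_gene1, (t_obs[0], t_obs[-1]), [x1_obs[0]],
                    t_eval=t_obs, args=(k_syn, K, k_deg), rtol=1e-8)
    return sol.y[0] - x1_obs

fit = least_squares(residuals, x0=[1.0, 1.0, 0.5], bounds=(1e-6, np.inf))
print("estimated (k_syn, K, k_deg):", fit.x)
```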
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
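The core of the tuner idea can be sketched in a few lines. The following is a simplified, steady-state stand-in for the Kalman filter formulation: an underdetermined sensor-influence matrix is assumed, the tuner basis is taken from an SVD (the paper's optimal tuner selection method may differ), and all matrices and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_health, n_sensors = 10, 6                 # more unknowns than measurements
H = rng.normal(size=(n_sensors, n_health))  # sensor influence matrix (stand-in)

# True degradation/fault state and noisy sensed deltas.
h_true = rng.normal(scale=0.5, size=n_health)
y = H @ h_true + rng.normal(scale=0.01, size=n_sensors)

# Tuner basis: dominant right-singular vectors of H, sized to the sensor
# count, so the tuner vector q is estimable even though h itself is not.
U, s, Vt = np.linalg.svd(H)
V = Vt[:n_sensors].T                        # (n_health x n_sensors) transform

# Estimate the tuners, then map back to the larger health-parameter vector.
q_hat, *_ = np.linalg.lstsq(H @ V, y, rcond=None)
h_hat = V @ q_hat
print("health-parameter estimate:", np.round(h_hat, 3))
```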
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability and recruitment, as well as the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than their unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
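A heavily simplified sketch of the joint idea follows: a shared survival parameter links a binomial known-fate likelihood to a count likelihood. Note that this uses a moment approximation of the open N-mixture arm (expected abundance propagated through survival and constant recruitment) rather than the authors' full latent-abundance model, and all data and names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson
from scipy.special import expit

# Hypothetical data: known-fate (radio-collared) animals and annual counts.
alive_start = np.array([20, 18, 15, 14])   # collared animals entering each year
alive_end   = np.array([16, 14, 12, 11])   # still alive at year end
counts      = np.array([52, 49, 47, 50])   # annual pack counts

def negloglik(theta):
    logit_phi, log_gamma, log_lam0 = theta
    phi, gamma, lam = expit(logit_phi), np.exp(log_gamma), np.exp(log_lam0)
    # Known-fate arm: binomial survival of marked individuals.
    nll = -binom.logpmf(alive_end, alive_start, phi).sum()
    # Count arm (moment simplification of the open N-mixture model):
    # expected abundance follows survival plus constant recruitment.
    for y in counts:
        nll -= poisson.logpmf(y, lam)
        lam = lam * phi + gamma            # shared survival links both arms
    return nll

fit = minimize(negloglik, x0=[0.0, np.log(5.0), np.log(50.0)],
               method="Nelder-Mead")
print("shared survival estimate:", expit(fit.x[0]))
```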
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts, together with the intracellular and extracellular volume fractions, from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2017-01-01
This article provides a detailed description of three factors (specification of the ability distribution, numerical integration, and frame of reference for the item parameter estimates) that might affect the item parameter estimation of the three-parameter logistic model, and compares five item calibration methods, which are combinations of the…
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe battery dynamic characteristics. Secondly, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model, it contains inevitable model errors; a proportional-integral-based error adjustment technique is therefore employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
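A minimal sketch of the EKF portion is given below, assuming a first-order RC equivalent-circuit model with illustrative parameter values and a linear stand-in for the OCV curve; the RLS initialization and the proportional-integral error adjustment described above are omitted.

```python
import numpy as np

# First-order RC equivalent-circuit model; parameter values are illustrative.
dt = 1.0                                  # s
Q_cap = 2.0 * 3600.0                      # capacity: 2 Ah in coulombs
R0, R1, C1 = 0.05, 0.03, 1000.0           # ohm, ohm, farad
ocv = lambda soc: 3.0 + 1.2 * soc         # linear stand-in for the OCV curve
docv_dsoc = 1.2

def ekf_soc(current, v_meas):
    """EKF over x = [SOC, V1] with terminal voltage v = OCV(SOC) - V1 - R0*i
    (discharge current positive)."""
    x = np.array([0.5, 0.0])              # deliberately wrong initial SOC
    P = np.diag([0.1, 1e-2])
    Qw = np.diag([1e-10, 1e-8])           # process noise
    Rv = 1e-4                             # voltage measurement noise variance
    a = np.exp(-dt / (R1 * C1))
    F = np.array([[1.0, 0.0], [0.0, a]])
    soc_hist = []
    for i, v in zip(current, v_meas):
        # Predict: coulomb counting plus RC-branch relaxation.
        x = np.array([x[0] - dt * i / Q_cap, a * x[1] + R1 * (1 - a) * i])
        P = F @ P @ F.T + Qw
        # Update: linearize the measurement about the predicted state.
        H = np.array([docv_dsoc, -1.0])
        K = P @ H / (H @ P @ H + Rv)
        x = x + K * (v - (ocv(x[0]) - x[1] - R0 * i))
        P = (np.eye(2) - np.outer(K, H)) @ P
        soc_hist.append(x[0])
    return np.array(soc_hist)

# Demo: constant 1 A discharge, measurements synthesized from the same model.
n = 1200
i_load = np.ones(n)
soc_true = 0.9 - dt * np.cumsum(i_load) / Q_cap
v1 = R1 * (1 - np.exp(-np.arange(n) * dt / (R1 * C1)))   # RC settling to R1*i
v_meas = ocv(soc_true) - v1 - R0 * i_load
v_meas += np.random.default_rng(0).normal(0, 0.01, n)
print("final SOC estimate vs truth:", ekf_soc(i_load, v_meas)[-1], soc_true[-1])
```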
Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)
NASA Astrophysics Data System (ADS)
Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.
2017-12-01
The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing their uncertainty, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux tower measurements of CO2 fluxes under production of major agronomic crops. To date, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in RMSE of NEE predictions of between 3.78% and 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.
Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong
2013-09-01
This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical-model-based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic-compensation-type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaptation algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technique is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1976-01-01
Major activities included coding and verifying equations of motion for the earth-moon system. Some attention was also given to numerical integration methods and parameter estimation methods. Existing analytical theories such as Brown's lunar theory, Eckhardt's theory for lunar rotation, and Newcomb's theory for the rotation of the earth were coded and verified. These theories serve as checks for the numerical integration. Laser ranging data for the period January 1969 - December 1975 were collected and stored on tape. The main goal of this research is the development of software to enable physical parameters of the earth-moon system to be estimated making use of data available from the Lunar Laser Ranging Experiment and the Very Long Baseline Interferometry experiment of project Apollo. A more specific goal is to develop software for the estimation of certain physical parameters of the moon, such as inertia ratios and the third- and fourth-harmonic gravity coefficients.
Implementation of an Integrated On-Board Aircraft Engine Diagnostic Architecture
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
An on-board diagnostic architecture for aircraft turbofan engine performance trending, parameter estimation, and gas-path fault detection and isolation has been developed and evaluated in a simulation environment. The architecture incorporates two independent models: a real-time self-tuning performance model providing parameter estimates, and a performance baseline model for diagnostic purposes reflecting long-term engine degradation trends. This architecture was evaluated using flight profiles generated from a nonlinear model with realistic fleet engine health degradation distributions and sensor noise. The architecture was found to produce acceptable estimates of engine health and unmeasured parameters, and the integrated diagnostic algorithms were able to perform correct fault isolation in approximately 70 percent of the tested cases.
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)), where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation at high residual correlation levels, whereas its influence on predictive uncertainty is negligible; (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method; and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
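The variance-integration step can be illustrated on a toy problem. Assuming independent errors per species and Jeffreys priors (the simplest case of the framework above, with no residual correlation), integrating out each variance analytically turns the weighted sum of squares into a sum of log residual sums of squares, so no weights are ever chosen or estimated numerically. All data and parameter names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-species problem: both concentrations decay with a shared rate k,
# observed with different (unknown) noise levels.
t = np.linspace(0, 10, 25)
rng = np.random.default_rng(1)
obs_a = 5.0 * np.exp(-0.4 * t) + rng.normal(scale=0.05, size=t.size)
obs_b = 2.0 * np.exp(-0.4 * t) + rng.normal(scale=0.30, size=t.size)

def neg_log_marginal_posterior(theta):
    k, a0, b0 = theta
    r_a = obs_a - a0 * np.exp(-k * t)
    r_b = obs_b - b0 * np.exp(-k * t)
    # Error variances integrated out analytically (Jeffreys priors): each
    # species contributes (n/2) * log(SSR) instead of a weighted SSR.
    return 0.5 * t.size * (np.log(r_a @ r_a) + np.log(r_b @ r_b))

fit = minimize(neg_log_marginal_posterior, x0=[0.2, 4.0, 3.0],
               method="Nelder-Mead")
print("shared decay rate estimate:", fit.x[0])
```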
An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey
NASA Technical Reports Server (NTRS)
Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.
2004-01-01
This paper discusses an accurate and efficient method for focal plane survey that was used for the Spitzer Space Telescope. The approach is based on a high-order 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters into a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift, and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages compared to estimating the engineering and science parameters separately. The resulting focal plane survey approach is applicable to a diverse range of science instruments, such as imaging cameras, spectroscopy slits, and scanning-type arrays. The paper summarizes results from applying the IPF Kalman filter to calibrating the Spitzer Space Telescope focal plane, containing the MIPS, IRAC, and IRS science instrument arrays.
Adaptive Modal Identification for Flutter Suppression Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.
2016-01-01
In this paper, we develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation achieves parameter convergence in the presence of persistent excitation, whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation, where the feedback signal is used to estimate the modal information. For the least-squares method, by contrast, the separation principle of control and estimation is applied: least-squares modal identification is used to perform the parameter estimation.
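The least-squares half of the method can be sketched as follows. This is not the MRAC design; it is a generic least-squares modal identification under an assumed AR(2) signal model, from which frequency and damping follow via the discrete-time poles.

```python
import numpy as np

def identify_mode(y, dt):
    """Least-squares fit of an AR(2) model y[k] = a1*y[k-1] + a2*y[k-2];
    frequency and damping follow from the discrete-time pole pair."""
    Phi = np.column_stack([y[1:-1], y[:-2]])
    a1, a2 = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
    poles = np.roots([1.0, -a1, -a2])        # z-plane poles
    s = np.log(poles[0]) / dt                # map to continuous time
    wn = abs(s)                              # natural frequency (rad/s)
    zeta = -s.real / wn                      # damping ratio
    return wn, zeta

# Synthetic lightly damped mode at 5 Hz, 2% damping.
dt = 0.01
t = np.arange(0, 10, dt)
wn_true = 2 * np.pi * 5.0
y = np.exp(-0.02 * wn_true * t) * np.sin(wn_true * np.sqrt(1 - 0.02**2) * t)
wn, zeta = identify_mode(y, dt)
print(f"frequency {wn / (2 * np.pi):.3f} Hz, damping {zeta:.4f}")
```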
NASA Astrophysics Data System (ADS)
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-12-01
Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography, presented here, is a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges, which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. Forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model, which is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach. A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show the degree of spatial heterogeneity, and of uncertainty in subsurface flow parameters, up to which the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
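A generic sketch of the joint state-parameter analysis step is shown below, with ParFlow and the Parallel Data Assimilation Framework replaced by a plain stochastic Ensemble Kalman Filter update on an augmented state; the parameters are corrected purely through their ensemble correlation with the simulated water levels. All shapes and names are hypothetical.

```python
import numpy as np

def enkf_joint_update(states, params, obs, obs_err, H):
    """One stochastic EnKF analysis step on an augmented ensemble.
    states: (n_ens, n_state) simulated water levels; params: (n_ens, n_par)
    e.g. log-conductivity and log-Manning values; obs: (n_obs,) gauge levels."""
    n_ens = states.shape[0]
    A = np.hstack([states, params])              # augmented state
    Apert = A - A.mean(axis=0)
    Y = states @ H.T                             # predicted gauge levels
    Ypert = Y - Y.mean(axis=0)
    C_yy = Ypert.T @ Ypert / (n_ens - 1) + np.diag(obs_err**2)
    C_ay = Apert.T @ Ypert / (n_ens - 1)
    K = C_ay @ np.linalg.inv(C_yy)               # gain, (n_state+n_par, n_obs)
    rng = np.random.default_rng(42)
    for j in range(n_ens):
        y_perturbed = obs + rng.normal(scale=obs_err)   # perturbed observations
        A[j] += K @ (y_perturbed - Y[j])
    return A[:, :states.shape[1]], A[:, states.shape[1]:]

# Demo with random ensembles: 3 gauges observing a 10-cell river reach.
rng = np.random.default_rng(0)
H = np.zeros((3, 10)); H[0, 2] = H[1, 5] = H[2, 8] = 1.0
states = rng.normal(1.0, 0.2, size=(64, 10))
params = rng.normal(-4.0, 0.5, size=(64, 2))     # hypothetical log-parameters
obs = np.array([1.1, 0.9, 1.05])
states2, params2 = enkf_joint_update(states, params, obs, np.full(3, 0.05), H)
print("parameter ensemble mean after update:", params2.mean(axis=0))
```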
Computational Algorithms or Identification of Distributed Parameter Systems
1993-04-24
delay-differential equations, Volterra integral equations, and partial differential equations with memory terms. In particular we investigated a ... tested for estimating parameters in a Volterra integral equation arising from a viscoelastic model of a flexible structure with Boltzmann damping. In ... particular, one of the parameters identified was the order of the derivative in Volterra integro-differential equations containing fractional ...
ERIC Educational Resources Information Center
Karkee, Thakur B.; Wright, Karen R.
2004-01-01
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for estimating bio-signals from human motion in daily life, for use in an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. For visualization applications, it is desirable to have a method that infers muscle activity from human motion data and evaluates the change in physiological parameters according to human motion. We assume that human motion is generated by muscle activity, which is reflected in bio-signals such as the electromyogram. This paper introduces a method for estimating bio-signals using neural networks; the same procedure can be used to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2014-03-01
The hybrid Monte Carlo algorithm (HMCA) is applied to Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd-order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
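For reference, a toy version of the sampler follows, with the RSV model replaced by a standard normal target. The splitting coefficient is the standard second-order minimum-norm value (Omelyan et al.); whether this matches the paper's exact 2MNI variant is an assumption.

```python
import numpy as np

LAMBDA = 0.1931833275037836          # minimum-norm coefficient (Omelyan et al.)

def grad_U(x):                       # toy target: standard normal, U = x^2 / 2
    return x

def integrate_2mni(x, p, eps, n_steps):
    """Second-order minimum-norm integrator: per step, the splitting
    x(lam*eps), p(eps/2), x((1-2*lam)*eps), p(eps/2), x(lam*eps)."""
    for _ in range(n_steps):
        x += LAMBDA * eps * p
        p -= 0.5 * eps * grad_U(x)
        x += (1 - 2 * LAMBDA) * eps * p
        p -= 0.5 * eps * grad_U(x)
        x += LAMBDA * eps * p
    return x, p

def hmc(n_samples, eps=0.5, n_steps=10):
    rng = np.random.default_rng(0)
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()
        x_new, p_new = integrate_2mni(x, p, eps, n_steps)
        dH = 0.5 * (x_new**2 + p_new**2) - 0.5 * (x**2 + p**2)
        if rng.random() < np.exp(-dH):        # Metropolis accept/reject
            x = x_new
        samples.append(x)
    return np.array(samples)

print("sample std (target 1.0):", hmc(20000).std())
```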
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung-protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates; thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second-order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method on simulated data sets and 50 times faster on clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of the initial parameter estimates, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
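The integral idea, that noisy signals are only ever integrated and never differentiated, can be shown on a single-compartment model. This is a stand-in for the paper's second-order model, and the sketch performs one pass rather than the paper's iterative refinement; all values are synthetic.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic airway data for a single-compartment model:
#   Paw(t) = E*V(t) + R*Q(t) + P0
t = np.linspace(0, 3, 301)
Q = np.where(t < 1.5, 0.5, -0.5)                  # square-wave flow (L/s)
V = cumulative_trapezoid(Q, t, initial=0.0)       # volume (L)
E_true, R_true, P0_true = 25.0, 10.0, 5.0
Paw = (E_true * V + R_true * Q + P0_true
       + np.random.default_rng(2).normal(0, 0.5, t.size))

# Integral formulation: integrate both sides once, using int(Q) = V, so the
# noisy pressure and flow are smoothed rather than differentiated:
#   int(Paw) = E*int(V) + R*V + P0*t
lhs = cumulative_trapezoid(Paw, t, initial=0.0)
X = np.column_stack([cumulative_trapezoid(V, t, initial=0.0), V, t])
E_hat, R_hat, P0_hat = np.linalg.lstsq(X, lhs, rcond=None)[0]
print(f"E={E_hat:.2f}, R={R_hat:.2f}, P0={P0_hat:.2f}")
```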
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy, and such a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy, a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant.
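The rank test underlying such methods can be sketched symbolically. The toy exhaustive summary below is a textbook CJS-style example, not one of the paper's state-space derivations: the derivative matrix of the summary with respect to the parameters has deficient rank, flagging the redundancy.

```python
import sympy as sp

# Three-occasion CJS-style toy: survival (phi) and detection (p) parameters.
phi1, phi2, p2, p3 = sp.symbols("phi1 phi2 p2 p3", positive=True)

# Exhaustive summary: probabilities of the observable capture histories.
kappa = sp.Matrix([
    phi1 * p2,                      # seen at occasion 2
    phi1 * (1 - p2) * phi2 * p3,    # missed at 2, seen at 3
    phi1 * p2 * phi2 * p3,          # seen at 2 and 3
])

theta = [phi1, phi2, p2, p3]
rank = kappa.jacobian(theta).rank()
print(f"rank {rank}, parameters {len(theta)}")
# Rank 3 of 4 -> deficiency 1: phi2 and p3 appear only as the product
# phi2*p3, so they are not separately estimable; combining data sets via an
# integrated model can break exactly this kind of confounding.
```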
Use of Continuous Measurements of Integral Aerosol Parameters to Estimate Particle Surface Area
This study was undertaken because of interest in using particle surface area as an indicator for studies of the health effects of particulate matter. First, we wished to determine the integral parameter of the size distribution measured by the electrical aerosol detector. Secon...
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models that assume different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte Carlo approximation, and the LBA's inferential properties, in simulation studies.
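The brute-force Monte Carlo estimator at the heart of the method is easy to state. The sketch below swaps the LBA likelihood for a toy lognormal model and omits the GPU acceleration described above, so all models, priors, and data are illustrative.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import lognorm, norm, halfnorm

rng = np.random.default_rng(7)
data = rng.lognormal(mean=-0.5, sigma=0.3, size=100)   # stand-in response times

def log_marginal(loglik, prior_sampler, n_draws=20_000):
    """Brute-force Monte Carlo marginal likelihood:
    p(y|M) = E_prior[p(y|theta)], estimated stably with logsumexp."""
    lls = np.array([loglik(theta) for theta in prior_sampler(n_draws)])
    return logsumexp(lls) - np.log(len(lls))

# Model 1: lognormal with free (mu, sigma); Model 2: sigma fixed at 1.
def ll_m1(theta):
    mu, sigma = theta
    return lognorm.logpdf(data, s=sigma, scale=np.exp(mu)).sum()

def ll_m2(mu):
    return lognorm.logpdf(data, s=1.0, scale=np.exp(mu)).sum()

prior_m1 = lambda n: zip(norm.rvs(0, 1, size=n, random_state=rng),
                         halfnorm.rvs(scale=1, size=n, random_state=rng))
prior_m2 = lambda n: norm.rvs(0, 1, size=n, random_state=rng)

log_bf = log_marginal(ll_m1, prior_m1) - log_marginal(ll_m2, prior_m2)
print("log Bayes factor (M1 over M2):", round(float(log_bf), 2))
```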
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test-case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than the more common observability-based method, which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions, the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
NASA Astrophysics Data System (ADS)
Norton, Andrew S.
An integral component of managing game species is an understanding of population dynamics and relative abundance. Harvest data are frequently used to estimate abundance of white-tailed deer. Unless harvest age-structure is representative of the population age-structure and harvest vulnerability remains constant from year to year, these data alone are of limited value. Additional model structure and auxiliary information have accommodated this shortcoming. Specifically, integrated age-at-harvest (AAH) state-space population models can formally combine multiple sources of data, and regularization via hierarchical model structure can increase the flexibility of model parameters. I collected known-fate data, which I evaluated and used to inform trends in survival parameters for an integrated AAH model. I used temperature and snow depth covariates to predict survival outside of the hunting season, and opening-weekend temperature and percent-of-corn-harvest covariates to predict hunting season survival. When auxiliary empirical data were unavailable for the AAH model, moderately informative priors provided sufficient information for convergence and parameter estimates. The AAH model was most sensitive to errors in initial abundance, but this error was calibrated after 3 years. Among vital rates, the AAH model was most sensitive to reporting rates (the percentage of mortality during the hunting season related to harvest). The AAH model, using only harvest data, was able to track changing abundance trends due to changes in survival rates even when prior models did not inform these changes (i.e., prior models were constant when truth varied). I also compared AAH model results with estimates from the Wisconsin Department of Natural Resources (WIDNR). Trends in abundance estimates from both models were similar, although AAH model predictions were systematically higher than WIDNR estimates in the East study area. When I incorporated auxiliary information about survival outside the hunting season from known-fate data (i.e., an integrated AAH model), predicted trends appeared more closely related to what was expected. Disagreements between the AAH model and WIDNR estimates in the East were likely related to biased predictions for reporting and survival rates from the AAH model.
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences, and the technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, the technique is applied to modeled dust data: vertically integrated dust concentrations were used to derive wind information, and those results can be compared to the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields is presented.
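A minimal spatiotemporal version of the estimator reads as follows, assuming brightness constancy and solving the smoothed 2x2 tensor system per pixel; the test data are a synthetic advected blob, not the dust fields above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_flow(seq, sigma=2.0):
    """Per-pixel velocity (vx, vy) from an image sequence seq[t, y, x],
    via a locally smoothed structure tensor of the spatiotemporal gradient."""
    It, Iy, Ix = (np.gradient(seq, axis=k) for k in (0, 1, 2))
    smooth = lambda u: gaussian_filter(u.mean(axis=0), sigma)
    Jxx, Jxy, Jyy = smooth(Ix * Ix), smooth(Ix * Iy), smooth(Iy * Iy)
    Jxt, Jyt = smooth(Ix * It), smooth(Iy * It)
    # Brightness constancy Ix*vx + Iy*vy + It = 0, solved per pixel in the
    # least-squares sense: [[Jxx, Jxy], [Jxy, Jyy]] [vx, vy]^T = -[Jxt, Jyt]^T.
    det = Jxx * Jyy - Jxy**2
    det = np.where(np.abs(det) < 1e-12, np.nan, det)
    vx = (-Jxt * Jyy + Jyt * Jxy) / det
    vy = (Jxy * Jxt - Jxx * Jyt) / det
    return vx, vy

# Synthetic test: a Gaussian blob advected +1 pixel per frame in x.
y, x = np.mgrid[0:64, 0:64]
seq = np.stack([np.exp(-((x - 20 - t) ** 2 + (y - 32) ** 2) / 30.0)
                for t in range(5)])
vx, vy = structure_tensor_flow(seq)
mask = seq.mean(axis=0) > 0.1
print("median vx (expect ~1):", np.nanmedian(vx[mask]))
```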
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.
NASA Astrophysics Data System (ADS)
Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.
2015-07-01
We introduce a highly parallelizable architecture for estimating the parameters of compact binary coalescences using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses) with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data are then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic-space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5%, and even smaller in many cases. With a bounded runtime independent of the waveform model's starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and any existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation-based methods to the processing of preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse, as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated to test the functionality of the proposed method; it was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. The promising results support the use of the proposed method in future real-time applications.
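The integral-equation idea can be sketched for the full record (the paper's optimized three-point sampling is omitted). Because a bi-exponential satisfies a linear second-order ODE, integrating that identity twice gives a relation that is linear in the unknown coefficients, from which the decay rates follow as polynomial roots; all waveform values below are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic preamplifier-style pulse, time in microseconds:
# f(t) = A * (exp(-t/tau_d) - exp(-t/tau_r)), rise 0.05 us, decay 1 us.
t = np.linspace(0.0, 5.0, 2000)
tau_r, tau_d, A = 0.05, 1.0, 1.0
f = A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))
f += np.random.default_rng(3).normal(0.0, 1e-3, t.size)

# Any sum of two exponentials satisfies f'' + c1*f' + c0*f = 0. Integrating
# that identity twice from 0 eliminates all derivatives of the noisy data:
#   f(t) - f(0) = k*t - c1*I1(t) - c0*I2(t),  with I1 = int f, I2 = int I1,
# which is linear in the unknowns (k, c1, c0).
I1 = cumulative_trapezoid(f, t, initial=0.0)
I2 = cumulative_trapezoid(I1, t, initial=0.0)
X = np.column_stack([t, -I1, -I2])
k, c1, c0 = np.linalg.lstsq(X, f - f[0], rcond=None)[0]

# The decay rates are the roots of s^2 - c1*s + c0 = 0.
rates = np.roots([1.0, -c1, c0])
print("time constants (us):", np.sort(1.0 / rates))   # expect ~0.05 and ~1.0
```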
Sample Size Estimation: The Easy Way
ERIC Educational Resources Information Center
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS), integrating the definition and measurement of spatial features with the calculation of their parameter values, presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort needed to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.
Joint parameter and state estimation algorithms for real-time traffic monitoring.
DOT National Transportation Integrated Search
2013-12-01
A common approach to traffic monitoring is to combine a macroscopic traffic flow model with traffic sensor data in a process called state estimation, data fusion, or data assimilation. The main challenge of traffic state estimation is the integration...
Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J
2016-02-01
A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in the multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlations between traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of multivariate animal models using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. The immediate benefits are: (1) it avoids the difficulty of finding good starting values for the analysis, which can be a problem in, for example, Restricted Maximum Likelihood (REML); and (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. A slight drawback is that priors for the covariance matrices are assigned to the elements of the Cholesky factor rather than directly to the covariance matrix elements, as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as the MCMC and REML approaches. We also present results obtained from simulated data sets with replicates and from field data in rice.
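The reparametrization itself is compact. A sketch of the modified Cholesky construction follows, mapping unconstrained parameters to a valid covariance matrix; the INLA machinery itself is not shown, and the parameter values are arbitrary.

```python
import numpy as np

def cov_from_unconstrained(theta, dim):
    """Build a valid covariance matrix from unconstrained parameters via the
    modified Cholesky factorization Sigma = L * D * L^T, where L is
    unit-lower-triangular and D is diagonal with positive entries."""
    log_d = theta[:dim]                         # log of D's diagonal
    off = theta[dim:]                           # free elements of L
    L = np.eye(dim)
    L[np.tril_indices(dim, k=-1)] = off
    return L @ np.diag(np.exp(log_d)) @ L.T

# A 3-trait genetic covariance needs 3 log-variances + 3 off-diagonals.
theta = np.array([0.1, -0.2, 0.3, 0.5, -0.1, 0.2])
G = cov_from_unconstrained(theta, 3)
print("eigenvalues (positive by construction):", np.linalg.eigvalsh(G))
```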
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to recover the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of a gamma-quanta family independently of the degree to which the gamma quanta overlap, and yields the maximum admissible accuracy in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds, by an order of magnitude, the accuracy obtained by integral methods.
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard errors estimated using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool, with its underlying methodology, can be employed to objectively and reproducibly estimate the time-integrated activity coefficient and its standard error for most time-activity data in molecular radiotherapy.
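The estimation pipeline above (candidate sums of exponentials, corrected-AIC selection, analytic integration) can be sketched as follows; the error model, Bayesian priors, and standard-error propagation are omitted, and the data and starting values are hypothetical rather than NUKFIT's automatically determined ones.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time-activity data (hours / arbitrary activity units).
t = np.array([1.0, 4.0, 24.0, 48.0, 96.0, 144.0])
a = np.array([0.35, 0.42, 0.30, 0.18, 0.07, 0.03])

def make_model(n):
    def model(t, *p):       # p = (a1, lam1, a2, lam2, ...)
        return sum(p[2*i] * np.exp(-p[2*i + 1] * t) for i in range(n))
    return model

def aicc(rss, n_obs, n_par):
    """Corrected AIC for a least-squares fit (simplified form)."""
    return (n_obs * np.log(rss / n_obs) + 2 * n_par
            + 2 * n_par * (n_par + 1) / (n_obs - n_par - 1))

p0s = {1: [0.4, 0.02], 2: [0.5, 0.02, -0.3, 0.5]}   # hypothetical starts
best = None
for n in (1, 2):                                    # candidate model set
    model = make_model(n)
    try:
        popt, _ = curve_fit(model, t, a, p0=p0s[n], maxfev=20000)
    except RuntimeError:
        continue
    rss = np.sum((a - model(t, *popt))**2)
    score = aicc(rss, t.size, 2 * n)
    if best is None or score < best[0]:
        best = (score, n, popt)

_, n, p = best
# Analytic time integral of the selected sum of exponentials:
#   int_0^inf sum a_i exp(-lam_i t) dt = sum a_i / lam_i
tia = sum(p[2*i] / p[2*i + 1] for i in range(n))
print(f"selected {n}-exponential model, time-integrated activity = {tia:.2f} h")
```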
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
Integrated computational model of the bioenergetics of isolated lung mitochondria.
Zhang, Xiao; Dash, Ranjan K; Jacobs, Elizabeth R; Camara, Amadou K S; Clough, Anne V; Audi, Said H
2018-01-01
Integrated computational modeling provides a mechanistic and quantitative framework for describing lung mitochondrial bioenergetics. Thus, the objective of this study was to develop and validate a thermodynamically-constrained integrated computational model of the bioenergetics of isolated lung mitochondria. The model incorporates the major biochemical reactions and transport processes in lung mitochondria. A general framework was developed to model those biochemical reactions and transport processes. Intrinsic model parameters such as binding constants were estimated using previously published isolated enzymes and transporters kinetic data. Extrinsic model parameters such as maximal reaction and transport velocities were estimated by fitting the integrated bioenergetics model to published and new tricarboxylic acid cycle and respirometry data measured in isolated rat lung mitochondria. The integrated model was then validated by assessing its ability to predict experimental data not used for the estimation of the extrinsic model parameters. For example, the model was able to predict reasonably well the substrate and temperature dependency of mitochondrial oxygen consumption, kinetics of NADH redox status, and the kinetics of mitochondrial accumulation of the cationic dye rhodamine 123, driven by mitochondrial membrane potential, under different respiratory states. The latter required the coupling of the integrated bioenergetics model to a pharmacokinetic model for the mitochondrial uptake of rhodamine 123 from buffer. The integrated bioenergetics model provides a mechanistic and quantitative framework for 1) integrating experimental data from isolated lung mitochondria under diverse experimental conditions, and 2) assessing the impact of a change in one or more mitochondrial processes on overall lung mitochondrial bioenergetics. In addition, the model provides important insights into the bioenergetics and respiration of lung mitochondria and how they differ from those of mitochondria from other organs. To the best of our knowledge, this model is the first for the bioenergetics of isolated lung mitochondria.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
Model-data integration to improve the LPJmL dynamic global vegetation model
NASA Astrophysics Data System (ADS)
Forkel, Matthias; Thonicke, Kirsten; Schaphoff, Sibyll; Thurner, Martin; von Bloh, Werner; Dorigo, Wouter; Carvalhais, Nuno
2017-04-01
Dynamic global vegetation models show large uncertainties regarding the development of the land carbon balance under future climate change conditions. This uncertainty is partly caused by differences in how vegetation carbon turnover is represented in global vegetation models. Model-data integration approaches might help to systematically assess and improve model performance and thus potentially reduce the uncertainty in terrestrial vegetation responses under future climate change. Here we present several applications of model-data integration with the LPJmL (Lund-Potsdam-Jena managed Lands) dynamic global vegetation model to systematically improve the representation of processes or to estimate model parameters. In a first application, we used global satellite-derived datasets of FAPAR (fraction of absorbed photosynthetically active radiation), albedo, and gross primary production to estimate phenology- and productivity-related model parameters using a genetic optimization algorithm. Thereby we identified major limitations of the phenology module and implemented an alternative empirical phenology model. The new phenology module and optimized model parameters resulted in better performance of LPJmL in representing global spatial patterns of biomass and tree cover, and the temporal dynamics of atmospheric CO2. In a second application, we therefore additionally used global datasets of biomass and land cover to estimate model parameters that control vegetation establishment and mortality. The results demonstrate the ability to improve simulations of vegetation dynamics but also highlight the need to improve the representation of mortality processes in dynamic global vegetation models. In a third application, we used multiple site-level observations of ecosystem carbon and water exchange, biomass, and soil organic carbon to jointly estimate various model parameters that control ecosystem dynamics. This exercise demonstrates the strong influence of individual data streams on the simulated ecosystem dynamics, which consequently changed the development of ecosystem carbon stocks and fluxes under future climate and CO2 change. In summary, our results demonstrate the challenges and the potential of using model-data integration approaches to improve a dynamic global vegetation model.
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
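To make the IMM idea above concrete, here is a minimal scalar sketch (not the authors' implementation): two random-walk Kalman filters for a pharmacokinetic parameter, differing only in process noise ("stable" vs. "changing" patient), mixed through a Markov mode-transition matrix. All numerical values are invented for illustration.

```python
import numpy as np

Q = np.array([1e-4, 1e-1])        # process-noise variances of the two modes (assumed)
R = 0.05                          # measurement-noise variance (assumed)
P_trans = np.array([[0.97, 0.03], # Markov mode-transition probabilities (assumed)
                    [0.03, 0.97]])

x = np.array([1.0, 1.0])          # per-mode parameter estimates
P = np.array([0.1, 0.1])          # per-mode estimate variances
mu = np.array([0.5, 0.5])         # mode probabilities

def imm_step(z, x, P, mu):
    # 1) mixing: predicted mode probabilities and mixed initial conditions
    c = P_trans.T @ mu                       # normalization terms c_j
    w = (P_trans * mu[:, None]) / c          # w[i, j] = P(mode i at k-1 | mode j at k)
    x0 = w.T @ x                             # mixed means, one per mode
    P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j]) ** 2)) for j in range(2)])
    # 2) model-matched Kalman filters (state = parameter, observation H = 1)
    like = np.empty(2)
    for j in range(2):
        Pp = P0[j] + Q[j]                    # predict
        S = Pp + R                           # innovation variance
        K = Pp / S
        innov = z - x0[j]
        x[j] = x0[j] + K * innov
        P[j] = (1.0 - K) * Pp
        like[j] = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    # 3) mode-probability update and output combination
    mu = like * c
    mu /= mu.sum()
    return x, P, mu, mu @ x

for z in [1.0, 1.05, 0.98, 1.6, 1.7, 1.65]:  # parameter jumps mid-sequence
    x, P, mu, est = imm_step(z, x, P, mu)
    print(f"z={z:.2f}  estimate={est:.3f}  P(changing)={mu[1]:.2f}")
```

Because the mode probabilities shift toward the high-process-noise model after the jump, the combined estimate tracks the change far faster than a single low-noise filter would.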
NASA Technical Reports Server (NTRS)
Allison, D. E.
1984-01-01
A model is developed for the estimation of the surface fluxes of momentum, heat, and moisture of the cloud-topped marine atmospheric boundary layer by use of satellite remotely sensed parameters. The parameters chosen for the problem are the integrated liquid water content, q_li, the integrated water vapor content, q_vi, the cloud top temperature, and either a measure of the 10 meter neutral wind speed or the friction velocity at the surface. Under the assumption of a horizontally homogeneous, well-mixed boundary layer, the model calculates the equivalent potential temperature and total water profiles of the boundary layer, along with the boundary layer height, from inputs of q_li, q_vi, and cloud top temperature. These values, along with the 10 m neutral wind speed or friction velocity and the sea surface temperature, are then used to estimate the surface fluxes. The development of a scheme to parameterize the integrated water vapor outside of the boundary layer for the cases of cold air outbreak and California coastal stratus is presented.
Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing
2009-06-01
In this study, weighted non-linear least-squares analysis and an accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to express the differences in storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. By minimizing the objective function with a real-coding-based accelerating genetic algorithm, the kinetic parameters for substrate consumption and storage product formation are estimated as a maximum heterotrophic growth rate of 0.121/h, a yield coefficient of 0.44 mg CODX/mg CODS (COD, chemical oxygen demand), and a substrate half-saturation constant of 16.9 mg/L. In addition, the fraction of substrate electrons diverted to storage product formation is estimated to be 0.43 mg CODSTO/mg CODS. The validity of our approach is confirmed by the results of independent tests and by kinetic parameter values reported in the literature, suggesting that this approach could be useful for evaluating the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach estimates the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
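A hedged sketch of this fitting strategy (not the authors' code): a Monod-type substrate consumption model with a storage branch is integrated with SciPy, and a weighted sum of squared errors is minimized by a population-based optimizer. SciPy's differential evolution stands in for the paper's real-coded accelerating genetic algorithm, and all data and bounds below are invented.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

X0 = 800.0                                       # biomass, mg COD/L (assumed constant)
t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])            # h
S_obs = np.array([180., 120., 80., 35., 8., 2.])            # substrate, mg COD/L
STO_obs = np.array([0., 22., 38., 55., 66., 68.])           # storage product, mg COD/L
w = 1.0 / np.maximum(STO_obs, 1.0)               # weights ~ inverse magnitude

def rates(y, t, mu_max, Y, Ks, f_sto):
    S, STO = y
    S = max(S, 0.0)                              # guard against tiny overshoots
    r = mu_max / Y * X0 * S / (Ks + S)           # Monod substrate consumption rate
    return [-r, f_sto * r]                       # fraction f_sto goes to storage

def objective(p):
    sol = odeint(rates, [S_obs[0], 0.0], t_obs, args=tuple(p))
    eS = (sol[:, 0] - S_obs) / S_obs[0]          # scaled substrate residuals
    eSTO = w * (sol[:, 1] - STO_obs)             # weighted storage residuals
    return np.sum(eS ** 2) + np.sum(eSTO ** 2)   # sum of squared weighted errors

bounds = [(0.01, 0.5),   # mu_max, 1/h
          (0.2, 0.8),    # yield Y, mg CODX/mg CODS
          (1.0, 100.0),  # Ks, mg/L
          (0.1, 0.8)]    # f_sto, fraction to storage
res = differential_evolution(objective, bounds, seed=1)
print(dict(zip(["mu_max", "Y", "Ks", "f_sto"], np.round(res.x, 3))))
```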
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves them with a fixed step size. The second scheme uses a dedicated DDE solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme with regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using, first, a simple model comprising 2 regions and, second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting to empirical data showed that the models systematically achieved higher accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme.
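The contrast between the two schemes hinges on how the delayed state is handled. The toy below (my own illustration, not SPM's integrator) shows the fixed-step idea for a linear 2-region system with conduction delay d: the trajectory array doubles as the history buffer, and the delayed state is read back nd steps. Coupling strengths and the input are invented.

```python
import numpy as np

dt, T, d = 1e-3, 0.5, 0.010              # step (s), horizon (s), conduction delay (s)
nd = int(round(d / dt))                  # delay expressed in integration steps
c12, c21 = 0.8, 0.6                      # inter-regional coupling strengths (assumed)
n = int(T / dt)
x = np.zeros((n + 1, 2))                 # full trajectory doubles as the history
u = lambda t: 1.0 if t < 0.05 else 0.0   # brief input to region 1

for k in range(n):
    # delayed states; the pre-stimulus history is taken as zero
    x1_del = x[k - nd, 0] if k >= nd else 0.0
    x2_del = x[k - nd, 1] if k >= nd else 0.0
    dx1 = -x[k, 0] + c12 * x2_del + u(k * dt)
    dx2 = -x[k, 1] + c21 * x1_del
    x[k + 1] = x[k] + dt * np.array([dx1, dx2])  # forward Euler step

print(x[::100])                          # coarse printout of the delayed response
```

An adaptive DDE solver would instead interpolate the delayed state to the exact lag and refine the step size where the solution changes quickly, which is what makes delay estimates from the second scheme more reliable.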
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters at sites with measurement data such as NEE, and the application of those parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle, and the average diurnal NEE course (error reduction by a factor of 1.6); ii) parameters estimated from seasonal NEE data outperformed parameters estimated from yearly data; iii) those seasonal parameters were also often significantly different from their yearly equivalents; iv) estimated parameters differed significantly if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, demonstrating their potential for upscaling. However, the simulation results also indicate that the estimated parameters may mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
High-Speed Quantum Key Distribution Using Photonic Integrated Circuits
2013-01-01
[Only fragments of this record's abstract survive extraction:] ...protocol [14] that uses energy-time entanglement of pairs of photons. We are employing the QPIC architecture to implement a novel high-dimensional disper... continuous Hilbert spaces using measures of the covariance matrix. Although we focus the discussion on a scheme employing entangled photon pairs... is the probability that parameter estimation fails [20]. The parameter ε̄ accounts for the accuracy of estimating the smooth min-entropy, which...
Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor
2014-01-01
Network connectivity and link quality information are fundamental requirements for wireless sensor network protocols to perform their desired functionality. Most existing discovery protocols have focused only on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated yet. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols that integrate initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
2007-06-05
An Inversion Method for Reconstructing Hall Thruster Plume Parameters from Line Integrated Measurements (Preprint)
Matlock, Taylor S.; et al.
[Report documentation header garbled; only a fragment of the abstract survives:] ...dimensional estimate of the plume electron temperature using a published xenon collisional radiative model.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution is influenced by the choice of prior. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, yielding the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of their marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine. Therefore, an approach is needed that generates random samples according to the characteristics of the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
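A minimal sketch of such a Gibbs sampler for Y = XB + E under the Jeffreys prior p(B, Σ) ∝ |Σ|^-(q+1)/2: the conditional for B given Σ is matrix normal around the least-squares fit, and the conditional for Σ given B is inverse Wishart with the residual cross-product as scale. Data are simulated; dimensions and chain settings are illustrative.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
n, p, q = 200, 3, 2                      # observations, predictors, responses
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
B_true = np.array([[1.0, -0.5], [2.0, 0.3], [0.0, 1.5]])
Y = X @ B_true + rng.multivariate_normal([0, 0], [[0.5, 0.2], [0.2, 0.3]], n)

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                # OLS fit = conditional posterior mean of B
A = np.linalg.cholesky(XtX_inv)          # row-covariance factor for the B draw

Sigma = np.eye(q)
B_draws = []
for it in range(3000):
    # B | Sigma, Y : matrix normal (row cov XtX_inv, column cov Sigma)
    Z = rng.normal(size=(p, q))
    B = B_hat + A @ Z @ np.linalg.cholesky(Sigma).T
    # Sigma | B, Y : inverse Wishart, df = n, scale = residual cross-product
    E = Y - X @ B
    Sigma = invwishart(df=n, scale=E.T @ E).rvs(random_state=rng)
    if it >= 500:                        # discard burn-in
        B_draws.append(B)

print("posterior mean of B:\n", np.mean(B_draws, axis=0))
```

The posterior mean of the retained draws approximates the expected value of the marginal posterior that the abstract describes, without ever evaluating the intractable integral directly.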
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients, and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients, and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of the different geodetic parameter groups due to the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters, and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2016-11-01
Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model vary with operating conditions and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise in the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively reduces the filter order, which leads to substantial improvements in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
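The identification half of such a decoupled scheme is standard RLS with a forgetting factor. The sketch below identifies a discretized first-order model y_k = a*y_{k-1} + b0*i_k + b1*i_{k-1} + c from simulated current/voltage data; the regressor choice and all values are assumptions for illustration, not the authors' exact battery model.

```python
import numpy as np

lam = 0.995                                # forgetting factor (assumed)
theta = np.zeros(4)                        # parameter estimate [a, b0, b1, c]
Pm = np.eye(4) * 1e3                       # covariance of the estimate

def rls_update(theta, Pm, phi, y):
    k = Pm @ phi / (lam + phi @ Pm @ phi)  # gain
    theta = theta + k * (y - phi @ theta)  # correct with the prediction error
    Pm = (Pm - np.outer(k, phi @ Pm)) / lam
    return theta, Pm

rng = np.random.default_rng(3)
a, b0, b1, c = 0.95, -0.08, 0.05, 0.02     # "true" parameters to recover
y_prev, i_prev = 0.0, 0.0
for k in range(2000):
    i_now = np.sin(0.01 * k) + 0.3 * rng.normal()   # persistently exciting current
    y_now = a * y_prev + b0 * i_now + b1 * i_prev + c + 1e-3 * rng.normal()
    phi = np.array([y_prev, i_now, i_prev, 1.0])    # regressor vector
    theta, Pm = rls_update(theta, Pm, phi, y_now)
    y_prev, i_prev = y_now, i_now

print("identified [a, b0, b1, c]:", np.round(theta, 3))
```

In the decoupled architecture the identified theta is then handed to a separate EKF that estimates SOC and capacity, so the identification error never feeds back into the state filter's covariance.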
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation, based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
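A worked sketch of the two hazard functions under the Poisson model with rate lam and an (unbounded) Gutenberg-Richter magnitude CDF F(m) = 1 - exp(-beta*(m - m0)): the exceedance probability is 1 - exp(-lam*T*(1 - F(m))) and the mean return period is 1/(lam*(1 - F(m))). A simple normal-approximation bootstrap (my simplification of the paper's machinery) propagates the uncertainty of (lam, beta) into an interval; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
lam_hat, beta_hat = 4.2, 2.1        # events/yr above m0, and G-R beta = b*ln(10)
n_events, m0 = 120, 3.0             # catalog size behind the estimates (assumed)
T, m = 50.0, 5.5                    # exposure time (yr) and target magnitude

def exceed_prob(lam, beta):
    tail = np.exp(-beta * (m - m0))            # 1 - F(m)
    return 1.0 - np.exp(-lam * T * tail)       # P(at least one event > m within T)

def return_period(lam, beta):
    return 1.0 / (lam * np.exp(-beta * (m - m0)))

# Asymptotic normality of the MLEs: var(lam_hat) = lam/T_cat, var(beta_hat) = beta^2/n
T_cat = n_events / lam_hat
lam_s = rng.normal(lam_hat, np.sqrt(lam_hat / T_cat), 10000)
beta_s = rng.normal(beta_hat, beta_hat / np.sqrt(n_events), 10000)
R = exceed_prob(lam_s, beta_s)

print("point estimate:         ", exceed_prob(lam_hat, beta_hat))
print("95% interval:           ", np.percentile(R, [2.5, 97.5]))
print("mean return period (yr):", return_period(lam_hat, beta_hat))
```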
Properties of the two-dimensional heterogeneous Lennard-Jones dimers: An integral equation study
Urbic, Tomaz
2016-01-01
Structural and thermodynamic properties of a planar heterogeneous soft dumbbell fluid are examined using Monte Carlo simulations and integral equation theory. Lennard-Jones particles of different sizes are the building blocks of the dimers. The site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions and the thermodynamic properties. Obtained results are compared to Monte Carlo simulation data. The critical parameters for selected types of dimers were also estimated and the influence of the Lennard-Jones parameters was studied. We have also tested the correctness of the site-site integral equation theory using different closures. PMID:27875894
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
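To illustrate the minimum-divergence idea in the simplest possible form, the sketch below iterates a beta-divergence-style weighted mean and covariance: samples are down-weighted by their Gaussian likelihood raised to a power beta, so outlying trials contribute little. This follows the general density-power-divergence recipe, not the paper's exact hierarchical estimator; the correction factor and test data are assumptions.

```python
import numpy as np

def robust_mean_cov(X, beta=0.2, n_iter=50):
    mu, C = X.mean(0), np.cov(X.T)
    for _ in range(n_iter):
        Ci = np.linalg.inv(C)
        dev = X - mu
        md2 = np.einsum("ij,jk,ik->i", dev, Ci, dev)   # squared Mahalanobis distances
        w = np.exp(-0.5 * beta * md2)                  # likelihood^beta weights
        w /= w.sum()
        mu = w @ X                                     # weighted mean
        dev = X - mu
        C = (w[:, None] * dev).T @ dev * (1.0 + beta)  # beta-corrected weighted scatter
    return mu, C

rng = np.random.default_rng(0)
clean = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], 500)
outliers = rng.uniform(-20, 20, size=(25, 2))          # e.g. blinks, loose electrodes
X = np.vstack([clean, outliers])
mu, C = robust_mean_cov(X)
print("robust mean:", np.round(mu, 2))
print("sample mean:", np.round(X.mean(0), 2))          # visibly pulled by the outliers
```

The hierarchical view in the paper extends this per-sample weighting to whole trials, which is the natural structural unit in BCI data.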
Reproducibility of isopach data and estimates of dispersal and eruption volumes
NASA Astrophysics Data System (ADS)
Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.
2012-12-01
Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, representing these contours in thickness versus square-root-area plots, fitting empirical laws to the thinning relationships, and integrating over the square-root area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates, and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand-contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as by the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences arising from the hand-contours of the well-constrained deposit, and to eliminate the effects of extrapolations to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume is approximately 1.76 × 10^6 m^3 (geometric mean over all methods), with maximum and minimum estimates of 2.5 × 10^6 m^3 and 1.1 × 10^6 m^3. Different integration methods applied to identical isopach maps result in volume estimate differences of up to 50% and, on average, a maximum variation between integration methods of 14%. Volume estimates from methods (A), (C) and (D) show strong correlation (r = 0.8 to r = 0.9), while the correlation of (B) with the other methods is weaker (r = 0.2 to r = 0.6) and the correlation between (B) and (C) is not statistically significant. We find that the choice of larger maximum contours leads to smaller volume estimates with method (C), but larger estimates with the other methods. We do not find statistically significant correlation between volume estimates and participants' experience level, number of chosen contour levels, or smoothness of contours. Overall, application of the different methods to the same maps leads to similar mean volume estimates, but the methods show different dependencies and varying spread of volume estimates. The results indicate that these key parameters are less critically dependent on the operator and their choices of contour values, intervals, etc., and more sensitive to the selection of the technique used to integrate these data.
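For reference, method (B) with a single exponential segment has a closed form: if thickness decays as T(x) = T0*exp(-k*x) against square-root isopach area x = sqrt(A), then V = ∫ T dA = ∫ 2x·T0·exp(-k·x) dx = 2*T0/k². A minimal worked example (the isopach values below are illustrative, not the Kilauea Iki data):

```python
import numpy as np

T = np.array([3.5, 2.0, 1.0, 0.5, 0.25, 0.10])          # isopach thickness, m
A = np.array([0.05, 0.17, 0.45, 1.0, 1.9, 3.6]) * 1e6   # enclosed area, m^2
x = np.sqrt(A)                                          # square-root area, m

slope, lnT0 = np.polyfit(x, np.log(T), 1)               # straight line in ln T vs x
k = -slope                                              # decay constant, 1/m
T0 = np.exp(lnT0)                                       # extrapolated maximum thickness

V = 2.0 * T0 / k**2                                     # analytic volume integral
print(f"T0 = {T0:.2f} m, k = {k:.2e} 1/m, V = {V:.3e} m^3")
```

The study's operator-to-operator spread enters through T and A themselves: each hand-contoured map yields slightly different isopach areas, and hence different fitted (T0, k) and volume.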
Calibration of a COTS Integration Cost Model Using Local Project Data
NASA Technical Reports Server (NTRS)
Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David
1997-01-01
The software measures and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Spaceflight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with various phenotypes and cell types, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed a DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer key parameters, as a DE model does. Therefore, this study innovatively developed a complex-system development mechanism that can simulate the complicated immune system in detail, like an ABM, and validate the reliability and efficiency of the model by fitting experimental data, like a DE model. PMID:26535589
Relative Pose Estimation Using Image Feature Triplets
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Rottensteiner, F.; Heipke, C.
2015-03-01
A fully automated reconstruction of the trajectory of image sequences using point correspondences is turning into routine practice. However, there are cases in which point features are hardly detectable, cannot be localized in a stable distribution, and consequently lead to insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of the feature integration on relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used to determine the parameters of the relative orientation (pose estimation) of two images with respect to the reference one, and then quantifying the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. To be able to also work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
Parameter identification for nonlinear aerodynamic systems
NASA Technical Reports Server (NTRS)
Pearson, Allan E.
1990-01-01
Parameter identification for nonlinear aerodynamic systems is examined. It is presumed that the underlying model can be arranged into an input/output (I/O) differential operator equation of a generic form. The estimation algorithm is especially efficient since the equation error can be integrated exactly, given any I/O pair, to obtain an algebraic function of the parameters. The parameter identification algorithm was extended to the order determination problem for linear differential systems. The degeneracy in a least-squares estimate caused by feedback was addressed. A method of frequency analysis for determining the transfer function G(j omega) from transient I/O data was formulated using complex-valued Fourier-based modulating functions, in contrast with the trigonometric modulating functions used for the parameter estimation problem. A simulation result of applying the algorithm is given under noise-free conditions for a system with a low-pass transfer function.
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 success criterion, combined with a learning factor, was used to control the dynamic parameter adaptation process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm provides an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
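For context, a bare-bones version of the standard cuckoo search that DACS-CO extends (Lévy-flight moves plus nest abandonment). The dynamic parameter adaptation and crossover operations of DACS-CO are not reproduced here, and all settings and the test objective are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n=25, pa=0.25, iters=500, alpha=0.01):
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # global exploration: Levy flights biased toward the current best nest
        new = np.clip(nests + alpha * levy((n, dim)) * (nests - best), lo, hi)
        new_fit = np.array([f(x) for x in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # abandon a fraction pa of nests and rebuild them at random
        drop = rng.random(n) < pa
        nests[drop] = rng.uniform(lo, hi, (int(drop.sum()), dim))
        fit[drop] = np.array([f(x) for x in nests[drop]])
    i = np.argmin(fit)
    return nests[i], fit[i]

sphere = lambda x: float(np.sum((x - 0.5) ** 2))  # stand-in for the RFC misfit
x_best, f_best = cuckoo_search(sphere, np.zeros(4), np.ones(4))
print(np.round(x_best, 3), f_best)
```

In the RFC application, the objective would be the misfit between observed radar clutter and clutter predicted from a candidate refractivity profile.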
Significant wave heights from Sentinel-1 SAR: Validation and applications
NASA Astrophysics Data System (ADS)
Stopa, J. E.; Mouche, A.
2017-03-01
Two empirical algorithms are developed for wave mode images measured by the synthetic aperture radar aboard Sentinel-1A. The first method, called CWAVE_S1A, is an extension of previous efforts developed for ERS2, and the second method, called Fnn, uses the azimuth cutoff among other parameters to estimate significant wave heights (Hs) and average wave periods without using a modulation transfer function. Neural networks are trained using colocated data generated from WAVEWATCH III and independently verified with data from altimeters and in situ buoys. We use neural networks to capture the nonlinear relationships between the input SAR image parameters and the output geophysical wave parameters. CWAVE_S1A performs well and is more precise than Fnn, with Hs root mean square errors within 0.5 and 0.6 m, respectively. The developed neural networks extend the SAR's ability to retrieve useful wave information under a large range of environmental conditions, including extratropical and tropical cyclones, in which Hs estimation is traditionally challenging.
FRAMES-2.0 Software System: Frames 2.0 Pest Integration (F2PEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castleton, Karl J.; Meyer, Philip D.
2009-06-17
The implementation of the FRAMES 2.0 F2PEST module is described, including requirements, design, and specifications of the software. This module integrates the PEST parameter estimation software within the FRAMES 2.0 environmental modeling framework. A test case is presented.
Transfer-function-parameter estimation from frequency response data: A FORTRAN program
NASA Technical Reports Server (NTRS)
Seidel, R. C.
1975-01-01
A FORTRAN computer program designed to fit a linear transfer function model to given frequency response magnitude and phase data is presented. A conjugate gradient search is used that minimizes the integral of the squared magnitude of the error between the model and the data. The search is constrained to ensure model stability. Scaling the model parameters by their own magnitudes aids search convergence. Efficient computer algorithms result in a small and fast program suitable for a minicomputer. A sample problem with different model structures and parameter estimates is reported.
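The same fitting idea in a few lines of Python rather than FORTRAN: a second-order transfer function G(s) = K/(s² + a1·s + a0) is fitted to magnitude/phase data by conjugate-gradient minimization of the summed squared complex error. A log re-parameterization keeps all coefficients positive, playing the role of the stability constraint; the model structure and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

w = np.logspace(-1, 1, 40)                       # frequency grid, rad/s
G_true = 2.0 / ((1j * w) ** 2 + 0.8 * (1j * w) + 4.0)
mag, phase = np.abs(G_true), np.angle(G_true)    # "measured" magnitude/phase data

def model(p, w):
    K, a1, a0 = np.exp(p)                        # exp keeps all parameters positive
    return K / ((1j * w) ** 2 + a1 * (1j * w) + a0)

def cost(p):
    err = model(p, w) - mag * np.exp(1j * phase) # complex error at each frequency
    return np.sum(np.abs(err) ** 2)              # discretized integral of |error|^2

res = minimize(cost, x0=np.log([1.0, 1.0, 1.0]), method="CG")
print("K, a1, a0 =", np.round(np.exp(res.x), 3)) # should recover 2.0, 0.8, 4.0
```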
Single neuron modeling and data assimilation in BNST neurons
NASA Astrophysics Data System (ADS)
Farsian, Reza
Neurons, although tiny in size, are vastly complicated systems responsible for the most basic yet essential functions of any nervous system. Even the simplest models of single neurons are usually high dimensional and nonlinear, and contain many parameters and states that are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential for identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons are demonstrated: dynamical parameter estimation (DPE) and a Markov chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach that uses a path integral formulation to evaluate a mean and an error bound for the unobserved parameters and states. These methods have been applied to neurons in the bed nucleus of the stria terminalis (BNST) of rats. States and parameters of the neurons were estimated with both approaches, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. Knowledge of the biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron, in order to build robust models of neuron networks.
NASA Astrophysics Data System (ADS)
Mitishita, E.; Costa, F.; Martins, M.
2017-05-01
Photogrammetric and Lidar datasets should be in the same mapping or geodetic frame to be used simultaneously in an engineering project. Nowadays, direct sensor orientation is a common procedure in simultaneous photogrammetric and Lidar surveys. Although direct sensor orientation technologies provide a high degree of automation due to GNSS/INS technologies, the accuracies of the results obtained from photogrammetric and Lidar surveys depend on the quality of a group of parameters that accurately models the conditions of the system at the moment the job is performed. This paper presents a study performed to verify the importance of in situ camera calibration and Integrated Sensor Orientation without control points for increasing the accuracy of the integration of photogrammetric and Lidar datasets. The horizontal and vertical accuracies of the integration of photogrammetric and Lidar datasets by the photogrammetric procedure improved significantly when the Integrated Sensor Orientation (ISO) approach was performed using Interior Orientation Parameter (IOP) values estimated from the in situ camera calibration. The horizontal and vertical accuracies, estimated by the Root Mean Square Error (RMSE) of the 3D discrepancies from the Lidar check points, improved by around 37% and 198%, respectively.
Extended Kalman Filter framework for forecasting shoreline evolution
Long, Joseph; Plant, Nathaniel G.
2012-01-01
A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of combined model-data forecast uncertainty which is crucial for developing hazard vulnerability assessments, evaluation of prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and separating two scales of shoreline evolution using only one observable morphological quantity (i.e. shoreline position).
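A generic sketch of this assimilation pattern: augment the state with the unknown parameters and update both from sparse observations with an extended Kalman filter. The toy below lets a shoreline position y relax toward a forcing-driven equilibrium at an unknown rate k; the model, forcing, and all numbers are invented for illustration.

```python
import numpy as np

dt, n = 1.0, 200
rng = np.random.default_rng(2)
k_true = 0.08
eq = lambda t: 10.0 * np.sin(2 * np.pi * t / 50.0)   # equilibrium shoreline (assumed)

x = np.array([0.0, 0.02])          # augmented state [y, k], deliberately poor k
P = np.diag([4.0, 0.01])
Q = np.diag([0.05, 1e-6])          # small random walk on k keeps it adaptable
R, H = 1.0, np.array([[1.0, 0.0]]) # only the shoreline position is observed

y_true = 0.0
for t in range(n):
    y_true += dt * k_true * (eq(t) - y_true) + 0.1 * rng.normal()  # "nature" run
    # forecast step: y' = y + dt*k*(eq - y), k' = k, with its Jacobian F
    F = np.array([[1.0 - dt * x[1], dt * (eq(t) - x[0])],
                  [0.0, 1.0]])
    x = np.array([x[0] + dt * x[1] * (eq(t) - x[0]), x[1]])
    P = F @ P @ F.T + Q
    if t % 10 == 0:                                  # sparse observations
        z = y_true + np.sqrt(R) * rng.normal()
        S = H @ P @ H.T + R                          # innovation covariance
        K = P @ H.T / S                              # Kalman gain
        x = x + (K * (z - x[0])).ravel()
        P = (np.eye(2) - K @ H) @ P

print(f"estimated k = {x[1]:.3f} (true {k_true})")
```

The diagonal of P provides exactly the kind of combined model-data uncertainty estimate the abstract emphasizes, for the observed state and the unobserved parameter alike.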
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urbic, Tomaz, E-mail: tomaz.urbic@fkkt.uni-lj.si; Dias, Cristiano L.
The thermodynamic and structural properties of the planar soft-site dumbbell fluid are examined by Monte Carlo simulations and integral equation theory. The dimers are built of two Lennard-Jones segments. Site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions for a range of elongations and densities, and the results are compared with Monte Carlo simulations. The critical parameters for selected types of dimers were also estimated. We analyze the influence of the bond length on the critical point, and test the correctness of the site-site integral equation theory with different closures. The integral equations can be used to predict the phase diagram of dimers whose molecular parameters are known.
Modelling topographic potential for erosion and deposition using GIS
Helena Mitasova; Louis R. Iverson
1996-01-01
Modelling of erosion and deposition in complex terrain within a geographical information system (GIS) requires a high resolution digital elevation model (DEM), reliable estimation of topographic parameters, and formulation of erosion models adequate for digital representation of spatially distributed parameters. Regularized spline with tension was integrated within a...
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
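A sketch of that Bayesian fitting step using the emcee ensemble sampler (assumed available via pip) and a simple threshold yield form Y(E) = A·(sqrt(E) − sqrt(Eth)) above threshold. The functional form, data, uncertainties, and priors are illustrative stand-ins, not the survey's actual curve fits.

```python
import numpy as np
import emcee

E = np.array([30., 50., 80., 120., 200., 300.])     # ion energy, eV (invented)
Y = np.array([0.02, 0.10, 0.22, 0.35, 0.55, 0.75])  # sputter yield, atoms/ion (invented)
sigma = 0.2 * Y + 0.01                              # assumed measurement uncertainties

def model(theta, E):
    A, Eth = theta
    return A * np.clip(np.sqrt(E) - np.sqrt(Eth), 0.0, None)

def log_prob(theta):
    A, Eth = theta
    if not (0.0 < A < 1.0 and 0.0 < Eth < 100.0):   # flat priors inside a box
        return -np.inf
    r = (Y - model(theta, E)) / sigma
    return -0.5 * np.sum(r ** 2)                    # Gaussian log-likelihood

ndim, nwalkers = 2, 32
p0 = np.array([0.05, 20.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)   # drop burn-in, flatten walkers
print("A   = %.3f +/- %.3f" % (chain[:, 0].mean(), chain[:, 0].std()))
print("Eth = %.1f +/- %.1f eV" % (chain[:, 1].mean(), chain[:, 1].std()))
```

The posterior spread in the chain is what characterizes the yield-curve uncertainty for downstream integration analyses, rather than a single best-fit curve.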
ERIC Educational Resources Information Center
Byram, Jessica N.; Seifert, Mark F.; Brooks, William S.; Fraser-Cotlin, Laura; Thorp, Laura E.; Williams, James M.; Wilson, Adam B.
2017-01-01
With integrated curricula and multidisciplinary assessments becoming more prevalent in medical education, there is a continued need for educational research to explore the advantages, consequences, and challenges of integration practices. This retrospective analysis investigated the number of items needed to reliably assess anatomical knowledge in…
Stability and delay sensitivity of neutral fractional-delay systems.
Xu, Qi; Shi, Min; Wang, Zaihua
2016-08-01
This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is the calculation of the number of unstable characteristic roots, which is described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limit of the integral are given in two concise forms, one parameter-dependent and one parameter-independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using a rough integral estimate. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to changes in the time delays. Examples are given to demonstrate the proposed method as well as the delay sensitivity.
Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene
2010-01-01
Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345
Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio
2017-01-01
Recently-developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lends itself naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and, consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus
2013-04-01
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches to the regional scale still represents a major challenge, yet is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT measurements.
A robust nonlinear position observer for synchronous motors with relaxed excitation conditions
NASA Astrophysics Data System (ADS)
Bobtsov, Alexey; Bazylev, Dmitry; Pyrkin, Anton; Aranovskiy, Stanislav; Ortega, Romeo
2017-04-01
A robust, nonlinear and globally convergent rotor position observer for surface-mounted permanent magnet synchronous motors was recently proposed by the authors. The key feature of this observer is that it requires only the knowledge of the motor's resistance and inductance. Using some particular properties of the mathematical model, it is shown that the problem of state observation can be translated into one of estimating two constant parameters, which is carried out with a standard gradient algorithm. In this work, we propose to replace this estimator with a new one, called dynamic regressor extension and mixing, which has the following advantages with respect to gradient estimators: (1) the stringent persistence of excitation (PE) condition on the regressor is not necessary to ensure parameter convergence; (2) convergence is instead guaranteed under a non-square-integrability condition that has a clear physical meaning in terms of signal energy; (3) if the regressor is PE, the new observer (like the old one) ensures exponential convergence, entailing some robustness properties for the observer; (4) the new estimator includes an additional filter that constitutes an additional degree of freedom for satisfying the non-square-integrability condition. Realistic simulation results show significant performance improvement of the position observer using the new parameter estimator, with less oscillatory behaviour and a faster convergence speed.
NASA Astrophysics Data System (ADS)
Xu, Zheyao; Qi, Naiming; Chen, Yukun
2015-12-01
Spacecraft simulators are widely used to study the dynamics, guidance, navigation, and control of a spacecraft on the ground. A spacecraft simulator can have three rotational degrees of freedom by using a spherical air bearing to simulate a frictionless, micro-gravity space environment. The moment of inertia and center of mass are essential for the control system design of ground-based three-axis spacecraft simulators. Unfortunately, they cannot be known precisely. This paper presents two approaches to estimate the inertia parameters: a recursive least-squares (RLS) approach with tracking differentiators (TD) and an extended Kalman filter (EKF) method. The tracking differentiator filters the noise coupled with the measured signals and generates derivatives of the measured signals. A combination of two TD filters in series yields the angular accelerations that are required by RLS (TD-TD-RLS). Another method, which does not need to estimate the angular accelerations, uses the integrated form of the dynamics equation. An extended TD (ETD) filter, which can also generate the integral of a function of the signals, is presented for RLS (denoted ETD-RLS). States and inertia parameters are estimated simultaneously using the EKF. The observability is analyzed. All proposed methods are illustrated by simulations and experiments.
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using either of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than that of the SepKF, a much larger ensemble size was required, as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient, while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than a bias-unaware filter.
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-05-01
Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Qing; Wang, Jiang; Yu, Haitao
Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
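A minimal leaky integrate-and-fire simulation shows the response-system side of this setup. The drive, noise level, and membrane constants below are illustrative assumptions, not the paper's estimated input parameters:

    import numpy as np

    # LIF neuron: tau * dv/dt = -(v - v_rest) + mu + noise; spike and reset
    # whenever v crosses the threshold.
    dt, T = 1e-4, 1.0
    tau, v_rest, v_th, v_reset = 0.02, 0.0, 1.0, 0.0
    mu, sigma = 1.2, 0.5                           # constant drive and noise level
    rng = np.random.default_rng(0)

    v, spikes = v_rest, []
    for k in range(int(T / dt)):
        v += (-(v - v_rest) + mu) * dt / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        if v >= v_th:                              # threshold crossing -> spike
            spikes.append(k * dt)
            v = v_reset                            # reset after the spike

    print(f"{len(spikes)} spikes in {T} s")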
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress epileptic spikes in neural mass models, where the epileptiform spikes are recognized as biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all the epileptic spikes. The estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be suppressed immediately by adopting a proportional-integral controller. Numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM
NASA Astrophysics Data System (ADS)
Sheng, Hanlin; Zhang, Tianhong
2017-08-01
In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of aircraft engines, a GSA-LSSVM-based thrust estimator design is proposed. It builds on support vector regression (SVR), in the form of the least squares support vector machine (LSSVM), combined with a new optimization algorithm, the gravitational search algorithm (GSA), through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters more effectively and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus meets the needs of direct thrust control of aircraft engines.
Estimating Software Effort Hours for Major Defense Acquisition Programs
ERIC Educational Resources Information Center
Wallshein, Corinne C.
2010-01-01
Software Cost Estimation (SCE) uses labor hours or effort required to conceptualize, develop, integrate, test, field, or maintain program components. Department of Defense (DoD) SCE can use initial software data parameters to project effort hours for large, software-intensive programs for contractors reporting the top levels of process maturity,…
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
Estimating Tropical Cyclone Surface Wind Field Parameters with the CYGNSS Constellation
NASA Astrophysics Data System (ADS)
Morris, M.; Ruf, C. S.
2016-12-01
A variety of parameters can be used to describe the wind field of a tropical cyclone (TC). Of particular interest to the TC forecasting and research community are the maximum sustained wind speed (VMAX), radius of maximum wind (RMW), 34-, 50-, and 64-kt wind radii, and integrated kinetic energy (IKE). The RMW is the distance separating the storm center and the VMAX position. IKE integrates the square of the surface wind speed over the entire storm. These wind field parameters can be estimated from observations made by the Cyclone Global Navigation Satellite System (CYGNSS) constellation. The CYGNSS constellation consists of eight small satellites in a 35-degree inclination circular orbit. These satellites will be operating in standard science mode by the 2017 Atlantic TC season. CYGNSS will provide estimates of ocean surface wind speed under all precipitating conditions with high temporal and spatial sampling in the tropics. TC wind field data products can be derived from the level-2 CYGNSS wind speed product. CYGNSS-based TC wind field science data products are developed and tested in this paper. Prelaunch performance of these products is validated using a mission simulator.
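The wind-field parameters named above are straightforward to compute from a gridded wind speed field. The sketch below uses an idealized vortex and the common areal form of IKE (0.5 * rho * U^2 integrated over winds above gale force, per 1 m depth); the grid, thresholds, and vortex shape are assumptions for the example, not the CYGNSS product algorithms:

    import numpy as np

    # VMAX, RMW and IKE from a gridded storm wind field.
    rho, dx = 1.15, 5e3                            # air density (kg/m^3), grid (m)
    x = np.arange(-300e3, 300e3, dx)
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)                             # distance from storm center

    rmw_true, vmax_true = 40e3, 55.0               # idealized Rankine-like vortex
    U = np.where(r <= rmw_true,
                 vmax_true * r / rmw_true,
                 vmax_true * rmw_true / np.maximum(r, 1.0))

    vmax = U.max()                                 # maximum wind speed (m/s)
    rmw = r.flat[U.argmax()]                       # radius of maximum wind (m)
    gale = U >= 17.5                               # ~34 kt in m/s
    ike = 0.5 * rho * np.sum(U[gale] ** 2) * dx * dx  # J, per 1 m depth

    print(f"VMAX={vmax:.1f} m/s, RMW={rmw/1e3:.0f} km, IKE={ike/1e12:.1f} TJ")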
Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie
2016-03-01
Previous studies showed that vascular parameters, both morphological and topological, are affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters to resolution nor the distinguishability of vascular parameters between different data groups has been analyzed. In this paper, we propose a quantitative analysis method for the vascular parameters of multi-resolution vascular networks, analyzing the sensitivity of the parameters across resolutions and estimating their distinguishability between data groups. Combining sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters, nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction number and segment number were five parameters that could better distinguish the vascular networks from different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution changes but also better distinguished vascular networks from different groups, providing guidance for the quantification of vascular networks in multi-resolution frameworks.
Application of parameter estimation to aircraft stability and control: The output-error approach
NASA Technical Reports Server (NTRS)
Maine, Richard E.; Iliff, Kenneth W.
1986-01-01
The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until a predetermined convergence criterion is satisfied.
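A compact sketch of the iteration described above, fitting y = a*exp(-b*t) by repeated linearization (Gauss-Newton). The data and starting values are synthetic assumptions for the example:

    import numpy as np

    # Taylor-series linearization: solve J * delta = y - f for the correction
    # delta at each cycle and update the nominal parameter estimates.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4, 50)
    a_true, b_true = 3.0, 0.8
    y = a_true * np.exp(-b_true * t) + 0.05 * rng.normal(size=t.size)

    a, b = y[0], 0.5                               # initial nominal estimates
    for _ in range(20):
        f = a * np.exp(-b * t)
        J = np.stack([np.exp(-b * t),              # df/da
                      -a * t * np.exp(-b * t)], 1) # df/db
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + delta[0], b + delta[1]          # apply the correction
        if np.linalg.norm(delta) < 1e-10:          # convergence criterion
            break

    print(f"a={a:.3f}, b={b:.3f}")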
DOE Office of Scientific and Technical Information (OSTI.GOV)
JaeHwa Koh; DuckJoo Yoon; Chang H. Oh
2010-07-01
An electrolyzer model for the analysis of a hydrogen-production system using a solid oxide electrolysis cell (SOEC) has been developed, and the effects of the principal parameters have been estimated through sensitivity studies based on the developed model. The main parameters considered are current density, area-specific resistance, temperature, pressure, and the molar fractions and flow rates at the inlet and outlet. Finally, a simple model of a high-temperature hydrogen-production system using the solid oxide electrolysis cell integrated with very-high-temperature reactors is assessed.
Scale-Dependent Solute Dispersion in Variably Saturated Porous Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockhold, Mark L.; Zhang, Z. F.; Bott, Yi-Ju
2016-03-29
This work was performed to support performance assessment (PA) calculations for the Integrated Disposal Facility (IDF) at the Hanford Site. PA calculations require defensible estimates of physical, hydraulic, and transport parameters to simulate subsurface water flow and contaminant transport in both the near- and far-field environments. Dispersivity is one of the required transport parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Peiffer, Loic; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher (1984, Geochim. Cosmochim. Acta 46, 513–528) into a stand-alone computer program, to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and the geothermometry computations, are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization using existing parameter estimation software, such as iTOUGH2, PEST, or UCODE. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss.
NASA Astrophysics Data System (ADS)
Barón-Aznar, C.; Moreno-Jiménez, S.; Celis, M. A.; Lárraga-Gutiérrez, J. M.; Ballesteros-Zebadúa, P.
2008-08-01
Integrated dose is the total energy delivered to a radiotherapy target. This physical parameter could be a predictor of complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
ERIC Educational Resources Information Center
Nevitt, Johnathan; Hancock, Gregory R.
Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…
Static shape control for flexible structures
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
An integrated methodology is described for defining static shape control laws for large flexible structures. The techniques include modeling, identifying and estimating the control laws of distributed systems characterized in terms of infinite-dimensional state and parameter spaces. The models are expressed as interconnected elliptic partial differential equations governing a range of static loads, with the capability of analyzing electromagnetic fields around antenna systems. A second-order analysis is carried out for statistical errors, and model parameters are determined by maximizing an appropriately defined likelihood functional which adjusts the model to observational data. The parameter estimates are derived from the conditional mean of the observational data, resulting in a least squares superposition of shape functions obtained from the structural model.
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at present are not sufficiently accurate. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering effort. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary, simplified methodology for predicting dust source term production in future devices is presented.
NASA Astrophysics Data System (ADS)
Polack, J. K.; Flaska, M.; Enqvist, A.; Sosa, C. S.; Lawrence, C. C.; Pozzi, S. A.
2015-09-01
Organic scintillators are frequently used for measurements that require sensitivity to both photons and fast neutrons because of their pulse shape discrimination capabilities. In these measurement scenarios, particle identification is commonly handled using the charge-integration pulse shape discrimination method. This method works particularly well for high-energy depositions, but is prone to misclassification for relatively low-energy depositions. A novel algorithm has been developed for automatically performing charge-integration pulse shape discrimination in a consistent and repeatable manner. The algorithm is able to estimate the photon and neutron misclassification corresponding to the calculated discrimination parameters, and is capable of doing so using only the information measured by a single organic scintillator. This paper describes the algorithm and assesses its performance by comparing algorithm-estimated misclassification to values computed via a more traditional time-of-flight (TOF) estimation. A single data set was processed using four different low-energy thresholds: 40, 60, 90, and 120 keVee. Overall, the results compared well between the two methods; in most cases, the algorithm-estimated values fell within the uncertainties of the TOF-estimated values.
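The discrimination parameter at the heart of charge-integration pulse shape discrimination is simply a ratio of two gated integrals of each digitized pulse. The sketch below illustrates this on two-exponential toy pulses; the gate positions and decay constants are assumptions for the example, not the paper's automatically chosen values:

    import numpy as np

    # Tail-to-total charge ratio: neutrons deposit more light in the slow
    # scintillation component, so their ratio is larger.
    def tail_total_ratio(pulse, i_peak, total_gate=(-10, 90), tail_gate=(15, 90)):
        total = pulse[i_peak + total_gate[0]: i_peak + total_gate[1]].sum()
        tail = pulse[i_peak + tail_gate[0]: i_peak + tail_gate[1]].sum()
        return tail / total

    t = np.arange(200)
    def toy_pulse(slow_frac, t0=50):               # fast + slow decay components
        return np.where(t < t0, 0.0,
                        (1 - slow_frac) * np.exp(-(t - t0) / 7.0)
                        + slow_frac * np.exp(-(t - t0) / 60.0))

    for name, frac in [("photon-like", 0.05), ("neutron-like", 0.30)]:
        p = toy_pulse(frac)
        print(name, round(tail_total_ratio(p, int(p.argmax())), 3))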
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to the parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
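The computational saving comes from replacing the fine time discretization of the intensity integral in the point-process log-likelihood with a q-point quadrature rule. A sketch under an assumed (illustrative) log-linear intensity model:

    import numpy as np

    # Continuous-time point-process log-likelihood
    #   L(theta) = sum_i log lambda(s_i) - integral_0^T lambda(t) dt,
    # with the integral evaluated by Gauss-Legendre quadrature of order q.
    T, q = 10.0, 60                                # window length, quadrature order

    def log_intensity(t, theta):                   # illustrative smooth model
        return theta[0] + theta[1] * np.sin(2 * np.pi * t / T)

    nodes, weights = np.polynomial.legendre.leggauss(q)
    t_q = 0.5 * T * (nodes + 1.0)                  # map [-1, 1] -> [0, T]
    w_q = 0.5 * T * weights

    def loglik(theta, spike_times):
        integral = np.sum(w_q * np.exp(log_intensity(t_q, theta)))
        return np.sum(log_intensity(spike_times, theta)) - integral

    spikes = np.sort(np.random.default_rng(0).uniform(0, T, 25))
    print(loglik(np.array([1.0, 0.3]), spikes))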
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set, as measured by period recovery rate and the quality of the resulting period-luminosity relations.
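The grid-search stage can be illustrated with an ordinary sinusoidal least-squares fit evaluated over a dense frequency grid (essentially a periodogram); the Gaussian process component, priors, and quasi-Newton refinement of the full model are omitted, and the light curve below is synthetic:

    import numpy as np

    # For each trial frequency, fit mean + sine + cosine by linear least
    # squares and keep the frequency with the smallest residual sum of squares.
    rng = np.random.default_rng(2)
    p_true = 310.0                                 # days, Mira-like period
    t = np.sort(rng.uniform(0, 2000, 60))          # sparse sampling epochs
    y = (10.0 + 1.5 * np.sin(2 * np.pi * t / p_true + 0.4)
         + 0.2 * rng.normal(size=t.size))

    freqs = np.linspace(1 / 1000, 1 / 100, 20000)  # dense frequency grid
    best_rss, best_p = np.inf, None
    for f in freqs:
        X = np.stack([np.ones_like(t), np.sin(2 * np.pi * f * t),
                      np.cos(2 * np.pi * f * t)], 1)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_p = rss, 1 / f

    print(f"estimated period: {best_p:.1f} d (true {p_true})")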
NASA Technical Reports Server (NTRS)
Oshman, Yaakov; Markley, Landis
1998-01-01
A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. Key features: integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) that are subject to statistical sampling errors due to the Poisson-distributed fluctuations of the number of particles sampled in each size interval, with the associated variances weighted in proportion to their contribution to the integral parameter being measured. Universal curves are presented that are applicable to the exponential size distribution, permitting FSD estimation for any parameter with n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall-speed law.
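The sampling-error mechanism is easy to verify numerically: draw Poisson counts per size bin about an exponential spectrum, form the integrated property, and compare its Monte Carlo FSD with the weighted-variance expression. The spectrum, bin layout, and n below are assumptions for the example:

    import numpy as np

    # For X = sum_i w_i * N_i with independent N_i ~ Poisson(m_i) and
    # weights w_i = c * D_i^n, FSD = sqrt(sum m_i w_i^2) / sum(m_i w_i).
    rng = np.random.default_rng(3)
    D = np.arange(0.1, 6.0, 0.1)                   # size bins (e.g., mm)
    dD, N0, Lam = 0.1, 8000.0, 2.0
    m = N0 * np.exp(-Lam * D) * dD                 # expected count per bin
    n, c = 6.0, 1.0                                # n = 6: reflectivity-like moment
    w = c * D**n

    X = (rng.poisson(m, size=(20000, D.size)) * w).sum(axis=1)
    fsd_mc = X.std() / X.mean()
    fsd_an = np.sqrt(np.sum(m * w**2)) / np.sum(m * w)
    print(f"Monte Carlo FSD: {fsd_mc:.3f}, analytic: {fsd_an:.3f}")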
Design and parameter estimation of hybrid magnetic bearings for blood pump applications
NASA Astrophysics Data System (ADS)
Lim, Tau Meng; Zhang, Dongsheng; Yang, Juanjuan; Cheng, Shanbao; Low, Sze Hsien; Chua, Leok Poh; Wu, Xiaowei
2009-10-01
This paper discusses the design and parameter estimation of the dynamic characteristics of a high-speed hybrid magnetic bearings (HMBs) system for axial-flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet (PM) brushless and sensorless DC motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative (PID) controllers, among which four radial directions are actively controlled and one axial direction is passively controlled. Test results show that the rotor can be stably supported to speeds of 14,000 rpm. The frequency-domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMBs system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. The radial stiffness of the HMBs is compared to Ansoft's Maxwell 2D/3D finite element magnetostatic results. Experimental estimation showed that the dynamic characteristics of the HMBs system are dominated by the frequency-dependent stiffness coefficients. The actuator gain was also successfully calibrated, and the parameter estimation technique developed in this study may potentially be extended to the identification and monitoring of the pump's dynamic properties under normal operating conditions with fluid.
Direct Sensor Orientation of a Land-Based Mobile Mapping System
Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua
2011-01-01
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This limitation is the major drawback due to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015
Faugeras, Blaise; Maury, Olivier
2005-10-01
We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographic and biological data; thus it naturally reacts to environmental changes. We first formulate an initial-boundary value problem and prove the existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code, which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length-frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally, twin experiments in which perturbed parameters are recovered from simulated data are successfully conducted.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth gravitational products have found the widest possible multidisciplinary applications in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the condition of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive globally uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are globally uniformly convergent, theoretically speaking, they are able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters or, equivalently, as the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and index data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan
2014-09-01
Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus attention on high-risk situations and guide the development of safety countermeasures. To understand the relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate, since they can simultaneously consider the correlation among specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate data is that the number of crash-free samples increases as crash counts are divided into many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of MZINB and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types.
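As a simplified, single-equation illustration of the zero-inflated negative binomial likelihood underlying such models (not the multivariate random-parameters Bayesian formulation of the paper), a ZINB regression can be fit by maximum likelihood with generic optimization; the data below are synthetic:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit, gammaln

    # ZINB: P(y=0) = pi + (1-pi)*NB(0); P(y=k>0) = (1-pi)*NB(k),
    # with NB mean mu = exp(X beta) and size (inverse dispersion) r.
    def zinb_negloglik(params, y, X):
        beta, logit_pi, log_r = params[:-2], params[-2], params[-1]
        pi, r = expit(logit_pi), np.exp(log_r)
        mu = np.exp(X @ beta)
        ll_nb = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                 + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
        nb_zero = np.exp(r * np.log(r / (r + mu)))      # NB probability of zero
        ll = np.where(y == 0, np.log(pi + (1 - pi) * nb_zero),
                      np.log(1 - pi) + ll_nb)
        return -np.sum(ll)

    rng = np.random.default_rng(4)
    n = 2000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    mu = np.exp(X @ np.array([0.5, 0.3]))
    y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))    # NB counts, size r = 2
    y[rng.uniform(size=n) < 0.3] = 0                    # inflate zeros

    res = minimize(zinb_negloglik, np.zeros(4), args=(y, X), method="BFGS")
    print(res.x)                                        # beta0, beta1, logit_pi, log_r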
Use of the Kalman Filter for Aortic Pressure Waveform Noise Reduction
Lu, Hsiang-Wei; Wu, Chung-Che; Aliyazicioglu, Zekeriya; Kang, James S.
2017-01-01
Clinical applications that require extraction and interpretation of physiological signals or waveforms are susceptible to corruption by noise or artifacts. Real-time hemodynamic monitoring systems are important for clinicians to assess the hemodynamic stability of surgical or intensive care patients by interpreting hemodynamic parameters generated by an analysis of aortic blood pressure (ABP) waveform measurements. Since hemodynamic parameter estimation algorithms often detect events and features from measured ABP waveforms to generate hemodynamic parameters, noise and artifacts integrated into ABP waveforms can severely distort the interpretation of hemodynamic parameters by hemodynamic algorithms. In this article, we propose the use of the Kalman filter and the 4-element Windkessel model with static parameters, arterial compliance C, peripheral resistance R, aortic impedance r, and the inertia of blood L, to represent aortic circulation for generating accurate estimations of ABP waveforms through noise and artifact reduction. Results show that the Kalman filter could very effectively eliminate noise and generate a good estimate from the noisy ABP waveform based on the past state history. The power spectra of the measured and synthesized ABP waveforms show similar harmonic frequencies. PMID:28611850
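A generic Kalman-filter denoising pass over a synthetic pressure-like waveform illustrates the state-estimation step; the local-trend (constant-velocity) state model and tuning used here are stand-in assumptions, not the paper's 4-element Windkessel formulation:

    import numpy as np

    # Track [pressure, pressure rate] with a constant-velocity model and
    # fuse noisy samples; Q is tuned large enough to follow the pulse shape.
    rng = np.random.default_rng(5)
    dt = 0.005                                     # 200 Hz sampling
    t = np.arange(0, 5, dt)
    clean = 90 + 25 * np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) ** 2
    z = clean + 4.0 * rng.normal(size=t.size)      # noisy "measured" ABP (mmHg)

    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.array([[dt**4 / 4, dt**3 / 2],
                  [dt**3 / 2, dt**2]]) * 2e9       # white-acceleration model
    H = np.array([[1.0, 0.0]])
    R = 4.0**2
    x, P = np.array([z[0], 0.0]), np.eye(2) * 10.0

    est = np.empty_like(z)
    for k, zk in enumerate(z):
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = (P @ H.T) / S                          # update
        x = x + (K * (zk - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est[k] = x[0]

    print("RMSE noisy:", np.sqrt(np.mean((z - clean) ** 2)).round(2),
          "filtered:", np.sqrt(np.mean((est - clean) ** 2)).round(2))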
Demography of the Pacific walrus (Odobenus rosmarus divergens): 1974-2006
Taylor, Rebecca L.; Udevitz, Mark S.
2015-01-01
Global climate change may fundamentally alter population dynamics of many species for which baseline population parameter estimates are imprecise or lacking. Historically, the Pacific walrus is thought to have been limited by harvest, but it may become limited by global warming-induced reductions in sea ice. Loss of sea ice, on which walruses rest between foraging bouts, may reduce access to food, thus lowering vital rates. Rigorous walrus survival rate estimates do not exist, and other population parameter estimates are out of date or have well-documented bias and imprecision. To provide useful population parameter estimates we developed a Bayesian, hidden process demographic model of walrus population dynamics from 1974 through 2006 that combined annual age-specific harvest estimates with five population size estimates, six standing age structure estimates, and two reproductive rate estimates. Median density independent natural survival was high for juveniles (0.97) and adults (0.99), and annual density dependent vital rates rose from 0.06 to 0.11 for reproduction, 0.31 to 0.59 for survival of neonatal calves, and 0.39 to 0.85 for survival of older calves, concomitant with a population decline. This integrated population model provides a baseline for estimating changing population dynamics resulting from changing harvests or sea ice.
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
NASA Astrophysics Data System (ADS)
Van Zeebroeck, M.; Tijskens, E.; Van Liedekerke, P.; Deli, V.; De Baerdemaeker, J.; Ramon, H.
2003-09-01
A pendulum device has been developed to measure the contact force, displacement and displacement rate of an impactor during its impact on a sample. Displacement, classically measured by double integration of an accelerometer signal, was determined in an alternative way using a more accurate incremental optical encoder. The parameters of the Kuwabara-Kono contact force model for the impact of spheres have been estimated using an optimization method, taking the experimentally measured displacement, displacement rate and contact force into account. The accuracy of the method was verified using a rubber ball. Contact force parameters for the Kuwabara-Kono model were estimated successfully for three biological materials: apples, tomatoes and potatoes. The variability in the parameter estimates for the biological materials was quite high and can be explained by geometric differences (radius of curvature) and by biological variation in mechanical tissue properties.
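The optimization step can be sketched with a least-squares fit of the Kuwabara-Kono force law, commonly written F = k*d^1.5 + lam*sqrt(d)*v for displacement d and displacement rate v. The synthetic impact record and starting values below are assumptions for the example, not the pendulum measurements of the study:

    import numpy as np
    from scipy.optimize import least_squares

    # Fit (k, lam) so the model force matches the "measured" force history.
    rng = np.random.default_rng(6)
    tt = np.linspace(0, 2e-3, 200)                 # 2 ms impact
    d = 1e-3 * np.sin(np.pi * tt / 2e-3)           # displacement (m), half-sine
    v = np.gradient(d, tt)                         # displacement rate (m/s)
    k_true, lam_true = 5e7, 2e4
    F = k_true * d**1.5 + lam_true * np.sqrt(d) * v
    F = F + 0.02 * F.std() * rng.normal(size=tt.size)

    def residual(p):
        k, lam = p
        return k * d**1.5 + lam * np.sqrt(d) * v - F

    fit = least_squares(residual, x0=[1e7, 1e4])
    print("k, lam:", fit.x)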
Wang, Yong
2015-01-01
A novel radar imaging approach for non-uniformly rotating targets is proposed in this study. It is assumed that the maneuverability of the non-cooperative target is severe, and the received signal in a range cell can be modeled as multi-component amplitude-modulated and frequency-modulated (AM-FM) signals after motion compensation. Then, the modified version of Chirplet decomposition (MCD) based on the integrated high order ambiguity function (IHAF) is presented for the parameter estimation of AM-FM signals, and the corresponding high quality instantaneous ISAR images can be obtained from the estimated parameters. Compared with the MCD algorithm based on the generalized cubic phase function (GCPF) in the authors’ previous paper, the novel algorithm presented in this paper is more accurate and efficient, and the results with simulated and real data demonstrate the superiority of the proposed method. PMID:25806870
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification (MUSIC), matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is the optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from the transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it further estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation; consequently, the time-domain signal is not distorted by forward and inverse frequency-domain transformations.
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar-estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using regression analysis to generate the equation relating the ATI to rain volume; this relation is then employed to compute the rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baron-Aznar, C.; Moreno-Jimenez, S.; Celis, M. A.
2008-08-11
Integrated dose is the total energy delivered to a radiotherapy target. This physical parameter could be a predictor of complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
Kim, Hyoungkyu; Hudetz, Anthony G.; Lee, Joseph; Mashour, George A.; Lee, UnCheol; Avidan, Michael S.
2018-01-01
The integrated information theory (IIT) proposes a quantitative measure, denoted as Φ, of the amount of integrated information in a physical system, which is postulated to have an identity relationship with consciousness. IIT predicts that the value of Φ estimated from brain activities represents the level of consciousness across phylogeny and functional states. Practical limitations, such as the explosive computational demands required to estimate Φ for real systems, have hindered its application to the brain and raised questions about the utility of IIT in general. To achieve practical relevance for studying the human brain, it will be beneficial to establish the reliable estimation of Φ from multichannel electroencephalogram (EEG) and define the relationship of Φ to EEG properties conventionally used to define states of consciousness. In this study, we introduce a practical method to estimate Φ from high-density (128-channel) EEG and determine the contribution of each channel to Φ. We examine the correlation of power, frequency, functional connectivity, and modularity of EEG with regional Φ in various states of consciousness as modulated by diverse anesthetics. We find that our approximation of Φ alone is insufficient to discriminate certain states of anesthesia. However, a multi-dimensional parameter space extended by four parameters related to Φ and EEG connectivity is able to differentiate all states of consciousness. The association of Φ with EEG connectivity during clinically defined anesthetic states represents a new practical approach to the application of IIT, which may be used to characterize various physiological (sleep), pharmacological (anesthesia), and pathological (coma) states of consciousness in the human brain. PMID:29503611
Kim, Hyoungkyu; Hudetz, Anthony G; Lee, Joseph; Mashour, George A; Lee, UnCheol
2018-01-01
The integrated information theory (IIT) proposes a quantitative measure, denoted as Φ, of the amount of integrated information in a physical system, which is postulated to have an identity relationship with consciousness. IIT predicts that the value of Φ estimated from brain activities represents the level of consciousness across phylogeny and functional states. Practical limitations, such as the explosive computational demands required to estimate Φ for real systems, have hindered its application to the brain and raised questions about the utility of IIT in general. To achieve practical relevance for studying the human brain, it will be beneficial to establish the reliable estimation of Φ from multichannel electroencephalogram (EEG) and define the relationship of Φ to EEG properties conventionally used to define states of consciousness. In this study, we introduce a practical method to estimate Φ from high-density (128-channel) EEG and determine the contribution of each channel to Φ. We examine the correlation of power, frequency, functional connectivity, and modularity of EEG with regional Φ in various states of consciousness as modulated by diverse anesthetics. We find that our approximation of Φ alone is insufficient to discriminate certain states of anesthesia. However, a multi-dimensional parameter space extended by four parameters related to Φ and EEG connectivity is able to differentiate all states of consciousness. The association of Φ with EEG connectivity during clinically defined anesthetic states represents a new practical approach to the application of IIT, which may be used to characterize various physiological (sleep), pharmacological (anesthesia), and pathological (coma) states of consciousness in the human brain.
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
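A minimal sketch of the estimation/inspection split: logistic-regression parameters are learned by maximum likelihood (plain gradient ascent on the log-likelihood here), then used to score how plausible a value written to a control variable is. The feature construction, training data, and decision values below are illustrative assumptions, not the paper's exact formulation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Estimation phase: labeled writes to one control variable
    rng = np.random.default_rng(7)
    normal = rng.normal(50.0, 3.0, 500)            # setpoints usually near 50
    abnormal = rng.uniform(0.0, 100.0, 100)        # arbitrary injected values
    vals = np.concatenate([normal, abnormal])
    labels = np.concatenate([np.ones(500), np.zeros(100)])
    z = (vals - 50.0) / 10.0                       # standardized feature
    X = np.column_stack([np.ones_like(z), z, z**2])

    w = np.zeros(3)
    for _ in range(5000):                          # maximum-likelihood ascent
        p = sigmoid(X @ w)
        w += 0.05 * X.T @ (labels - p) / len(z)    # averaged gradient step

    # Inspection phase: normalcy probability of an incoming packet's value
    def normalcy(value):
        u = (value - 50.0) / 10.0
        return float(sigmoid(np.array([1.0, u, u**2]) @ w))

    for v in (51.0, 85.0):
        print(v, "->", round(normalcy(v), 3))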
iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] in a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique for estimating parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the conflicting training data sets while simultaneously yielding parameter sets that performed well on each individual objective function. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
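The core mechanic, Pareto ranking inside a simulated-annealing acceptance rule, can be sketched in a few lines. This is an illustrative Python analogue of the idea, not JuPOETs itself (which is a Julia package); the function names are ours.

```python
import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in one
    return np.all(a <= b) and np.any(a < b)

def pareto_rank(candidate, archive):
    # rank = number of archived objective vectors that dominate the candidate
    return sum(dominates(f, candidate) for f in archive)

def accept(rank_new, rank_old, T, rng):
    # simulated-annealing acceptance applied to Pareto rank instead of a scalar cost
    return rank_new <= rank_old or rng.random() < np.exp(-(rank_new - rank_old) / T)

archive = [np.array([1.0, 2.0]), np.array([2.0, 1.0])]
print(pareto_rank(np.array([1.5, 1.5]), archive))  # 0: non-dominated
print(pareto_rank(np.array([3.0, 3.0]), archive))  # 2: dominated by both
```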
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
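The marginal likelihood at the heart of the N-mixture model is easy to write down for one site: sum the binomial likelihood of the replicated counts over a (truncated) Poisson prior on the latent abundance N. A minimal numerical sketch, with illustrative parameter values:

```python
import numpy as np
from scipy import stats

def site_log_likelihood(counts, lam, p, n_max=200):
    # Sum the binomial likelihood of the replicate counts over a truncated
    # Poisson(lam) prior on the latent site abundance N.
    n = np.arange(max(counts), n_max + 1)
    prior = stats.poisson.pmf(n, lam)                           # P(N = n)
    detect = np.prod([stats.binom.pmf(c, n, p) for c in counts], axis=0)
    return np.log(np.sum(prior * detect))

# Three replicate counts at one site, with illustrative parameter values.
print(site_log_likelihood([3, 5, 2], lam=6.0, p=0.5))
```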
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 2
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1989-01-01
The Control/Structures Integration Program is discussed, along with a survey of available software for control of flexible structures, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software.
The presentation shows how a multi-objective optimization method is integrated into a transport simulator (MT3D) for estimating parameters and cost of in-situ bioremediation technology to treat perchlorate-contaminated groundwater.
NASA Technical Reports Server (NTRS)
Mullins, N. E.; Dao, N. C.; Martin, T. V.; Goad, C. C.; Boulware, N. L.; Chin, M. M.
1972-01-01
A computer program for executive control routine for orbit integration of artificial satellites is presented. At the beginning of each arc, the program initiates required constants as well as the variational partials at epoch. If epoch needs to be reset to a previous time, the program negates the stepsize, and calls for integration backward to the desired time. After backward integration is completed, the program resets the stepsize to the proper positive quantity.
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian-consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem, from which we derive a new dual-type EnKF, the dual EnKF-OSA. Compared with the standard dual EnKF, it imposes a new update step on the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF-OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25 % more accurate state and parameter estimations than the joint and dual approaches.
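For orientation, the basic stochastic-EnKF update on an augmented state-parameter vector (the joint strategy the paper compares against) can be sketched as follows; the OSA smoothing step that distinguishes the dual EnKF-OSA is not reproduced here, and the observation setup is illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    # ensemble: (n_members, n_aug) rows of augmented state-parameter vectors
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                            # sample covariance
    S = H @ P @ H.T + obs_var * np.eye(len(obs))     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(100, 3))        # e.g. [head_1, head_2, log-conductivity]
H = np.array([[1.0, 0.0, 0.0]])        # observe the first head only
updated = enkf_update(ens, np.array([0.5]), H, obs_var=0.01, rng=rng)
print(updated.mean(axis=0))
```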
Costing for the Future: Exploring Cost Estimation With Unmanned Autonomous Systems
2016-04-30
account for how cost estimating for autonomy is different than current methodologies and to suggest ways it can be addressed through the integration and...The Development stage involves refining the system requirements, creating a solution description, and building a system. 3. The Operational Test...parameter describes the extent to which efficient fabrication methodologies and processes are used, and the automation of labor-intensive operations
Human Systems Integration (HSI) in Acquisition. HSI Domain Guide
2009-08-01
job simulation that includes posture data, force parameters, and anthropometry. Output includes the percentage of men and women who have the strength...
Evolutionary optimization with data collocation for reverse engineering of biological networks.
Tsai, Kuan-Yao; Wang, Feng-Sheng
2005-04-01
Modern experimental biology is moving away from analyses of single elements to whole-organism measurements. Such measured time-course data contain a wealth of information about the structure and dynamics of the pathway or network. Dynamic modeling of whole systems is formulated as an inverse problem that requires a well-suited mathematical model and a very efficient computational method to identify the model structure and parameters. Numerical integration of differential equations and finding globally optimal parameter values remain two major challenges in the parameter estimation of nonlinear dynamic biological systems. We compare three techniques of parameter estimation for nonlinear dynamic biological systems. In the proposed scheme, the modified collocation method is applied to convert the differential equations to a system of algebraic equations. The observed time-course data are then substituted into the algebraic system equations to decouple system interactions in order to obtain the approximate model profiles. Hybrid differential evolution (HDE) with a population size of five is able to find a global solution. The method is not only suited for parameter estimation but can also be applied to structure identification. The solution obtained by HDE is then used as the starting point for a local search method to yield the refined estimates.
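The decoupling trick is the part worth illustrating: once slopes are estimated from the data, each rate equation can be fitted independently as an algebraic residual. A minimal sketch using central differences in place of the modified collocation and SciPy's differential evolution in place of the authors' hybrid DE:

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 5.0, 30)
x_obs = 2.0 * np.exp(-0.7 * t)          # synthetic time course, true decay rate 0.7
slopes = np.gradient(x_obs, t)          # data-based estimate of dx/dt

def residual(params):
    k = params[0]
    # Decoupled algebraic residual: slopes should match f(x, k) = -k * x
    return np.sum((slopes - (-k * x_obs)) ** 2)

result = differential_evolution(residual, bounds=[(0.0, 5.0)], seed=1)
print(result.x)                         # estimate close to 0.7 (edge effects aside)
```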
Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.
2010-12-01
There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we now can test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, turn-over algorithms or parameters.
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
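The dense-grid part of the hybrid search is straightforward to sketch: at each trial period, the sinusoidal basis coefficients can be profiled out by linear least squares and the period with the smallest residual kept. This toy version omits the Gaussian process component and the quasi-Newton refinement:

```python
import numpy as np

def estimate_period(t, y, trial_periods):
    # Profile out the sinusoid coefficients at each trial period by linear
    # least squares; keep the period with the smallest residual sum of squares.
    best_period, best_rss = None, np.inf
    for P in trial_periods:
        w = 2.0 * np.pi / P
        A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        if rss < best_rss:
            best_period, best_rss = P, rss
    return best_period

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 1000.0, 60))          # sparse, irregular sampling
y = np.sin(2.0 * np.pi * t / 300.0) + 0.1 * rng.normal(size=60)
print(estimate_period(t, y, np.linspace(100.0, 500.0, 2000)))  # ~300
```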
NASA Astrophysics Data System (ADS)
de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.
2010-10-01
In the [eV;MeV] energy range, modelling of neutron-induced reactions is based on nuclear reaction models with adjustable parameters. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs have been requested by nuclear reactor physicists to assess proper uncertainties to be used in applications. In this paper, mathematical methods developed in the CONRAD code [2] will be presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and to propagate them to nuclear reaction model parameters or cross sections. The marginalization procedure will thus be presented, using analytical or Monte-Carlo solutions. Furthermore, one major drawback identified by reactor physicists is the fact that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account sufficiently early in the evaluation process to remove discrepancies. In this paper, we will describe a mathematical framework to take this kind of information into account properly.
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2010-09-01
The transport phenomena based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, the values of which are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter
2016-05-01
Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.
An inexpensive technique for the time resolved laser induced plasma spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Rizwan; Ahmed, Nasar; Iqbal, J.
We present an efficient and inexpensive method for calculating the time resolved emission spectrum from the time integrated spectrum by monitoring the time evolution of neutral and singly ionized species in the laser produced plasma. To validate our assertion of extracting time resolved information from the time integrated spectrum, the time evolution data of the Cu II line at 481.29 nm and the molecular bands of AlO in the wavelength region (450–550 nm) have been studied. The plasma parameters were also estimated from the time resolved and time integrated spectra. A comparison of the results clearly reveals that time resolved information about the plasma parameters can be extracted from spectra registered with a time integrated spectrograph. Our proposed method will make laser induced plasma spectroscopy a robust and low cost technique that is attractive for industry and environmental monitoring.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
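The contrast the paper draws can be reproduced on a toy linear-reservoir balance dS/dt = P - kS: a coarse first-order explicit Euler scheme visibly misses the transient that an adaptive, error-controlled integrator captures. A minimal sketch (the model and step size are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, P_in = 2.0, 1.0
f = lambda t, S: P_in - k * S        # linear reservoir: dS/dt = P - k*S

# First-order, explicit, fixed-step (Euler) integration to t = 1 with dt = 0.5.
S, dt = 0.0, 0.5
for _ in range(2):
    S = S + dt * f(0.0, S)

# Adaptive-step integration with error control.
sol = solve_ivp(f, (0.0, 1.0), [0.0], rtol=1e-8, atol=1e-10)

exact = (1.0 - np.exp(-k)) * P_in / k
print(S, sol.y[0, -1], exact)        # Euler overshoots; adaptive matches exact
```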
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
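A bare-bones version of the sampler conveys the idea: maintain a pool of "live" points drawn from the prior, repeatedly discard the lowest-likelihood point while shrinking the prior volume, and accumulate the evidence. The toy problem and the rejection-sampling replacement step below are illustrative; production implementations use far smarter constrained moves.

```python
import numpy as np

rng = np.random.default_rng(3)
loglike = lambda x: -0.5 * ((x - 0.3) / 0.1) ** 2   # unnormalised Gaussian log-likelihood

N = 200
live = rng.uniform(0.0, 1.0, N)                     # live points from the U(0,1) prior
L = loglike(live)
logZ = -np.inf
logw = np.log(1.0 - np.exp(-1.0 / N))               # width of the first prior-volume shell

for _ in range(1000):
    worst = np.argmin(L)
    logZ = np.logaddexp(logZ, logw + L[worst])      # accumulate evidence
    # Replace the worst point with a prior draw at higher likelihood
    # (plain rejection here; real samplers use constrained moves).
    while True:
        x = rng.uniform(0.0, 1.0)
        if loglike(x) > L[worst]:
            live[worst] = x
            L[worst] = loglike(x)
            break
    logw -= 1.0 / N                                 # prior volume shrinks geometrically

print(np.exp(logZ))   # ~0.25 for this toy problem (sigma * sqrt(2*pi))
```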
Li, Ao; Liu, Zongzhi; Lezon-Geyda, Kimberly; Sarkar, Sudipa; Lannin, Donald; Schulz, Vincent; Krop, Ian; Winer, Eric; Harris, Lyndsay; Tuck, David
2011-01-01
There is an increasing interest in using single nucleotide polymorphism (SNP) genotyping arrays for profiling chromosomal rearrangements in tumors, as they allow simultaneous detection of copy number and loss of heterozygosity with high resolution. Critical issues such as signal baseline shift due to aneuploidy, normal cell contamination, and the presence of GC content bias have been reported to dramatically alter SNP array signals and complicate accurate identification of aberrations in cancer genomes. To address these issues, we propose a novel Global Parameter Hidden Markov Model (GPHMM) to unravel tangled genotyping data generated from tumor samples. In contrast to other HMM methods, a distinct feature of GPHMM is that the issues mentioned above are quantitatively modeled by global parameters and integrated within the statistical framework. We developed an efficient EM algorithm for parameter estimation. We evaluated performance on three data sets and show that GPHMM can correctly identify chromosomal aberrations in tumor samples containing as few as 10% cancer cells. Furthermore, we demonstrated that the estimation of global parameters in GPHMM provides information about the biological characteristics of tumor samples and the quality of genotyping signal from SNP array experiments, which is helpful for data quality control and outlier detection in cohort studies. PMID:21398628
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting the pairs of values (product formed, time) taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
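For a single-substrate enzyme, the integrated Michaelis-Menten equation has a closed form via the Lambert W function, which makes direct progress-curve fitting easy to sketch. This is an unweighted stand-in for the paper's weighting-matrix scheme, with synthetic data and assumed true parameters:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import curve_fit

S0 = 100.0   # initial substrate concentration, assumed known

def product(t, Vmax, Km):
    # Closed-form solution of the integrated Michaelis-Menten equation.
    S = Km * lambertw((S0 / Km) * np.exp((S0 - Vmax * t) / Km)).real
    return S0 - S

t = np.linspace(0.0, 60.0, 25)
rng = np.random.default_rng(4)
P_obs = product(t, 5.0, 20.0) + rng.normal(0.0, 0.5, 25)   # synthetic progress curve

popt, _ = curve_fit(product, t, P_obs, p0=[1.0, 10.0],
                    bounds=([0.1, 1.0], [50.0, 200.0]))
print(popt)   # (Vmax, Km) estimates, close to (5.0, 20.0)
```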
NASA Astrophysics Data System (ADS)
Timmermans, J.; Gomez-Dans, J. L.; Verhoef, W.; Tol, C. V. D.; Lewis, P.
2017-12-01
Evapotranspiration (ET) cannot be directly measured from space. Instead, its estimation relies on modelling approaches that use several land surface parameters (LSPs), such as LAI and LST, in conjunction with meteorological parameters. Such a modelling approach presents two caveats: the validity of the model, and the consistency between the different input parameters. Often this second requirement is not considered, ignoring that without good inputs no decent output can be provided. When LSP dynamics contradict each other, the output of the model cannot be representative of reality. At present, however, the LSPs used in large-scale ET estimations originate from different single-sensor retrieval approaches and even from different satellite sensors. In response, the Earth Observation Land Data Assimilation System (EOLDAS) was developed. EOLDAS uses a multi-sensor approach to couple different satellite observations/types to radiative transfer models (RTMs) consistently. It is therefore capable of synergistically estimating a variety of LSPs. Considering that ET is most sensitive to the temperatures of the land surface (components), the goal of this research is to expand EOLDAS to the thermal domain. This research not only focuses on estimating LST, but also on retrieving (soil/vegetation, sunlit/shaded) component temperatures, to facilitate dual/quad-source ET models. To achieve this, the Soil Canopy Observations of Photosynthesis and Energy (SCOPE) model was integrated into EOLDAS. SCOPE couples key parameters to key processes, such as photosynthesis, ET and optical/thermal RT. In this research SCOPE was also coupled to the MODTRAN RTM, in order to estimate BOA component temperatures directly from TOA observations. This paper presents the main modelling steps of integrating these complex models into an operational platform. In addition, it highlights the actual retrieval using different satellite observations, such as MODIS and Sentinel-3, and meteorological variables from ERA-Interim.
Earth-moon system: Dynamics and parameter estimation
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1975-01-01
A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.
A Stokes drift approximation based on the Phillips spectrum
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.
2016-04-01
A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
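As a point of reference, the monochromatic comparison profile can be computed from exactly the two parameters mentioned, the surface Stokes drift v0 and the Stokes transport V, since the transport of an exponential profile fixes the wavenumber through V = v0/(2k). A small sketch with illustrative values:

```python
import numpy as np

def monochromatic_stokes(z, v0, V):
    # Exponential profile with wavenumber fixed by the transport: V = v0 / (2k).
    k = v0 / (2.0 * V)
    return v0 * np.exp(2.0 * k * z)      # z <= 0, increasing upward to the surface

z = np.linspace(-30.0, 0.0, 7)           # depths in metres (illustrative)
print(monochromatic_stokes(z, v0=0.10, V=0.30))   # m/s and m^2/s, assumed values
```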
Robust adaptive uniform exact tracking control for uncertain Euler-Lagrange system
NASA Astrophysics Data System (ADS)
Yang, Yana; Hua, Changchun; Li, Junpeng; Guan, Xinping
2017-12-01
This paper offers a solution to the robust adaptive uniform exact tracking control problem for uncertain nonlinear Euler-Lagrange (EL) systems. An adaptive finite-time tracking control algorithm is designed by proposing a novel nonsingular integral terminal sliding-mode surface. Moreover, a new adaptive parameter tuning law is also developed by making good use of the system tracking errors and the adaptive parameter estimation errors. Thus, both trajectory tracking and parameter estimation can be achieved simultaneously within a guaranteed time that can be adjusted arbitrarily according to practical demands. Additionally, the control result for the EL system proposed in this paper can be extended to high-order nonlinear systems easily. Finally, a test-bed 2-DOF robot arm is set up to demonstrate the performance of the new control algorithm.
Periodic orbits of hybrid systems and parameter estimation via AD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guckenheimer, John.; Phipps, Eric Todd; Casey, Richard
Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential-algebraic equations is discretized via multiple shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process, yielding an approximate periodic orbit. A metric is defined for computing the distance between two given periodic orbits, which is then minimized using a trust-region minimization algorithm [DS83] to find optimal fits of the model to a reference orbit [Cas04]. There are two different yet related goals that motivate the algorithmic choices listed above. The first is to provide a simple yet powerful framework for studying periodic motions in mechanical systems. Formulating mechanically correct equations of motion for systems of interconnected rigid bodies, while straightforward, is a time-consuming, error-prone process. Much of this difficulty stems from computing the acceleration of each rigid body in an inertial reference frame. The acceleration is computed most easily in a redundant set of coordinates giving the spatial positions of each body, since the acceleration is just the second derivative of these positions. Rather than providing explicit formulas for these derivatives, automatic differentiation can be employed to compute these quantities efficiently during the course of a simulation. The feasibility of these ideas was investigated by applying these techniques to the problem of locating stable walking motions for a disc-foot passive walking machine [CGMR01, Gar99, McG91]. The second goal for this work was to investigate the application of smooth optimization methods to periodic orbit parameter estimation problems in neural oscillations.
Others [BB93, FUS93, VB99] have favored non-continuous optimization methods such as genetic algorithms, stochastic search methods, simulated annealing and brute-force random searches because of their perceived suitability to the landscape of typical objective functions in parameter space, particularly for multi-compartmental neural models. Here we argue that a carefully formulated optimization problem is amenable to Newton-like methods and has a sufficiently smooth landscape in parameter space that these methods can be an efficient and effective alternative. The plan of this paper is as follows. In Section 1 we provide a definition of hybrid systems that is the basis for modeling systems with discontinuities or discrete transitions. Sections 2, 3, and 4 briefly describe the Taylor series integration, periodic orbit tracking, and parameter estimation algorithms. For full treatments of these algorithms, we refer the reader to [Phi03, Cas04, CPG04]. The software implementation of these algorithms is briefly described in Section 5, with particular emphasis on the automatic differentiation software ADMC++. Finally, these algorithms are applied to the bipedal walking and Hodgkin-Huxley based neural oscillation problems discussed above in Section 6.
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the intervals obtained with the continuous-time estimation model were much smaller than those from the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia. We decompose the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using a seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This supports the seasonal occurrence of childhood leukaemia in Hungary.
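As a hedged illustration of the ARIMA workflow described (not the authors' software or data), the same kind of fit and confidence intervals can be produced with statsmodels on a synthetic nonstationary series:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
rates = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 120))   # toy nonstationary series

res = ARIMA(rates, order=(1, 1, 0)).fit()   # AR(1) on first differences
print(res.params)                            # AR coefficient and innovation variance
print(res.conf_int())                        # confidence intervals for the parameters
```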
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are the on-line arriving concentrations of the released substance registered by the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the contamination source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). The newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
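The simplest ABC variant, plain rejection sampling, already shows the mechanics that the sequential algorithm refines: draw parameters from the prior, run the forward model, and keep draws whose simulated observations fall within a tolerance of the data. The one-parameter toy forward model below stands in for SCIPUFF:

```python
import numpy as np

rng = np.random.default_rng(6)
observed = 3.0                               # summary statistic of the sensor data

def forward(q):
    # Toy stand-in for the SCIPUFF forward model: one parameter, one output.
    return 1.5 * q + rng.normal(0.0, 0.2)

accepted = []
for _ in range(20000):
    q = rng.uniform(0.0, 5.0)                # draw the release rate from its prior
    if abs(forward(q) - observed) < 0.1:     # keep draws within the tolerance
        accepted.append(q)

print(np.mean(accepted), np.std(accepted))   # approximate posterior mean and spread
```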
Geophysical Assessment of Groundwater Potential: A Case Study from Mian Channu Area, Pakistan.
Hasan, Muhammad; Shang, Yanjun; Akhter, Gulraiz; Jin, Weijun
2017-11-17
An integrated study using a geophysical method in combination with pumping tests and a geochemical method was carried out to delineate groundwater potential zones in the Mian Channu area of Pakistan. Vertical electrical soundings (VES) using the Schlumberger configuration with maximum current electrode spacing (AB/2 = 200 m) were conducted at 50 stations, and 10 pumping tests at borehole sites were performed in close proximity to 10 of the VES stations. The aim of this study is to establish a correlation between the hydraulic parameters obtained from the geophysical method and pumping tests so that the aquifer potential can be estimated from geoelectrical surface measurements where no pumping tests exist. The aquifer parameters, namely transmissivity and hydraulic conductivity, were estimated from Dar Zarrouk parameters by interpreting the layer parameters such as true resistivities and thicknesses. A geoelectrical succession of five-layer strata (i.e., topsoil, clay, clay sand, sand, and sand gravel) with sand as the dominant lithology was found in the study area. Physicochemical parameters interpreted according to World Health Organization and Food and Agriculture Organization standards were well correlated with the aquifer parameters obtained by the geoelectrical method and pumping tests. The aquifer potential zones identified by modeled resistivity, Dar Zarrouk parameters, pumped aquifer parameters, and physicochemical parameters reveal that sand and gravel sand with high values of transmissivity and hydraulic conductivity are highly promising water-bearing layers in the northwest of the study area. The strong correlation between estimated and pumped aquifer parameters suggests that, in the case of sparse well data, the geophysical technique is useful to estimate the hydraulic potential of an aquifer with varying lithology. © 2017, National Ground Water Association.
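The Dar Zarrouk bookkeeping itself is a one-liner per layer: transverse resistance T = ρh and longitudinal conductance S = h/ρ from the interpreted resistivities and thicknesses, with transmissivity then scaled through a constant calibrated at the pumping-test sites. The layer values and the calibration constant below are illustrative, not the survey's:

```python
import numpy as np

rho = np.array([45.0, 18.0, 35.0, 80.0, 120.0])  # layer resistivities (ohm-m), assumed
h = np.array([1.5, 4.0, 10.0, 25.0, 40.0])       # layer thicknesses (m), assumed

T = rho * h      # transverse resistance per layer (ohm-m^2)
S = h / rho      # longitudinal conductance per layer (siemens)
print(T.sum(), S.sum())

# If K*sigma is treated as constant across the area (one common choice,
# calibrated where pumping tests exist), transmissivity scales with T:
C = 0.05                              # calibrated constant, assumed
transmissivity = C * T.sum()
print(transmissivity)
```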
Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.
NASA Astrophysics Data System (ADS)
Karimi, F. S.; Saviz, S.; Ghoranneviss, M.; Salem, M. K.; Aghamir, F. M.
The circuit parameters are investigated in a Mather-type plasma focus device. The experiments are performed in the SABALAN-I plasma focus facility (2 kJ, 20 kV, 10 μF). A 12-turn Rogowski coil is built and used to measure the time derivative of the discharge current (dI/dt). A high pressure test is performed in this work, as an alternative technique to the short circuit test, to determine the machine circuit parameters and the calibration factor of the Rogowski coil. The operating parameters are calculated by two methods, and the results show that the relative errors of the parameters determined by method I are very low in comparison with method II; thus method I produces more accurate results than method II. The high pressure test is operated under the assumption that there is no plasma motion, so that the circuit parameters may be estimated using R-L-C theory given that C0 is known. However, for a plasma focus, even at the highest permissible pressure there is found to be significant motion, so that the estimated circuit parameters are not accurate. Therefore the Lee Model code is used in short circuit mode to generate a computed current trace for fitting to the current waveform, which was integrated from the current derivative signal taken with the Rogowski coil. Hence, the dynamics of the plasma is accounted for in the estimation, and the static bank parameters are determined accurately.
Uncertainty and the Social Cost of Methane Using Bayesian Constrained Climate Models
NASA Astrophysics Data System (ADS)
Errickson, F. C.; Anthoff, D.; Keller, K.
2016-12-01
Social cost estimates of greenhouse gases are important for the design of sound climate policies and are also plagued by uncertainty. One major source of uncertainty stems from the simplified representation of the climate system used in the integrated assessment models that provide these social cost estimates. We explore how uncertainty over the social cost of methane varies with the way physical processes and feedbacks in the methane cycle are modeled by (i) coupling three different methane models to a simple climate model, (ii) using MCMC to perform a Bayesian calibration of the three coupled climate models that simulates direct sampling from the joint posterior probability density function (pdf) of model parameters, and (iii) producing probabilistic climate projections that are then used to calculate the Social Cost of Methane (SCM) with the DICE and FUND integrated assessment models. We find that including a temperature feedback in the methane cycle acts as an additional constraint during the calibration process and results in a correlation between the tropospheric lifetime of methane and several climate model parameters. This correlation is not seen in the models lacking this feedback. Several of the estimated marginal pdfs of the model parameters also exhibit different distributional shapes and expected values depending on the methane model used. As a result, probabilistic projections of the climate system out to the year 2300 exhibit different levels of uncertainty and magnitudes of warming for each of the three models under an RCP8.5 scenario. We find these differences in climate projections result in differences in the distributions and expected values for our estimates of the SCM. We also examine uncertainty about the SCM by performing a Monte Carlo analysis using a distribution for the climate sensitivity while holding all other climate model parameters constant. Our SCM estimates using the Bayesian calibration are lower and exhibit less uncertainty about extremely high values in the right tail of the distribution compared to the Monte Carlo approach. This finding has important climate policy implications and suggests previous work that accounts for climate model uncertainty by only varying the climate sensitivity parameter may overestimate the SCM.
Bogdan, Anna; Sudoł-Szopińska, Iwona; Luczak, Anna; Konarska, Maria; Pietrowski, Piotr
2012-01-01
This article proposes a method for a comprehensive assessment of the effect of integral motorcycle helmets on physiological and cognitive responses of motorcyclists. To verify the reliability of commonly used tests, we conducted experiments with 5 motorcyclists. We recorded changes in physiological parameters (heart rate, local skin temperature, core temperature, air temperature, relative humidity in the space between the helmet and the surface of the head, and the concentration of O2 and CO2 under the helmet) and in psychological parameters (motorcyclists' reflexes, fatigue, perceptiveness and mood). We also studied changes in the motorcyclists' subjective sensation of thermal comfort. The results made it possible to identify reliable parameters for assessing the effect of integral helmets on performance, i.e., physiological factors (head skin temperature, internal temperature and concentration of O2 and CO2 under the helmet) and psychomotor factors (reaction time, attention and vigilance, work performance, concentration and a subjective feeling of mood and fatigue).
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs that maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile
2017-03-01
Assessment of the effective connectivity among different brain regions during seizure is a crucial problem in neuroscience today. As a consequence, a new model inversion framework of brain function imaging is introduced in this manuscript. This framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM) that cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. When epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework.
Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herawati, Ida; Winardhi, Sonny; Priyono, Awali
Anisotropy analysis is an important step in the processing and interpretation of seismic data. One of the most important parts of anisotropy analysis is anisotropy parameter estimation, which can be done using well data, core data or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis. However, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of the anisotropy of a particular layer. In an anisotropic reservoir, analysis of anisotropy parameters also helps us to better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock property and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study from different disciplines is needed to understand the reservoir. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model for organic-rich shale. This paper focuses on studying the relationships between reservoir properties such as clay content, porosity and total organic content and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background, as solid inclusion, or as both. From the forward modeling results, it is shown that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.
Quantifying cell turnover using CFSE data.
Ganusov, Vitaly V; Pilyugin, Sergei S; de Boer, Rob J; Murali-Krishna, Kaja; Ahmed, Rafi; Antia, Rustom
2005-03-01
The CFSE dye dilution assay is widely used to determine the number of divisions a given CFSE labelled cell has undergone in vitro and in vivo. In this paper, we consider how the data obtained with the use of CFSE (CFSE data) can be used to estimate the parameters determining cell division and death. For a homogeneous cell population (i.e., a population with the parameters for cell division and death being independent of time and the number of divisions cells have undergone), we consider a specific biologically based "Smith-Martin" model of cell turnover and analyze three different techniques for estimation of its parameters: direct fitting, indirect fitting and the rescaling method. We find that using only CFSE data, the duration of the division phase (i.e., approximately the S+G2+M phase of the cell cycle) can be estimated with any of these techniques. In some cases, the average division or cell cycle time can be estimated using direct fitting of the model solution to the data or by using the Gett-Hodgkin method [Gett A. and Hodgkin, P. 2000. A cellular calculus for signal integration by T cells. Nat. Immunol. 1:239-244]. Estimation of the death rates during commitment to division (i.e., approximately the G1 phase of the cell cycle) and during the division phase may not be feasible with the use of only CFSE data. We propose that measuring an additional parameter, the fraction of cells in division, may allow estimation of all model parameters including the death rates during different stages of the cell cycle.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is hoped to provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1986-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
Utilizing a suite of satellite missions to address poorly constrained hydrological fluxes
NASA Astrophysics Data System (ADS)
Singh, A.; Behrangi, A.; Fisher, J.; Reager, J. T., II; Gardner, A. S.
2017-12-01
The amount of water stored in a given region (total water storage) changes in response to changes in the hydrologic balance (inputs minus outputs). Closing this balance is exceedingly difficult due to the sparsity of field observations, large uncertainties in satellite-derived estimates, and model limitations. The reliability of different hydrological parameters also varies by region: at higher latitudes, for example, precipitation is more uncertain than evapotranspiration (ET), while at lower and middle latitudes the opposite is true. This study explores alternative estimates of regional hydrological fluxes by integrating total water storage estimated from the GRACE gravity fields with improved estimates of lake storage variation from Landsat-based land-water classification and satellite-altimetry-based water height measurements. In particular, an alternative ET estimate is generated for the Aral Sea region by integrating multi-sensor remote sensing data. For an endorheic lake like the Aral Sea, volumetric variations are predominantly governed by changes in inflow, evaporation from the water body, and precipitation on the lake. The Aral Sea water volume is estimated at a monthly time step by combining Landsat land-water classification and ocean radar altimetry (Jason-1 and Jason-2) observations using the truncated pyramid method. Treating gauge-based river runoff as a true observation, and given the low variability among multiple precipitation datasets (TRMM, GPCP, GPCC, and ERA), ET can be considered the most uncertain parameter in this region. The estimated lake volume acts as a controlling factor in estimating ET as the residual of the change in TWS minus inflow plus precipitation. The estimated ET is compared with MODIS-based evaporation observations.
Generalized correlation integral vectors: A distance concept for chaotic dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haario, Heikki, E-mail: heikki.haario@lut.fi; Kalachev, Leonid, E-mail: KalachevL@mso.umt.edu; Hakkarainen, Janne
2015-06-15
Several concepts of fractal dimension have been developed to characterise properties of attractors of chaotic dynamical systems. Numerical approximations of them must be calculated from finite samples of simulated trajectories. In principle, the quantities should not depend on the choice of the trajectory, as long as it provides properly distributed samples of the underlying attractor. In practice, however, the trajectories are sensitive with respect to varying initial values, small changes of the model parameters, the choice of a solver, numeric tolerances, etc. The purpose of this paper is to present a statistically sound approach to quantify this variability. We modify the concept of the correlation integral to produce a vector that summarises the variability at all selected scales. The distribution of this stochastic vector can be estimated, and it provides a statistical distance concept between trajectories. Here, we demonstrate the use of the distance for the purpose of estimating model parameters of a chaotic dynamic model. The methodology is illustrated using computational examples for the Lorenz 63 and Lorenz 95 systems, together with a framework for Markov chain Monte Carlo sampling to produce posterior distributions of model parameters.
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2010-12-01
Experimental estimates, obtained on the "Skif Cyberia" cluster, of the accuracy and speed of Everhart's numerical integration method are presented. The integration was carried out for the equations of motion of celestial bodies, namely N-body problem equations and perturbed two-body problem equations; in the latter case the perturbing bodies' coordinates are taken during the calculations from the DE406 ephemeris. The accuracy and speed estimates were made by means of forward and backward integration of the equations of motion of the short-period comet Herschel-Rigollet with various values of the Everhart method parameters, and the optimal combinations of these parameters were obtained. The investigation was carried out both for 16-digit and for 34-digit decimal accuracy.
NASA Technical Reports Server (NTRS)
Gerberich, Matthew W.; Oleson, Steven R.
2013-01-01
The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with respect to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data from completed NASA missions.
Hidden Markov model for dependent mark loss and survival estimation
Laake, Jeffrey L.; Johnson, Devin S.; Diefenbach, Duane R.; Ternent, Mark A.
2014-01-01
Mark-recapture estimators assume no loss of marks to provide unbiased estimates of population parameters. We describe a hidden Markov model (HMM) framework that integrates a mark loss model with a Cormack–Jolly–Seber model for survival estimation. Mark loss can be estimated with single-marked animals as long as a sub-sample of animals has a permanent mark. Double-marking provides an estimate of mark loss assuming independence, but dependence can be modeled with a permanently marked sub-sample. We use a log-linear approach to include covariates for mark loss and dependence, which is more flexible than existing published methods for integrated models. The HMM approach is demonstrated with a dataset of black bears (Ursus americanus) with two ear tags, a subset of which were permanently marked with tattoos. The data were analyzed with and without the tattoo. Dropping the tattoos resulted in estimates of survival that were reduced by 0.005–0.035 due to tag loss dependence that could not be modeled. We also analyzed the data with and without the tattoo using a single tag.
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical as well as experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can estimate the sFADE parameters more accurately when suitable initial guess values are used. To sum up, the Bees Algorithm was found to be a very simple, robust and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2014-10-01
A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
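As a concrete, if drastically simplified, illustration of parameter estimation with an EnKF, the sketch below augments the state of a toy logistic ODE with its unknown rate parameter and assimilates noisy observations via perturbed-observation updates; the model, noise levels and ensemble size are all assumptions, not the paper's metabolic system or its error-model treatment of the integrator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: logistic growth dx/dt = r*x*(1-x), integrated with Euler.
# The filter state is augmented to (x, r) so the parameter is updated
# alongside the state at every observation time.
def propagate(ens, dt=0.1, steps=10):
    x, r = ens[0], ens[1]
    for _ in range(steps):
        x = x + dt * r * x * (1.0 - x)
    return np.vstack([x, r])

true = np.array([0.1, 0.8])          # true initial state and parameter
obs_sd, n_obs, n_ens = 0.02, 15, 100
ens = np.vstack([rng.normal(0.1, 0.02, n_ens),   # initial state ensemble
                 rng.normal(0.5, 0.2, n_ens)])   # prior parameter ensemble

truth = true.reshape(2, 1)
for _ in range(n_obs):
    truth = propagate(truth)
    y = truth[0, 0] + rng.normal(0.0, obs_sd)    # noisy observation of x only

    ens = propagate(ens)
    # Kalman update from ensemble statistics; H picks out the observed state.
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)
    H = np.array([[1.0, 0.0]])
    K = P @ H.T / (H @ P @ H.T + obs_sd**2)
    innovations = y + rng.normal(0.0, obs_sd, n_ens) - ens[0]  # perturbed obs
    ens = ens + K @ innovations.reshape(1, n_ens)

print("estimated r:", ens[1].mean(), "+/-", ens[1].std())
```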
Determination of power system component parameters using nonlinear dead beat estimation method
NASA Astrophysics Data System (ADS)
Kolluru, Lakshmi
Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever-increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as it deals with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMUs) provide detailed and dynamic information on all measurements. Accurate availability of dynamic measurements gives engineers the opportunity to expand and explore various possibilities in power system dynamic analysis and control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered to be very effective as it is capable of obtaining the required results in a fixed number of steps, related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of the dead beat estimator together with nonlinear finite difference methods to create an algorithm that is user friendly and can determine the parameters fairly accurately and effectively. The algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. Its effectiveness is tested by applying it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. Faults are introduced into the virtual test systems, and the dynamic data obtained in each case are analyzed and recorded. Ideally, actual measurements would be provided to the algorithm; as such measurements are not readily available, the data obtained from simulations are fed into the determination algorithm as inputs. The obtained results are then compared to the original (or assumed) values of the parameters. The results suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data are available.
New formulations for tsunami runup estimation
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Aydin, B.; Ceylan, N.
2017-12-01
We evaluate shoreline motion and maximum runup in two ways. First, we use the linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties, and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further reduced shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a very convenient form for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e. an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup, and present results similar to Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems
NASA Astrophysics Data System (ADS)
Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.
2016-12-01
Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) the space-fractional advection-dispersion equation (sFADE), 2) the time-fractional dispersion equation with drift (TFDE), 3) the fractional mobile-immobile equation (FMIE), and 4) the tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1989-01-01
Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the ability of forgetting methods to track the time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a selected-band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.
Nicola, Wilten; Campbell, Sue Ann
2013-01-01
We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
Big data integration for regional hydrostratigraphic mapping
NASA Astrophysics Data System (ADS)
Friedel, M. J.
2013-12-01
Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in devising these models remains subjective, nonunique, and uncertain. A novel geophysical-hydrogeologic data integration scheme is proposed to constrain the estimation of continuous HSUs. First, machine-learning and multivariate statistical techniques are used to simultaneously integrate borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Second, airborne electromagnetic measurements are numerically inverted to obtain subsurface resistivity structure at randomly selected locations. Third, the machine-learning algorithm is trained using the borehole hydrostratigraphic units and inverted airborne resistivity profiles. The trained machine-learning algorithm is then used to estimate HSUs at independent resistivity profile locations. We demonstrate efficacy of the proposed approach to map the hydrostratigraphy of a heterogeneous surficial aquifer in northwestern Nebraska.
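A schematic of the train-then-extrapolate step described above, with a random forest standing in for the unspecified machine-learning algorithm and synthetic arrays in place of the borehole and airborne data; all names and shapes here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical training data: each row is an inverted resistivity profile
# (log10 ohm-m at 10 depths) co-located with a borehole; labels are HSU
# codes interpreted from the borehole logs.
X_train = rng.normal(2.0, 0.5, size=(60, 10))
y_train = rng.integers(0, 3, size=60)          # 3 hydrostratigraphic units

# Train on the borehole locations, then predict HSUs at airborne-only
# resistivity profile locations.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

X_airborne = rng.normal(2.0, 0.5, size=(5, 10))
print(clf.predict(X_airborne))                  # most likely HSU per profile
print(clf.predict_proba(X_airborne).round(2))   # per-unit membership estimates
```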
NASA Astrophysics Data System (ADS)
Teng, W. L.; de Jeu, R. A.; Doraiswamy, P. C.; Kempler, S. J.; Shannon, H. D.
2009-12-01
A primary goal of the U.S. Department of Agriculture (USDA) is to expand markets for U.S. agricultural products and support global economic development. The USDA World Agricultural Outlook Board (WAOB) supports this goal by developing monthly World Agricultural Supply and Demand Estimates (WASDE) for the U.S. and major foreign producing countries. Because weather has a significant impact on crop progress, conditions, and production, WAOB prepares frequent agricultural weather assessments in a GIS-based Global Agricultural Decision Support Environment (GLADSE). The main objective of this project, thus, is to improve WAOB's estimates by integrating NASA remote sensing soil moisture observations and research results into GLADSE. Soil moisture is a primary data gap at WAOB. Soil moisture data, generated by the Land Parameter Retrieval Model (LPRM, developed by NASA GSFC and Vrije Universiteit Amsterdam) and customized to WAOB's requirements, will be integrated into GLADSE directly, as well as indirectly by first being integrated into the USDA Agricultural Research Service (ARS) Environmental Policy Integrated Climate (EPIC) crop model. The LPRM-enhanced EPIC will be validated using three major agricultural regions important to WAOB and then integrated into GLADSE. Project benchmarking will be based on retrospective analyses of WAOB's analog-year comparisons, i.e. comparisons between a given year and historical years with similar weather patterns. WAOB is the focal point for economic intelligence within the USDA; thus, improving WAOB's agricultural estimates by integrating NASA satellite observations and model outputs will visibly demonstrate the value of NASA resources and maximize the societal benefits of NASA investments.
NASA Astrophysics Data System (ADS)
George, N. J.; Akpan, A. E.; Akpan, F. S.
2017-12-01
An integrated study combining information from extensive surface resistivity surveys in three Local Government Areas of Akwa Ibom State, Nigeria, with hydrogeological data obtained from water boreholes was used to economically estimate porosity and the coefficient of permeability/hydraulic conductivity in parts of the clastic Tertiary-Quaternary sediments of the Niger Delta region. These parameters are conventionally estimated in the laboratory from empirical analysis of core samples and pumping test data generated from boreholes. However, that analysis is not only costly and time consuming but also limited in areal coverage. The chosen technique employs surface resistivity data, core samples and pumping test data to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be more elevated near the riverbanks. Empirical models based on the Archie, Waxman-Smits and Kozeny-Carman-Bear relations were employed to characterise the formation parameters, with good fits obtained. The effect of surface conduction caused by clay, usually ignored in Archie's model, was estimated to be 2.58 × 10^-5 Siemens; this conductance can be used as a correction to the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models and maps were generated, geared towards realistic conclusions about the interrelationships between porosity and the other aquifer parameters. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10^-5 m/s everywhere, indicating no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities varied. The derived parameter relations can be used to estimate geohydraulic parameters at other locations with little or no borehole data.
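For orientation, the textbook forms of two of the relations named above can be evaluated directly; the coefficients below (a, m, grain size, fluid prefactor) are generic illustrative defaults, not the site-calibrated values of the study.

```python
# Archie's law for a clean, fully saturated sand: R0 = a * Rw * phi**(-m),
# inverted here for porosity. a and m are illustrative defaults.
def porosity_archie(R0, Rw, a=1.0, m=2.0):
    return (a * Rw / R0) ** (1.0 / m)

# Kozeny-Carman: hydraulic conductivity from porosity and a representative
# grain diameter d (m); rho*g/mu for water (~9.81e6 1/(m s)) is folded into
# the usual /180 prefactor.
def k_kozeny_carman(phi, d=2e-4, rho_g_over_mu=9.81e6):
    return (rho_g_over_mu / 180.0) * (phi**3 / (1.0 - phi) ** 2) * d**2

phi = porosity_archie(R0=120.0, Rw=20.0)   # ohm-m values are examples only
print(f"porosity ~ {phi:.2f}, K ~ {k_kozeny_carman(phi):.2e} m/s")
```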
Beste, A; Harrison, R J; Yanai, T
2006-08-21
Chemists are mainly interested in energy differences. In contrast, most quantum chemical methods yield the total energy, which is a large number compared to the difference and therefore has to be computed to a higher relative precision than would be necessary for the difference alone. Hence, it is desirable to compute energy differences directly, thereby avoiding the precision problem. Whenever it is possible to find a parameter which transforms smoothly from an initial to a final state, the energy difference can be obtained by integrating the energy derivative with respect to that parameter (cf. thermodynamic integration or adiabatic connection methods). If the dependence on the parameter is predominantly linear, accurate results can be obtained by single-point integration. In density functional theory and Hartree-Fock, we applied the formalism to ionization potentials, excitation energies, and chemical bond breaking. Example calculations for ionization potentials and excitation energies showed that accurate results could be obtained with a linear estimate. For breaking bonds, we introduce a nongeometrical parameter which gradually turns on the interaction between two fragments of a molecule. The interaction changes the potentials used to determine the orbitals as well as the constraint on the orbitals to be orthogonal.
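In symbols, the single-point estimate described above is, schematically (a sketch with λ as the generic switching parameter, not the paper's own notation):

```latex
\Delta E \;=\; E(\lambda{=}1)-E(\lambda{=}0)
         \;=\; \int_{0}^{1}\frac{\partial E(\lambda)}{\partial\lambda}\,\mathrm{d}\lambda
         \;\approx\; \left.\frac{\partial E(\lambda)}{\partial\lambda}\right|_{\lambda=1/2}
```

The midpoint evaluation is exact whenever E(λ) is at most quadratic in λ, which is why a predominantly linear dependence on the parameter permits accurate single-point integration.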
NASA Astrophysics Data System (ADS)
Cheong, Chin Wen
2008-02-01
This article investigated the influence of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets, covering the Kuala Lumpur Composite Index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed a substantial reduction in the fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden changes in volatility performed better in the estimation and specification evaluations.
Adaptive estimation of the log fluctuating conductivity from tracer data at the Cape Cod Site
Deng, F.W.; Cushman, J.H.; Delleur, J.W.
1993-01-01
An adaptive estimation scheme is used to obtain the integral scale and variance of the log-fluctuating conductivity at the Cape Cod site, based on the fast Fourier transform/stochastic model of Deng et al. (1993) and a Kalman-like filter. The filter combines prior estimates of the unknown parameters with tracer moment data to adaptively obtain improved estimates as the tracer evolves. The results show that significant improvement in the prior estimates of the conductivity can lead to substantial improvement in the ability to predict plume movement. The structure of the covariance function of the log-fluctuating conductivity can be identified from the robustness of the estimation. Both the longitudinal and transverse spatial moment data are important to the estimation.
An integrated study of earth resources in the state of California using remote sensing techniques
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1977-01-01
The author has identified the following significant results. The effects on estimates of monthly volume runoff were determined separately for each of the following parameters: precipitation, evapotranspiration, lower zone and upper zone tension water capacity, imperviousness of the watershed, and percent of the watershed occupied by riparian vegetation, streams, and lakes. The most sensitive and critical parameters were found to be precipitation during the entire year and springtime evapotranspiration.
NASA Astrophysics Data System (ADS)
Hu, Shun; Shi, Liangsheng; Zha, Yuanyuan; Williams, Mathew; Lin, Lin
2017-12-01
Improvements to agricultural water and crop management require detailed information on crop and soil states and their evolution. Data assimilation provides an attractive way of obtaining this information by integrating measurements with a model in a sequential manner. However, data assimilation for the soil-water-atmosphere-plant (SWAP) system still lacks comprehensive exploration due to the large number of variables and parameters in the system. In this study, simultaneous state-parameter estimation using the ensemble Kalman filter (EnKF) was employed to evaluate data assimilation performance and to provide advice on measurement design for the SWAP system. The results demonstrated that a proper selection of the state vector is critical to effective data assimilation. In particular, updating the development stage was able to avoid the negative effect of "phenological shift", which was caused by contrasting phenological stages in different ensemble members. The simultaneous state-parameter estimation (SSPE) assimilation strategy outperformed the updating-state-only (USO) assimilation strategy because of its ability to alleviate the inconsistency between model variables and parameters. However, the performance of the SSPE assimilation strategy could deteriorate with an increasing number of uncertain parameters as a result of soil stratification and limited knowledge of crop parameters. In addition to the most easily available surface soil moisture (SSM) and leaf area index (LAI) measurements, deep soil moisture, grain yield or other auxiliary data were required to provide sufficient constraints on parameter estimation and to assure data assimilation performance. This study provides insight into the response of soil moisture and grain yield to data assimilation in the SWAP system and is helpful for soil moisture movement and crop growth modeling and for measurement design in practice.
NASA Astrophysics Data System (ADS)
Ibanez, C. A. G.; Carcellar, B. G., III; Paringit, E. C.; Argamosa, R. J. L.; Faelga, R. A. G.; Posilero, M. A. V.; Zaragosa, G. P.; Dimayacyac, N. A.
2016-06-01
Diameter-at-breast-height (DBH) estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass and carbon stock. LiDAR technology provides a means of directly obtaining different forest parameters, except DBH, from the behavior and characteristics of point clouds, which are unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in Mt. Makiling, Laguna, for a natural growth forest. Coordinates, height and canopy cover were measured, and species were identified, for comparison with LiDAR derivatives. Multiple linear regression was used to derive LiDAR-based DBH by integrating field-derived DBH and 27 LiDAR-derived parameters at 20 m, 10 m and 5 m grid resolutions. To find the best combination of parameters for DBH estimation, all possible combinations of parameters were generated and evaluated automatically using Python scripts and regression-related libraries such as NumPy, SciPy and scikit-learn. The combination that yields the highest r-squared (coefficient of determination) and the lowest AIC (Akaike's Information Criterion) and BIC (Bayesian Information Criterion) was taken as the best equation. The best model uses 11 parameters at the 10 m grid size, with an r-squared of 0.604, an AIC of 154.04 and a BIC of 175.08. The combination of parameters may differ among forest classes in further studies. Additional statistical tests, such as the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett's Test of Sphericity (BTS), can be supplemented to help determine the correlation among parameters.
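A condensed sketch of that exhaustive combination search, using NumPy least squares with Gaussian-likelihood AIC/BIC; the synthetic design matrix stands in for the 27 LiDAR metrics, and for brevity only AIC is used to rank the subsets here.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Hypothetical data: n plots x p LiDAR-derived metrics, with field-measured
# DBH as the response. Real columns would be height percentiles, canopy
# cover, point-density metrics, etc.
n, p = 120, 6
X_all = rng.normal(size=(n, p))
dbh = 30 + 5 * X_all[:, 0] + 3 * X_all[:, 2] + rng.normal(0, 2, n)

def fit_stats(X, y):
    """OLS fit; return R^2, AIC and BIC under a Gaussian error model."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = ((y - Xd @ beta) ** 2).sum()
    tss = ((y - y.mean()) ** 2).sum()
    k = Xd.shape[1]
    aic = len(y) * np.log(rss / len(y)) + 2 * k
    bic = len(y) * np.log(rss / len(y)) + k * np.log(len(y))
    return 1 - rss / tss, aic, bic

# Score every non-empty subset of predictors; keep the lowest-AIC combination.
best = min(
    (combo for r in range(1, p + 1) for combo in combinations(range(p), r)),
    key=lambda c: fit_stats(X_all[:, c], dbh)[1],
)
print("best predictor set:", best, "-> (R2, AIC, BIC):",
      fit_stats(X_all[:, best], dbh))
```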
Petersson, K J F; Friberg, L E; Karlsson, M O
2010-10-01
Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive and time consuming, and can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equation system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equation system at given time intervals, outside of the differential equation solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equation solver, were less than 12% for all fixed-effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
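The idea transfers outside NONMEM; here is a sketch with SciPy, where an expensive, slowly varying piece of the right-hand side is tabulated once on a coarse time grid and interpolated inside the solver. The toy model and all names are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Pretend this part of the RHS is expensive to evaluate but varies slowly in
# time (e.g. a forcing function or a slowly changing covariate).
def expensive_term(t):
    return 1.0 + 0.5 * np.sin(0.1 * t)

# Precompute it on a coarse grid once, outside the ODE solver ...
t_grid = np.linspace(0.0, 50.0, 26)
cheap = interp1d(t_grid, expensive_term(t_grid), kind="linear")

# ... and use the interpolant inside the right-hand side.
def rhs(t, y, k=0.3):
    return cheap(t) - k * y

sol = solve_ivp(rhs, (0.0, 50.0), [0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])   # compare against the fully resolved RHS to check fit
```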
Yiu, Sean; Tom, Brian DM
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. In practice, however, the high-dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicate model fitting, and only non-standard, computationally intensive procedures based on simulating the marginal likelihood have so far been proposed. In this paper, we describe an efficient implementation by demonstrating how the high-dimensional integrations involved in the marginal likelihood can be computed cheaply. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high-dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed implementation procedure for the standard two-part model parameterisation and for when it is of interest to directly model the overall marginal mean. The methodology is applied to a psoriatic arthritis data set concerning functional disability.
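The computational payoff can be illustrated without the two-part model itself: once the awkward integral has been rewritten as a multivariate normal CDF, SciPy evaluates it directly. The dimension and exchangeable correlation below are arbitrary stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# A 10-dimensional normal with exchangeable correlation 0.5 (illustrative).
d, rho = 10, 0.5
cov = np.full((d, d), rho) + (1 - rho) * np.eye(d)

# P(Z_1 <= 1, ..., Z_10 <= 1): the kind of rectangular integral that the
# transformed marginal likelihood reduces to, evaluated here without any
# simulation of the latent stochastic processes.
mvn = multivariate_normal(mean=np.zeros(d), cov=cov)
print(mvn.cdf(np.ones(d)))
```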
Bayesian estimation of the transmissivity spatial structure from pumping test data
NASA Astrophysics Data System (ADS)
Demir, Mehmet Taner; Copty, Nadim K.; Trinchero, Paolo; Sanchez-Vila, Xavier
2017-06-01
Estimating the statistical parameters (mean, variance, and integral scale) that define the spatial structure of the transmissivity or hydraulic conductivity field is a fundamental step towards the accurate prediction of subsurface flow and contaminant transport. In practice, determining the spatial structure is a challenge because of spatial heterogeneity and data scarcity. In this paper, we describe a novel approach that uses time-drawdown data from multiple pumping tests to determine the transmissivity statistical spatial structure. The method builds on the pumping test interpretation procedure of Copty et al. (2011) (the Continuous Derivation (CD) method), which uses the time-drawdown data and its time derivative to estimate apparent transmissivity values as a function of radial distance from the pumping well. A Bayesian approach is then used to infer the statistical parameters of the transmissivity field by combining prior information about the parameters with the likelihood function, expressed in terms of the radially dependent apparent transmissivities determined from the pumping tests. A major advantage of the proposed Bayesian approach is that the likelihood function is readily determined from randomly generated multiple realizations of the transmissivity field, without the need to solve the groundwater flow equation. Applying the method to synthetically generated pumping test data, we demonstrate that, through a relatively simple procedure, information on the spatial structure of the transmissivity may be inferred from pumping test data. It is also shown that the prior parameter distribution has a significant influence on the estimation procedure, given its non-uniqueness. Results also indicate that the reliability of the estimated transmissivity statistical parameters increases with the number of available pumping tests.
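A schematic of the simulation-based likelihood construction, with a stand-in field generator (no spatial correlation imposed) and a normal approximation to the simulated summaries; every name here is a placeholder for the paper's actual CD-method machinery.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_apparent_T(log_var, n_sim=500, n_pts=50):
    """Stand-in generator: draw lognormal T fields with a given log-variance
    and return a radially averaged apparent-transmissivity summary per
    realization. A real implementation would impose the correlation
    structure (integral scale) on the field."""
    fields = rng.normal(0.0, np.sqrt(log_var), size=(n_sim, n_pts))
    return np.exp(fields).mean(axis=1)

# "Observed" apparent-T values from pumping tests (synthetic here).
obs = simulate_apparent_T(1.0, n_sim=5)

# Likelihood of each candidate variance = density of simulated summaries at
# the observations (normal approximation); flat prior over the candidates.
candidates = np.linspace(0.2, 2.0, 10)
log_post = []
for v in candidates:
    sims = simulate_apparent_T(v)
    mu, sd = sims.mean(), sims.std()
    ll = -0.5 * ((obs - mu) / sd) ** 2 - np.log(sd)
    log_post.append(ll.sum())

print("posterior mode:", candidates[int(np.argmax(log_post))])
```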
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
To facilitate evaluation of existing site characterization data, ORD has developed on-line tools and models that integrate data and models into innovative applications. Forty calculators have been developed in four groups: parameter estimators, models, scientific demos and unit ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cumberland, Riley M.; Williams, Kent Alan; Jarrell, Joshua J.
This report evaluates how the economic environment (i.e., discount rate, inflation rate, escalation rate) can impact previously estimated differences in lifecycle costs between an integrated waste management system with an interim storage facility (ISF) and a similar system without an ISF.
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
NASA Astrophysics Data System (ADS)
Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.
2016-10-01
Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated into a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs in estimating the stellar parameters as a function of star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which corresponds to on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but also a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameter space once a noise model is assumed. This can be used for novelty detection and quality assessment.
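Separating the Bayesian step from the network: given any forward model plus a Gaussian noise model, the posterior over a parameter grid follows directly. In the sketch below a toy spectrum generator merely stands in for the trained GANN, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: a "spectrum" of 20 fluxes as a function of one
# parameter. In the paper this role is played by the trained network.
wav = np.linspace(0.0, 1.0, 20)
def forward(theta):
    return np.exp(-((wav - theta) ** 2) / 0.02)

# Observed spectrum for a true parameter value of 0.6, with Gaussian noise.
sigma = 0.05
obs = forward(0.6) + rng.normal(0.0, sigma, wav.size)

# Posterior over a parameter grid under a flat prior: p(theta|obs) ~ exp(-chi2/2).
grid = np.linspace(0.0, 1.0, 201)
dx = grid[1] - grid[0]
chi2 = np.array([(((obs - forward(t)) / sigma) ** 2).sum() for t in grid])
post = np.exp(-0.5 * (chi2 - chi2.min()))
post /= post.sum() * dx                      # normalize the density

mean = (grid * post).sum() * dx
sd = np.sqrt(((grid - mean) ** 2 * post).sum() * dx)
print(f"posterior mean {mean:.3f} +/- {sd:.3f}")
```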
Parameter Estimation of a Spiking Silicon Neuron
Russell, Alexander; Mazurek, Kevin; Mihalaş, Stefan; Niebur, Ernst; Etienne-Cummings, Ralph
2012-01-01
Spiking neuron models are used in a multitude of tasks ranging from understanding neural behavior at its most basic level to neuroprosthetics. Parameter estimation of a single neuron model, such that the model’s output matches that of a biological neuron is an extremely important task. Hand tuning of parameters to obtain such behaviors is a difficult and time consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models) as fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron’s parameters. First, we show how a Maximum Likelihood method can be applied to a leaky integrate and fire silicon neuron with spike induced currents to fit the neuron’s output to desired spike times. We then show how a distance based method which approximates the negative log likelihood of the lognormal distribution can also be used to tune the neuron’s parameters. We conclude that the distance based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed. PMID:23852978
Space Shuttle Main Engine performance analysis
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both measurement and balance uncertainty estimates. The reconciler attempts to select operational parameters that minimize the difference between theoretical prediction and observation. Selected values are further constrained to fall within measurement uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME subsystems. The parameter selection problem described above is a traditional nonlinear programming problem. The reconciler employs a mixed penalty method to determine optimum values of SSME operating parameters associated with this problem formulation.
Parameter estimation in spiking neural networks: a reverse-engineering approach.
Rostro-Gonzalez, H; Cessac, B; Vieville, T
2012-04-01
This paper presents a reverse engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims at bypassing the fact that parameter estimation in SNNs is a non-deterministic polynomial-time hard problem when delays are considered. Here, the problem is reformulated as a linear programming (LP) problem so that the solution can be found in polynomial time. Moreover, the LP formulation makes explicit the fact that the reverse engineering of a neural network can be performed from the observation of the spike times alone. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
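The abstract does not give the exact LP formulation, so the following is only a generic sketch of the idea: choose synaptic weights so that a toy discrete-time potential crosses threshold exactly at the observed spike times, posed as a margin-maximizing linear program with scipy.optimize.linprog. The potential model and all names are assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, m = 60, 8                                 # time steps, presynaptic inputs
X = rng.random((T, m))                       # toy per-step synaptic drive (delays folded in)
w_true = rng.uniform(-1, 1, m)               # hidden weights, used only to build a consistent raster
v = X @ w_true
theta = np.quantile(v, 0.9)                  # threshold chosen so a few spikes occur
spikes = set(np.where(v >= theta)[0])        # "observed" spike times to reproduce

# Variables [w_1..w_m, s]; maximize the separation margin s (i.e. minimize -s).
A_ub, b_ub = [], []
for t in range(T):
    if t in spikes:                          # spike time: X[t] @ w >= theta + s
        A_ub.append(np.append(-X[t], 1.0)); b_ub.append(-theta)
    else:                                    # silent time: X[t] @ w <= theta - s
        A_ub.append(np.append(X[t], 1.0));  b_ub.append(theta)

c = np.zeros(m + 1); c[-1] = -1.0
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-1, 1)] * m + [(0, None)])
print("success:", res.success, "margin:", res.x[-1])
```

By construction the hidden weights satisfy all constraints, so the LP is feasible and a weight set reproducing the raster is recovered in polynomial time.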
Quantum-enhanced multiparameter estimation in multiarm interferometers
Ciampini, Mario A.; Spagnolo, Nicolò; Vitelli, Chiara; Pezzè, Luca; Smerzi, Augusto; Sciarrino, Fabio
2016-01-01
Quantum metrology is the state-of-the-art measurement technology. It uses quantum resources to enhance the sensitivity of phase estimation over that achievable by classical physics. While single-parameter estimation theory has been widely investigated, much less is known about the simultaneous estimation of multiple phases, which finds key applications in imaging and sensing. In this manuscript we provide conditions of useful particle (qudit) entanglement for multiphase estimation and adapt them to multiarm Mach-Zehnder interferometry. We theoretically discuss benchmark multimode Fock states that contain useful qudit entanglement and overcome the sensitivity of separable qudit states in three- and four-arm Mach-Zehnder-like interferometers, currently within the reach of integrated photonics technology. PMID:27381743
Genetic algorithms for the application of Activated Sludge Model No. 1.
Kim, S; Lee, H; Kim, J; Kim, C; Ko, J; Woo, H; Kim, S
2002-01-01
The genetic algorithm (GA) has been integrated with IWA ASM No. 1 to calibrate important stoichiometric and kinetic parameters. The evolutionary search of the GA was used to locate multiple local optima as well as the global optimum. The objective function of the optimization was designed to minimize the difference between estimated and measured effluent concentrations of the activated sludge system. Both steady-state and dynamic data from the simulation benchmark were used for calibration using a denitrification layout. Depending upon the confidence intervals and objective functions, the proposed method provided distributions of the parameter space. Field data were collected and applied to validate the calibration capacity of the GA. Dynamic calibration was suggested to capture periodic variations of inflow concentrations. Also, in order to verify the proposed method in a real wastewater treatment plant, measured data sets of substrate concentrations were obtained from the Haeundae wastewater treatment plant and used to estimate parameters in the dynamic system. The simulation results with the calibrated parameters matched the observed effluent COD concentrations well.
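A schematic of GA-based calibration, with a Monod-type toy response standing in for the full ASM1 simulation and all GA settings illustrative:

```python
import numpy as np

def model(params, t):
    """Toy effluent response in place of the full ASM1 simulation."""
    mu_max, K_s = params
    return mu_max * t / (K_s + t)

def fitness(params, t, observed):
    return -np.sum((model(params, t) - observed) ** 2)   # GA maximizes fitness

rng = np.random.default_rng(7)
t = np.linspace(0.1, 10, 40)
observed = model((0.9, 2.5), t) + 0.01 * rng.standard_normal(t.size)

bounds = np.array([[0.1, 2.0], [0.5, 10.0]])             # parameter search ranges
pop = rng.uniform(bounds[:, 0], bounds[:, 1], (40, 2))   # initial population
for gen in range(100):
    f = np.array([fitness(p, t, observed) for p in pop])
    parents = pop[np.argsort(f)[-20:]]                   # truncation selection
    cross = rng.integers(0, 20, (40, 2))
    pop = np.column_stack([parents[cross[:, 0], 0],      # crossover on the 2 genes
                           parents[cross[:, 1], 1]])
    pop += rng.normal(0, 0.02, pop.shape)                # mutation
    pop = np.clip(pop, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax([fitness(p, t, observed) for p in pop])]
print("estimated (mu_max, K_s):", best)
```

Because the whole population survives to the end, the spread of final individuals gives a rough picture of the parameter distribution, in the spirit of the local-optima exploration described above.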
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Mirzaeinejad, Hossein; Mirzaei, Mehdi; Rafatnia, Sadra
2018-06-11
This study deals with enhancing the directional stability of a vehicle turning at high speed on various road conditions using integrated active steering and differential braking systems. In this respect, minimal use of intentional asymmetric braking force to compensate for the drawbacks of active steering control, with only a small reduction of vehicle longitudinal speed, is desired. To this aim, a new optimal multivariable controller is analytically developed for the integrated steering and braking systems based on the prediction of vehicle nonlinear responses. A fuzzy programming scheme extracted from nonlinear phase plane analysis is also used to manage the two control inputs in various driving conditions. With the proposed fuzzy programming, the weight factors of the control inputs are automatically tuned and softly changed. In order to simulate a real-world control system, some required information about the system states and parameters that cannot be directly measured is estimated using the Unscented Kalman Filter (UKF). Finally, simulation studies are carried out using a validated vehicle model to show the effectiveness of the proposed integrated control system in the presence of model uncertainties and estimation errors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
The use of subjective rating of exertion in Ergonomics.
Capodaglio, P
2002-01-01
In Ergonomics, the use of psychophysical methods for subjectively evaluating work tasks and determining acceptable loads has become more common. Daily activities at the work site are studied not only with physiological methods but also with perceptual estimation and production methods. The psychophysical methods are of special interest in field studies of short-term work tasks for which valid physiological measurements are difficult to obtain. The perceived exertion, difficulty and fatigue that a person experiences in a certain work situation are an important sign of a real or objective load. Measurement of the physical load with physiological parameters is not sufficient, since it does not take into consideration the particular difficulty of the performance or the capacity of the individual. It is often difficult from technical and biomechanical analyses to understand the seriousness of a difficulty that a person experiences. Physiological determinations give important information, but they may be insufficient due to the technical problems in obtaining relevant but simple measurements for short-term activities or activities involving special movement patterns. Perceptual estimations using Borg's scales give important information because the severity of a task's difficulty depends on the individual doing the work. Observation is the simplest and most widely used means of assessing job demands. Other evaluations complementing observation are the following: indirect estimation of energy expenditure based on prediction equations or direct measurement of oxygen consumption; measurements of forces, angles and biomechanical parameters; and measurements of physiological and neurophysiological parameters during tasks. It is recommended that assessments of occupational activities include ratings of perceived exertion and integrate these measurements of intensity with measurements of the activity's type, duration and frequency. A better estimate of the degree of physical activity of individuals can thus be obtained.
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high-explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimating ADPIC model input parameters such as the particle mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling, than previously reported.
NASA Technical Reports Server (NTRS)
Schkolnik, Gerald S.
1993-01-01
The application of an adaptive real-time measurement-based performance optimization technique is being explored for a future flight research program. The key technical challenge of the approach is parameter identification, which uses a perturbation-search technique to identify changes in performance caused by forced oscillations of the controls. The controls on the NASA F-15 highly integrated digital electronic control (HIDEC) aircraft were perturbed using inlet cowl rotation steps at various subsonic and supersonic flight conditions to determine the effect on aircraft performance. The feasibility of the perturbation-search technique for identifying integrated airframe-propulsion system performance effects was successfully shown through flight experiments and postflight data analysis. Aircraft response and control data were analyzed postflight to identify gradients and to determine the minimum drag point. Changes in longitudinal acceleration as small as 0.004 g were measured, and absolute resolution was estimated to be 0.002 g or approximately 50 lbf of drag. Two techniques for identifying performance gradients were compared: a least-squares estimation algorithm and a modified maximum likelihood estimator algorithm. A complementary filter algorithm was used with the least squares estimator.
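The least-squares gradient-identification step can be illustrated schematically: fit a quadratic to a drag proxy versus the control perturbation and take the vertex as the minimum-drag setting. The numbers below are synthetic, not flight data:

```python
import numpy as np

rng = np.random.default_rng(3)
cowl = np.linspace(-2.0, 2.0, 25)            # inlet cowl perturbation steps, deg (illustrative)
ax = 0.004 * (cowl - 0.6) ** 2 - 0.010 + 0.0005 * rng.standard_normal(cowl.size)
# ax: measured longitudinal acceleration (g); a quadratic bowl plus sensor-level noise

c2, c1, c0 = np.polyfit(cowl, ax, 2)         # least-squares gradient identification
cowl_min_drag = -c1 / (2.0 * c2)             # vertex of the fitted parabola
print(f"minimum-drag cowl position ~ {cowl_min_drag:+.2f} deg")
```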
Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa
NASA Astrophysics Data System (ADS)
Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.
2014-12-01
Evaluating the performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations of photosynthesis are amongst the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis, stomatal response to vapor pressure deficit, maximum carboxylation rate by Rubisco (Ac), rate of photosynthetic electron transport (Aj), and triose phosphate use (At), vary between four Brassica rapa genotypes. Leaf gas exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate-limiting factors for plant growth. A Bayesian framework integrated in TREES used net assimilation (A) as the target to estimate the four limiting factors for each genotype. As a first step, the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of the model estimate eliminated. Next, parameter estimation enabled the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included the maximum carboxylation rate (Vcmax), quantum yield (ϕJ), the ratio between Vcmax and electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown not to play a major role in limiting A in many plants, the inclusion of At in models was evaluated using the deviance information criterion (DIC). The outlier detection resulted in a narrowing of the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show that genotypes vary in how these limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2-223.9 umol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463-0.497 umol m-2 s-1. The added complexity of the TPU limitation did not improve model performance in the genotypes assessed, based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis, genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain appropriate estimates of the key model parameters by incorporating experimental data. In this paper, a systematic procedure for immune system simulation is developed by integrating an ABM and a regression method under the framework of history matching, together with a novel parameter estimation method that incorporates the experimental data for the ABM simulator. First, we employ the ABM as a simulator of the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input parameter space by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated using the particle swarm optimization (PSO) algorithm by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the proposed method not only has good fitting and predictive accuracy but also favorable computational efficiency. PMID:29194393
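As a sketch of the final PSO step, with a toy exponential-decay misfit standing in for the trained GAM emulator and an assumed non-implausible box:

```python
import numpy as np

def misfit(params, data_t, data_y):
    """Toy misfit standing in for the GAM emulator + experimental data."""
    a, b = params
    return np.sum((a * np.exp(-b * data_t) - data_y) ** 2)

rng = np.random.default_rng(11)
data_t = np.linspace(0, 5, 30)
data_y = 2.0 * np.exp(-0.7 * data_t) + 0.02 * rng.standard_normal(30)

lo, hi = np.array([0.1, 0.01]), np.array([5.0, 3.0])   # non-implausible box after history matching
n = 30
x = rng.uniform(lo, hi, (n, 2)); v = np.zeros((n, 2))
pbest = x.copy()
pbest_f = np.array([misfit(p, data_t, data_y) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for it in range(200):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)  # inertia + cognitive + social
    x = np.clip(x + v, lo, hi)
    f = np.array([misfit(p, data_t, data_y) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("PSO estimate (a, b):", gbest)
```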
NASA Astrophysics Data System (ADS)
Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.
2018-06-01
Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, the source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies were applied to eliminate any bias in the determination of source parameters. The Qβ(f) values were estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation, Qβ(f) = (152.9 ± 7)f^(0.82±0.005), is obtained by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress and radiated energy) are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focus earthquakes with low stress drop. The estimate of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by the partial stress drop and low effective stress models. The presence of subsurface fluids at seismogenic depth likely influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation even after taking every possible precaution in considering the effects of finite bandwidth, attenuation and site corrections. The scaling can be improved further with the integration of a large dataset of microearthquakes and the use of a stable and robust approach.
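Fitting the frequency-dependent relation Q(f) = Q0 f^n reduces to linear regression in log-log space; a minimal sketch with synthetic Q estimates:

```python
import numpy as np

freqs = np.array([1.5, 2, 3, 4, 6, 8, 12, 16.0])       # Hz, as in the coda-normalization band
rng = np.random.default_rng(5)
Q_obs = 152.9 * freqs ** 0.82 * np.exp(0.05 * rng.standard_normal(freqs.size))  # synthetic

n, logQ0 = np.polyfit(np.log(freqs), np.log(Q_obs), 1)  # ln Q = ln Q0 + n ln f
print(f"Q(f) ~ {np.exp(logQ0):.1f} f^{n:.2f}")
```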
Statistical techniques for the characterization of partially observed epidemics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Safta, Cosmin; Ray, Jaideep; Crary, David
These techniques appear promising for constructing an integrated, automated detect-and-characterize capability for epidemics that works off biosurveillance data and provides information on a particular ongoing outbreak. Potential uses include crisis management, planning, and resource allocation; the parameter estimation capability is ideal for providing input parameters to an agent-based model, such as the number of index cases, the time of infection, and the infection rate. Non-communicable diseases are easier to characterize than communicable ones: a small anthrax outbreak can be characterized well with 7-10 days of post-detection data, plague takes longer, and large attacks are the easiest to characterize.
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite a growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
An application of fractional integration to a long temperature series
NASA Astrophysics Data System (ADS)
Gil-Alana, L. A.
2003-11-01
Some recently proposed techniques of fractional integration are applied to a long UK temperature series. The tests are valid under general forms of serial correlation and do not require estimation of the fractional differencing parameter. The results show that central England temperatures have increased about 0.23 °C per 100 years in recent history. Attempting to summarize the conclusions for each of the months, we are left with the impression that the highest increase has occurred during the months from October to March.
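For context, the fractional difference operator (1 - B)^d at the heart of such tests has binomial-expansion weights that can be generated recursively; a small sketch with an illustrative d and a toy series:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Binomial-expansion weights of (1 - B)^d, truncated at lag n."""
    w = np.empty(n + 1); w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

d = 0.3                                      # illustrative fractional differencing parameter
w = frac_diff_weights(d, 5)
print(w)                                     # [1, -0.3, -0.105, ...]

rng = np.random.default_rng(2)
x = 0.1 * np.cumsum(rng.standard_normal(500))          # toy temperature anomaly series
y = np.convolve(x, w, mode="valid")                    # fractionally differenced series
```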
Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos
NASA Technical Reports Server (NTRS)
Hill, R. E.
1989-01-01
Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated quantities in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as usually pertains in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially at every step of the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions carried out by the doctor and the patient. A model revised for the values and probabilities perceived by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
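The subjective expected utility model weights each outcome's estimated value by its estimated probability, SEU(a) = Σ p_i u_i; a toy calculation with illustrative numbers:

```python
# Subjective expected utility for two clinical options; the (probability, utility)
# pairs are illustrative estimates that doctor and patient might elicit together.
options = {
    "treat":   [(0.70, 0.9), (0.30, 0.3)],
    "observe": [(0.50, 1.0), (0.50, 0.4)],
}
seu = {a: sum(p * u for p, u in pairs) for a, pairs in options.items()}
print(max(seu, key=seu.get), seu)            # choose the action maximizing SEU
```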
NASA Astrophysics Data System (ADS)
Henderson, Laura S.; Subbarao, Kamesh
2017-12-01
This work presents a case wherein the selection of models when producing synthetic light curves affects the estimation of the size of unresolved space objects. The case illustrates the "inverse crime" of using the same model for the generation of synthetic data and for the data inversion. This is done by using two models to produce the synthetic light curve and later invert it. It is shown that the choice of model indeed affects the estimation of the shape/size parameters. When a higher fidelity model (here, the one that results in the smallest error residuals after the crime is committed) is used to both create and invert the light curve, the estimates of the shape/size parameters are significantly better than those obtained when a comparatively lower fidelity model is used for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data that will later be inverted, as the results might be misleadingly optimistic.
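A toy version of the experiment makes the inverse crime concrete: generate a light curve with a richer model, then invert it with the same model versus a simplified one. Both models and all numbers below are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def high_fidelity(t, size, phase):
    """Toy light-curve model: size scales flux, phase shifts it."""
    return size * (1.2 + np.cos(t - phase) + 0.3 * np.cos(2 * (t - phase)))

def low_fidelity(t, size, phase):
    return size * (1.2 + np.cos(t - phase))  # drops the second harmonic

rng = np.random.default_rng(4)
t = np.linspace(0, 4 * np.pi, 200)
truth = (2.0, 0.5)                           # "true" size and phase
flux = high_fidelity(t, *truth) + 0.02 * rng.standard_normal(t.size)

for invert in (high_fidelity, low_fidelity):
    p, _ = curve_fit(invert, t, flux, p0=(1.0, 0.0))
    resid = np.std(flux - invert(t, *p))
    print(f"{invert.__name__}: size={p[0]:.3f} (true 2.0), resid={resid:.4f}")
```

Inverting with the generating model (the crime) recovers the size almost exactly; the simplified model absorbs the missing harmonic into a biased size estimate.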
Lim, Tau Meng; Cheng, Shanbao; Chua, Leok Poh
2009-07-01
Axial flow blood pumps are generally smaller than centrifugal pumps. This is very beneficial because they can provide a better anatomical fit in the chest cavity, as well as a lower risk of infection. This article discusses the design, levitated responses, and parameter estimation of the dynamic characteristics of a compact hybrid magnetic bearing (HMB) system for axial flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet brushless and sensorless motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative controllers, among which four radial directions are actively controlled and one axial direction is passively controlled. A frequency-domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMB system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. Experimental estimation showed that the dynamic characteristics of the HMB system are dominated by the frequency-dependent stiffness coefficients. By injecting a multifrequency excitation force signal onto the rotor through the HMBs, the experiments showed that the maximum-displacement linear operating range is 20% of the static eccentricity with respect to the rotor-stator gap clearance. The actuator gain was also successfully calibrated, and the parameter estimation technique developed in this study may potentially be extended to the identification and monitoring of the pump's dynamic properties under normal operating conditions with fluid.
Evaluating performances of simplified physically based landslide susceptibility models.
NASA Astrophysics Data System (ADS)
Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale
2015-04-01
Rainfall-induced shallow landslides cause significant damage, including loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis, integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the integration into NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.
Zimmerman, Guthrie S.; Sauer, John; Boomer, G. Scott; Devers, Patrick K.; Garrettson, Pamela R.
2017-01-01
The U.S. Fish and Wildlife Service (USFWS) uses data from the North American Breeding Bird Survey (BBS) to assist in monitoring and management of some migratory birds. However, BBS analyses provide indices of population change rather than estimates of population size, precluding their use in developing abundance-based objectives and limiting applicability to harvest management. Wood Ducks (Aix sponsa) are important harvested birds in the Atlantic Flyway (AF) that are difficult to detect during aerial surveys because they prefer forested habitat. We integrated Wood Duck count data from a ground-plot survey in the northeastern U.S. with AF-wide BBS, banding, parts collection, and harvest data to derive estimates of population size for the AF. Overlapping results between the smaller-scale intensive ground-plot survey and the BBS in the northeastern U.S. provided a means for scaling BBS indices to the breeding population size estimates. We applied these scaling factors to BBS results for portions of the AF lacking intensive surveys. Banding data provided estimates of annual survival and harvest rates; the latter, when combined with parts-collection data, provided estimates of recruitment. We used the harvest data to estimate fall population size. Our estimates of breeding population size and variability from the integrated population model (N̄ = 0.99 million, SD = 0.04) were similar to estimates of breeding population size based solely on data from the AF ground-plot surveys and the BBS (N̄ = 1.01 million, SD = 0.04) from 1998 to 2015. Integrating BBS data with other data provided reliable population size estimates for Wood Ducks at a scale useful for harvest and habitat management in the AF, and allowed us to derive estimates of important demographic parameters (e.g., seasonal survival rates, sex ratio) that were not directly informed by data.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
NASA Astrophysics Data System (ADS)
Zhu, Xiaoyuan; Zhang, Hui; Yang, Bo; Zhang, Guichen
2018-01-01
In order to improve oscillation damping control performance as well as gear shift quality of electric vehicles equipped with an integrated motor-transmission system, a cloud-based shaft torque estimation scheme is proposed in this paper that uses measurable motor and wheel speed signals transmitted over a wireless network. It can help reduce the computational burden of onboard controllers and also relieve the network bandwidth requirements of individual vehicles. Considering possible delays during wireless signal transmission, a delay-dependent full-order observer is designed to estimate the shaft torque in the cloud server. With these random delays modeled by a homogeneous Markov chain, a robust H∞ performance criterion is adopted to minimize the effects of wireless network-induced delays, signal measurement noise, and system modeling uncertainties on the shaft torque estimation error. Observer parameters are derived by solving linear matrix inequalities, and simulation results from acceleration and tip-in, tip-out tests demonstrate the effectiveness of the proposed shaft torque observer design.
Improved Anomaly Detection using Integrated Supervised and Unsupervised Processing
NASA Astrophysics Data System (ADS)
Hunt, B.; Sheppard, D. G.; Wetterer, C. J.
There are two broad signal processing technologies applicable to space object feature identification using non-resolved imagery: supervised processing analyzes a large set of data for common characteristics that can then be used to identify, transform, and extract information from new data taken of the same given class (e.g., a support vector machine); unsupervised processing utilizes detailed physics-based models that generate comparison data that can then be used to estimate parameters presumed to be governed by the same models (e.g., estimation filters). Both processes have been used in non-resolved space object identification and yield similar results, yet arrive at them via vastly different processes. The goal of integrating the two is to achieve even greater performance by building on this process diversity. Specifically, both supervised and unsupervised processing will jointly operate on the analysis of brightness (radiometric flux intensity) measurements reflected by space objects and observed by a ground station to determine whether a particular day conforms to a nominal operating mode (as determined from a training set) or exhibits anomalous behavior where a particular parameter (e.g., attitude, solar panel articulation angle) has changed in some way. It is demonstrated in a variety of different scenarios that the integrated process achieves greater performance than each of the separate processes alone.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map, which provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Senay, Gabriel B.
2008-01-01
The main objective of this study is to present an improved modeling technique, called Vegetation ET (VegET), that integrates commonly used water balance algorithms with a remotely sensed Land Surface Phenology (LSP) parameter to conduct operational vegetation water balance modeling of rainfed systems at the LSP's spatial scale using readily available global data sets. Evaluation of the VegET model was conducted using flux tower data and a two-year simulation for the conterminous US. The VegET model is capable of estimating the actual evapotranspiration (ETa) of rainfed crops and other vegetation types at the spatial resolution of the LSP on a daily basis, replacing the need to estimate crop- and region-specific crop coefficients.
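The VegET idea, scaling reference ET by an LSP-derived vegetation coefficient and a soil-water stress coefficient inside a daily bucket water balance, can be sketched as follows; the single-bucket form and all constants are illustrative, not the published formulation:

```python
import numpy as np

def veget_daily(rain, eto, kcp, whc=100.0):
    """One-bucket daily water balance: ETa = Ks * Kcp * ETo.
    kcp is an LSP-derived vegetation coefficient; whc is the soil
    water-holding capacity in mm. Functional forms are illustrative."""
    sw, eta = 0.5 * whc, []
    for p, et0, kc in zip(rain, eto, kcp):
        ks = sw / whc                         # linear soil-water stress coefficient
        et = ks * kc * et0                    # actual ET for the day (mm)
        sw = min(max(sw + p - et, 0.0), whc)  # excess beyond capacity lost to runoff/drainage
        eta.append(et)
    return np.array(eta)

rng = np.random.default_rng(9)
days = 120
rain = np.where(rng.random(days) < 0.25, rng.exponential(8.0, days), 0.0)
eto = 4.0 + 1.5 * np.sin(np.linspace(0.0, np.pi, days))   # reference ET, mm/day
kcp = 0.3 + 0.6 * np.sin(np.linspace(0.0, np.pi, days))   # greenness-based coefficient
eta = veget_daily(rain, eto, kcp)
print(f"seasonal ETa: {eta.sum():.1f} mm")
```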
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a-posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and the uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
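A minimal version of the global-then-local exploration can be written with SciPy's basin-hopping on a Brune-type log spectrum, generalized here as Ω0 / (1 + (f/fc)^γ) with a fixed t* attenuation term; all values are synthetic:

```python
import numpy as np
from scipy.optimize import basinhopping

def log_spectrum(params, f, tstar=0.02):
    log_omega0, log_fc, gamma = params
    return (log_omega0 - np.log(1.0 + (f / np.exp(log_fc)) ** gamma)
            - np.pi * f * tstar)             # Brune-type source with t* attenuation

f = np.logspace(-0.5, 1.5, 80)               # roughly 0.3-30 Hz
true = (np.log(1e-4), np.log(2.0), 2.0)      # omega0, corner frequency, high-f decay
rng = np.random.default_rng(6)
obs = log_spectrum(true, f) + 0.1 * rng.standard_normal(f.size)

cost = lambda p: np.sum((log_spectrum(p, f) - obs) ** 2)  # L2 misfit in log amplitude
res = basinhopping(cost, x0=(np.log(1e-3), np.log(1.0), 1.5), niter=50, seed=1)
omega0, fc, gamma = np.exp(res.x[0]), np.exp(res.x[1]), res.x[2]
print(f"omega0={omega0:.2e}, fc={fc:.2f} Hz, gamma={gamma:.2f}")
```

Working with log-parameters keeps the moment and corner frequency positive, which loosely mirrors the strong-correlation issue the abstract raises: errors in γ and fc trade off visibly in the recovered values.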
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
NASA Astrophysics Data System (ADS)
Adha, Kurniawan; Yusoff, Wan Ismail Wan; Almanna Lubis, Luluan
2017-10-01
Determining pore pressure and overpressure zones is a compulsory part of oil and gas exploration: the data enhance safety and profitability and prevent drilling hazards. Investigating thermophysical parameters such as temperature and thermal conductivity can improve pore pressure estimation for determining the overpressure mechanism. Since these parameters depend on rock properties, abnormal pore pressure may be reflected as changes in the thermophysical parameter columns. The study was conducted in the "MRI 1" well offshore Sarawak, where a new approach was designed to determine the origin of overpressure. The study included thermophysical parameters to support the velocity analysis method, and petrophysical analyses were also performed. Four thermal facies were identified along the well. Overpressure developed below thermal facies 4, where the pressure reached 38 MPa and the temperature increased significantly. Cross plots of velocity and thermal conductivity show a linear relationship, since both parameters are mainly functions of rock compaction: as the rock becomes more compact, the particles are brought into closer contact, making sound waves travel faster while thermal conductivity increases. In addition, the temperature increase and high heat flow indicated the presence of a fluid expansion mechanism. Shale sonic velocity and density analyses are the common methods for overpressure mechanism determination and pore pressure estimation; adding thermophysical parameters for delineating the overpressure zone enhances the current method, which is a single function of velocity analysis. Thermophysical analysis will improve the understanding of overpressure mechanism determination by providing new input parameters. Thus, integrated thermophysical and velocity analyses are important in investigating overpressure mechanisms and pore pressure estimation during future oil and gas exploitation.
NASA Technical Reports Server (NTRS)
Srivastava, Prashant K.; Han, Dawei; Rico-Ramirez, Miguel A.; O'Neill, Peggy; Islam, Tanvir; Gupta, Manika
2014-01-01
Soil Moisture and Ocean Salinity (SMOS) is the latest mission providing coarse-resolution soil moisture data for land applications. However, the efficient retrieval of soil moisture for hydrological applications depends on optimally choosing the soil and vegetation parameters. The first stage of this work involves the evaluation of SMOS Level 2 products; then several approaches for soil moisture retrieval from SMOS brightness temperature are applied to estimate the Soil Moisture Deficit (SMD). The most widely applied algorithm, the single channel algorithm (SCA) based on the tau-omega model, is used in this study for soil moisture retrieval. In tau-omega, soil moisture is retrieved using the horizontal (H) polarisation, following the Hallikainen dielectric model, roughness parameters, Fresnel's equations and the estimated vegetation optical depth (tau). The roughness parameters are empirically calibrated using numerical optimization techniques. To explore improvements in the retrieval models, modifications have been incorporated in the algorithms with respect to the sources of the parameters: effective temperatures are derived from European Centre for Medium-Range Weather Forecasts (ECMWF) products downscaled using the Weather Research and Forecasting (WRF)-NOAH Land Surface Model and from Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST), while the vegetation optical depth (tau) is derived from the MODIS Leaf Area Index (LAI). All evaluations are performed against SMD, which is estimated using the Probability Distributed Model following careful calibration and validation integrated with sensitivity and uncertainty analysis. The performance obtained after these changes indicates that SCA-H using WRF-NOAH LSM downscaled ECMWF LST produces improved SMD estimation at the catchment scale.
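A zeroth-order tau-omega retrieval can be sketched as a forward model plus a one-dimensional search; the linear dielectric mixing line below is a crude stand-in for the Hallikainen model, and every constant is illustrative:

```python
import numpy as np

def tau_omega_tb(sm, theta_deg=40.0, tau=0.12, omega=0.05, t_surf=290.0):
    """Zeroth-order tau-omega brightness temperature, H polarization.
    A toy linear dielectric mixing stands in for the Hallikainen model."""
    eps = 3.0 + 25.0 * sm                   # crude dielectric constant vs soil moisture
    th = np.radians(theta_deg)
    # Fresnel reflectivity, horizontal polarization, smooth surface
    rh = np.abs((np.cos(th) - np.sqrt(eps - np.sin(th) ** 2)) /
                (np.cos(th) + np.sqrt(eps - np.sin(th) ** 2))) ** 2
    gam = np.exp(-tau / np.cos(th))         # canopy transmissivity
    return (t_surf * (1 - rh) * gam                       # attenuated soil emission
            + t_surf * (1 - omega) * (1 - gam) * (1 + rh * gam))  # canopy emission

# Single-channel retrieval: invert TB_H for soil moisture by a 1-D grid search
tb_obs = 255.0                              # SMOS-like observed brightness temperature (K)
grid = np.linspace(0.02, 0.5, 500)
sm_hat = grid[np.argmin(np.abs(tau_omega_tb(grid) - tb_obs))]
print(f"retrieved soil moisture ~ {sm_hat:.3f} m3/m3")
```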
Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.
2010-01-01
Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain the estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE-simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches, ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters, are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights into the representativeness of hydrogeological property estimates based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
An inverse finance problem for estimation of the volatility
NASA Astrophysics Data System (ADS)
Neisy, A.; Salmani, K.
2013-01-01
The Black-Scholes model, as a base model for pricing in derivatives markets, has some deficiencies, such as ignoring market jumps and treating market volatility as constant. In this article, we introduce a pricing model for European options under a jump-diffusion underlying asset. Then, using appropriate numerical methods, we solve this model, which contains an integral term as well as derivative terms. Finally, considering volatility as an unknown parameter, we estimate it using the proposed model. For this purpose, we formulate an inverse problem, and the volatility is estimated by minimizing a misfit functional with Tikhonov regularization.
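The role of Tikhonov regularization in such ill-posed estimation can be seen in a linear toy problem, with a smoothing kernel standing in for the pricing operator:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50
A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3, n))               # stands in for the volatility profile
b = A @ x_true + 1e-3 * rng.standard_normal(n)      # smoothed, noisy "market" observations

for alpha in (1e-8, 1e-3, 1e-1):
    # Tikhonov: minimize ||Ax - b||^2 + alpha ||x||^2  =>  (A^T A + alpha I) x = A^T b
    x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    print(f"alpha={alpha:g}: error={np.linalg.norm(x - x_true):.3f}")
```

With a nearly vanishing alpha, the ill-conditioned operator amplifies the data noise and the recovered profile is useless; a moderate alpha stabilizes the inversion, which is the trade-off the regularization term controls.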
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
NASA Astrophysics Data System (ADS)
Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad
2017-12-01
This article offers a study on the estimation of heat transfer parameters (the heat transfer coefficient and the thermal diffusivity) using analytical solutions and experimental data for regular geometric shapes (infinite slab, infinite cylinder, and sphere). Analytical solutions are in broad use for experimentally determining these parameters. Here, the method of Finite Integral Transform (FIT) was used to solve the governing differential equations. The temperature change at the centerline of the regular shapes was recorded to determine both the thermal diffusivity and the heat transfer coefficient. Aluminum and brass specimens were used for testing. Experiments were performed under different conditions, such as in a highly agitated water medium (T = 52 °C) and in air (T = 25 °C). Then, with the known slope of the temperature-ratio vs. time curve and the thickness of the slab or the radius of the cylindrical or spherical specimen, the thermal diffusivity and the heat transfer coefficient may be determined. According to the method presented in this study, the estimated thermal diffusivities of aluminum and brass are 8.395 × 10⁻⁵ and 3.42 × 10⁻⁵ m²/s for a slab, 8.367 × 10⁻⁵ and 3.41 × 10⁻⁵ m²/s for a cylindrical rod, and 8.385 × 10⁻⁵ and 3.40 × 10⁻⁵ m²/s for a spherical shape, respectively. The results showed close agreement between the values estimated here and those already published in the literature. The TAAD% is 0.42 and 0.39 for the thermal diffusivity of aluminum and brass, respectively.
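For the slab case, the slope-based recovery of the thermal diffusivity can be sketched as follows; the data are synthetic, and a large Biot number is assumed so that the first eigenvalue of the one-term series is ζ1 ≈ π/2 (in practice ζ1 comes from the measured Biot number).

```python
# One-term series for a slab: theta/theta_i = A1 * exp(-zeta1^2 * alpha*t/L^2),
# so the slope of ln(theta ratio) vs t gives alpha. Bi -> infinity assumed.
import numpy as np

L = 0.01                                    # slab half-thickness [m]
zeta1 = np.pi / 2.0                         # first eigenvalue for Bi -> inf
alpha_true = 8.4e-5                         # "true" diffusivity [m^2/s]
t = np.linspace(0.2, 2.0, 10)               # times within one-term regime [s]
theta_ratio = (4.0 / np.pi) * np.exp(-zeta1**2 * alpha_true * t / L**2)

slope = np.polyfit(t, np.log(theta_ratio), 1)[0]   # slope of ln(theta) vs t
alpha_est = -slope * L**2 / zeta1**2
print(alpha_est)                            # recovers ~8.4e-5 m^2/s
```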
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
Successful assessment of the size of the Ciwidey-Patuha geothermal field requires integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for this field. Therefore, this paper applies an experimental design approach to the geothermal numerical simulation of Ciwidey-Patuha to generate a probabilistic resource assessment. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. The methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam
2015-07-01
Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates, while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets, but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporation in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd
2015-02-01
In recent years, the modeling of long memory properties, or fractionally integrated processes, in stochastic volatility has been applied to financial time series. A time series with structural breaks can generate strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers structural breaks in the data in order to identify true long memory in the time series. Unlike the usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into one. This makes likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least squares estimator (lse) and the quadratic generalized variations (qgv) method, respectively. Finally, the empirical distribution of the unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the model performs fairly well for the index prices of FTSE Bursa Malaysia KLCI.
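The separate drift/volatility estimation idea can be illustrated on the standard (H = 1/2) Ornstein-Uhlenbeck process, for which the least squares and quadratic-variation estimators take a simple form; the fractional case treated in the paper requires the qgv machinery and is not reproduced here.

```python
# Standard OU process dX = -theta*X dt + sigma dW: drift via least squares on
# the discretized AR(1) form, volatility via realized quadratic variation.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, dt, n = 1.5, 0.4, 1e-2, 20000
x = np.zeros(n)
for k in range(n - 1):          # Euler-Maruyama simulation
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

a = np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)       # least-squares AR(1) slope
theta_hat = -np.log(a) / dt                            # drift estimate
sigma_hat = np.sqrt(np.sum(np.diff(x) ** 2) / (n * dt))  # quadratic-variation estimate
print(theta_hat, sigma_hat)     # close to (1.5, 0.4)
```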
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
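A minimal sketch of the "ecological distance" ingredient, least-cost distances over a resistance surface, is given below using Dijkstra's algorithm on a grid graph; the resistance values are invented, and the SCR likelihood itself is not shown.

```python
# Least-cost ("ecological") distance on a hypothetical resistance grid.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(2)
nr, nc = 20, 20
cost = rng.uniform(1.0, 5.0, size=(nr, nc))   # hypothetical resistance surface

def node(i, j):
    return i * nc + j

W = lil_matrix((nr * nc, nr * nc))
for i in range(nr):
    for j in range(nc):
        for di, dj in ((1, 0), (0, 1)):        # 4-neighbour moves
            ii, jj = i + di, j + dj
            if ii < nr and jj < nc:
                w = 0.5 * (cost[i, j] + cost[ii, jj])  # edge = mean resistance
                W[node(i, j), node(ii, jj)] = w
                W[node(ii, jj), node(i, j)] = w

# Least-cost distance from an activity centre at cell (0, 0) to every cell:
d_eco = dijkstra(W.tocsr(), indices=node(0, 0)).reshape(nr, nc)
# A half-normal SCR encounter model would then replace Euclidean distance
# with d_eco, e.g. p = p0 * exp(-d_eco**2 / (2 * sigma**2)).
print(d_eco[-1, -1])
```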
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value; the maximum magnitude at which the Gutenberg-Richter law applies, mmax; and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire procedure, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has a functional form analogous to the frequency-magnitude law, described by a parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; however, the method was recently extended to incorporate more complex forms of GMPEs. With regard to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one, and there is much controversy surrounding this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude, mmin, above which the catalogue is complete becomes important. Thus, the parameter mmin is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and, for that matter, which ones are not to be used - is in order.
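For the b-value specifically, the classical Aki/Utsu maximum-likelihood estimator is the usual starting point; a minimal sketch (the catalogue below is invented, and the dm/2 term assumes magnitudes binned to width dm):

```python
# Aki/Utsu maximum-likelihood b-value for a catalogue complete above m_min.
import numpy as np

def b_value_mle(mags, m_min, dm=0.1):
    """b = log10(e) / (mean(m) - (m_min - dm/2)); dm/2 corrects for binning."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))

mags = np.array([2.1, 2.3, 2.0, 3.4, 2.6, 2.2, 2.9, 2.0, 2.4, 3.1])  # invented
print(b_value_mle(mags, m_min=2.0))
# The mean recurrence rate is then simply lambda = N / T for a span of T years.
```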
Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente
2017-04-29
Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual-antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed onboard current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
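The PDF-truncation idea, replacing an unconstrained Gaussian estimate by the moments of the same Gaussian truncated to physically meaningful bounds, can be sketched for a scalar state; the paper's method is multivariate and coupled to the DKF, and the numbers here are illustrative.

```python
# Moments of a Gaussian state estimate truncated to physical bounds.
from scipy.stats import truncnorm

def truncate_estimate(mean, std, lo, hi):
    """Return the mean/std of N(mean, std^2) truncated to [lo, hi]."""
    a, b = (lo - mean) / std, (hi - mean) / std
    d = truncnorm(a, b, loc=mean, scale=std)
    return d.mean(), d.std()

# A roll-angle estimate of 12 deg +/- 5 deg, constrained to [-10, 10] deg:
print(truncate_estimate(12.0, 5.0, -10.0, 10.0))
```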
NASA Astrophysics Data System (ADS)
Ma, H.
2016-12-01
Land surface parameters from remote sensing observations are critical for monitoring and modeling global climate change and biogeochemical cycles. Current methods for estimating land surface parameters are generally parameter-specific algorithms based on instantaneous physical models, which results in spatial, temporal, and physical inconsistencies among current global products. Moreover, optical and Thermal Infrared (TIR) remote sensing observations are usually used separately, based on different models, and Middle InfraRed (MIR) observations have received little attention due to the complexity of a radiometric signal that mixes both reflected and emitted fluxes. In this paper, we propose a unified algorithm for simultaneously retrieving a total of seven land surface parameters, including Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Temperature (LST), surface emissivity, and downward and upward longwave radiation, by exploiting remote sensing observations from the visible to the TIR domain based on a common physical Radiative Transfer (RT) model and a data assimilation framework. The coupled PROSPECT-VISIR and 4SAIL RT models were used for canopy reflectance modeling. First, LAI was estimated using a data assimilation method that combines MODIS daily reflectance observations and a phenology model. The estimated LAI values were then input into the RT model to simulate surface spectral emissivity and surface albedo. In addition, the background albedo, the transmittance of solar radiation, and the canopy albedo were calculated to produce FAPAR. Once the spectral emissivities of the seven MODIS MIR to TIR bands were retrieved, LST could be estimated from the atmospherically corrected surface radiance using an optimization method. Finally, the upward longwave radiation was estimated using the retrieved LST, the broadband emissivity (converted from spectral emissivity), and the downward longwave radiation (modeled by MODTRAN). These seven parameters were validated over several representative sites with different biome types and compared with MODIS and GLASS products. Results showed that this unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters with high accuracy.
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal-reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that the sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for the 1-D modeling as was used in the data-generating 3-D model. The predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
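A generic sketch of the kind of local sensitivity analysis described, scaled sensitivity coefficients computed by central finite differences rather than any reactive-transport-specific machinery, might look as follows; the demo model is a placeholder.

```python
# Scaled local sensitivities S_ij = (p_j / y_i) * dy_i/dp_j by central
# differences; assumes the model outputs are nonzero at the base point.
import numpy as np

def local_sensitivities(model, p0, rel_step=0.01):
    p0 = np.asarray(p0, dtype=float)
    y0 = np.asarray(model(p0))
    S = np.zeros((y0.size, p0.size))
    for j, pj in enumerate(p0):
        h = rel_step * max(abs(pj), 1e-12)
        pp, pm = p0.copy(), p0.copy()
        pp[j] += h
        pm[j] -= h
        S[:, j] = (np.asarray(model(pp)) - np.asarray(model(pm))) / (2 * h) * pj / y0
    return S

demo = lambda p: np.array([p[0] ** 2 * p[1], p[0] + np.exp(p[1])])  # placeholder
print(local_sensitivities(demo, [2.0, 0.5]))
```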
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
IVS Tropospheric Parameters: Comparison with DORIS and GPS for CONT02
NASA Technical Reports Server (NTRS)
Schuh, Harald; Snajdrova, Kristyna; Boehm, Johannes; Willis, Pascal; Engelhardt, Gerald; Lanotte, Roberto; Tomasi, Paolo; Negusini, Monia; MacMillan, Daniel; Vereshchagina, Iraida
2004-01-01
In April 2002 the IVS (International VLBI Service for Geodesy and Astrometry) set up the Pilot Project - Tropospheric Parameters, and the Institute of Geodesy and Geophysics (IGG), Vienna, was put in charge of coordinating the project. Seven IVS Analysis Centers have joined the project and regularly submitted their estimates of tropospheric parameters (wet and total zenith delays, horizontal gradients) for all IVS-R1 and IVS-R4 sessions since January 1st, 2002. The individual submissions are combined by a two-step procedure to obtain stable, robust, and highly accurate tropospheric parameter time series with one-hour resolution (internal accuracy: 2-4 mm). Starting with July 2003, the combined tropospheric estimates became operational IVS products. In the second half of October 2002 the VLBI campaign CONT02 was observed with 8 stations participating around the globe. At four of them (Gilmore Creek, U.S.A.; Hartebeesthoek, South Africa; Kokee Park, U.S.A.; Ny-Alesund, Norway) total zenith delays from DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) are also available, and these estimates are compared with those from the IGS (International GPS Service) and the IVS. The distance from the DORIS beacons to the co-located GPS and VLBI stations is around 2 km or less for the four sites mentioned above.
Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.
Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A
2017-06-15
Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes, requiring a large number of evolutionary parameters to be estimated and leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multipartite analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these multipartite parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves by >14-fold. Our implementation is part of the BEAST code base, a widely used open source software package to perform Bayesian phylogenetic inference. guy.baele@kuleuven.be. Supplementary data are available at Bioinformatics online.
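The adaptive multivariate kernel is in the spirit of Haario-style adaptive Metropolis, where the proposal covariance is learned from the chain's own history; a minimal single-chain sketch follows (this is not BEAST's implementation, and the target density is a placeholder).

```python
# Adaptive Metropolis: multivariate Gaussian proposal whose covariance is
# re-estimated from past samples (Haario et al.-style scaling 2.38^2/d).
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_start=1000, eps=1e-8):
    rng = np.random.default_rng(0)
    d = len(x0)
    sd = 2.38 ** 2 / d
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, dtype=float), log_post(x0)
    cov = np.eye(d) * 0.1                      # initial proposal covariance
    for t in range(n_iter):
        if t > adapt_start:                    # adapt from chain history
            cov = sd * (np.cov(chain[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

chain = adaptive_metropolis(lambda x: -0.5 * np.sum(x ** 2), np.zeros(2), n_iter=5000)
print(chain.mean(axis=0))   # near zero for the standard-normal placeholder target
```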
NASA Astrophysics Data System (ADS)
Salin, M. B.; Dosaev, A. S.; Konkov, A. I.; Salin, B. M.
2014-07-01
Numerical simulation methods are described for the spectral characteristics of an acoustic signal scattered by multiscale surface waves. The methods include the algorithms for calculating the scattered field by the Kirchhoff method and with the use of an integral equation, as well as the algorithms of surface waves generation with allowance for nonlinear hydrodynamic effects. The paper focuses on studying the spectrum of Bragg scattering caused by surface waves whose frequency exceeds the fundamental low-frequency component of the surface waves by several octaves. The spectrum broadening of the backscattered signal is estimated. The possibility of extending the range of applicability of the computing method developed under small perturbation conditions to cases characterized by a Rayleigh parameter of ≥1 is estimated.
USDA-ARS?s Scientific Manuscript database
In California and other regions vulnerable to water shortages, satellite-derived estimates of key hydrologic parameters can support agricultural producers and water managers in maximizing the benefits of available water supplies. The Satellite Irrigation Management Support (SIMS) project combines N...
ERIC Educational Resources Information Center
Wang, Shiyu; Yang, Yan; Culpepper, Steven Andrew; Douglas, Jeffrey A.
2018-01-01
A family of learning models that integrates a cognitive diagnostic model and a higher-order, hidden Markov model in one framework is proposed. This new framework includes covariates to model skill transition in the learning environment. A Bayesian formulation is adopted to estimate parameters from a learning model. The developed methods are…
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but they lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to the preindustrial cloud condensation nuclei concentration, the preindustrial accumulation mode radius, the width of the accumulation mode, the size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
Raj, Retheep; Sivanandan, K S
2017-01-01
Estimation of elbow dynamics has been the object of numerous investigations. In this work, a solution is proposed for estimating elbow movement velocity and elbow joint angle from Surface Electromyography (SEMG) signals. Here the SEMG signals are acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, Integrated EMG (IEMG) and Zero Crossing (ZC), are extracted from the SEMG signal. The relationships between these time-domain parameters and the elbow angular displacement and angular velocity during extension and flexion of the elbow are studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto-Regressive with eXogenous inputs (NARX) structure based multilayer perceptron neural network (MLPNN) model is proposed for estimating the elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt based algorithm, and it estimates the elbow joint angle and elbow angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average R value obtained for elbow angular displacement prediction is 0.9641, and that for elbow angular velocity prediction is 0.9347. The NARX-structure-based MLPNN model can thus be used for estimating the angular displacement and angular velocity of the elbow with good accuracy.
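The two time-domain features are straightforward to compute; a minimal sketch over one analysis window (the threshold value is an assumption, chosen to suppress noise-induced crossings):

```python
# IEMG and ZC features over one EMG analysis window; signal is synthetic.
import numpy as np

def iemg(x):
    """Integrated EMG: sum of absolute amplitudes over the window."""
    return np.sum(np.abs(x))

def zero_crossings(x, thresh=0.0):
    """Count sign changes whose amplitude step exceeds a noise threshold."""
    s = np.sign(x)
    return int(np.sum((s[:-1] * s[1:] < 0) & (np.abs(np.diff(x)) >= thresh)))

rng = np.random.default_rng(0)
emg = rng.normal(scale=0.1, size=1000) + 0.3 * np.sin(np.linspace(0, 30, 1000))
print(iemg(emg), zero_crossings(emg, thresh=0.05))
```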
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments 'plasma' and 'interstitial volume' and their exchange in terms of plasma flow and capillary permeability. The model function can be defined either by a system of two coupled differential equations or by a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness, and computation speed, depending on the parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated either by numerical integration with the Runge-Kutta method or by convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved precision and robustness of the determined parameters. Precision and stability are limited for curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach for several curve types. The convolution approach excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated for by use of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
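The convolution approach can be sketched with a simplified one-compartment impulse response function in place of the full 2CXM; the arterial input function, noise level, and parameter values below are invented.

```python
# NLLS fit of tissue concentration = AIF (x) IRF via discrete convolution;
# a one-compartment IRF F*exp(-k*t) stands in for the full 2CXM here.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 150)          # minutes, hypothetical sampling
dt = t[1] - t[0]
aif = 5.0 * t * np.exp(-2.0 * t)        # hypothetical arterial input function

def tissue_curve(t, F, k):
    irf = F * np.exp(-k * t)                        # impulse response function
    return np.convolve(aif, irf)[: t.size] * dt     # discrete convolution

rng = np.random.default_rng(3)
c_meas = tissue_curve(t, 0.8, 1.5) + rng.normal(scale=0.01, size=t.size)
popt, _ = curve_fit(tissue_curve, t, c_meas, p0=[0.5, 1.0])
print(popt)                              # recovers ~(0.8, 1.5)
```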
Global and local Joule heating effects seen by DE 2
NASA Technical Reports Server (NTRS)
Heelis, R. A.; Coley, W. R.
1988-01-01
In the altitude region between 350 and 550 km, variations in the ion temperature principally reflect similar variations in the local frictional heating produced by a velocity difference between the ions and the neutrals. Here, the distribution of the ion temperature in this altitude region is shown, and its attributes in relation to previous work on local Joule heating rates are discussed. In addition to the ion temperature, instrumentation on the DE 2 satellite also provides a measure of the ion velocity vector representative of the total electric field. From this information, the local Joule heating rate is derived. From an estimate of the height-integrated Pedersen conductivity it is also possible to estimate the global (height-integrated) Joule heating rate. Here, the differences and relationships between these various parameters are described.
Star clusters: age, metallicity and extinction from integrated spectra
NASA Astrophysics Data System (ADS)
González Delgado, Rosa M.; Cid Fernandes, Roberto
2010-01-01
Integrated optical spectra of star clusters in the Magellanic Clouds and a few Galactic globular clusters are fitted using high-resolution spectral models for single stellar populations. The goal is to estimate the age, metallicity and extinction of the clusters, and evaluate the degeneracies among these parameters. Several sets of evolutionary models that were computed with recent high-spectral-resolution stellar libraries (MILES, GRANADA, STELIB), are used as inputs to the starlight code to perform the fits. The comparison of the results derived from this method and previous estimates available in the literature allow us to evaluate the pros and cons of each set of models to determine star cluster properties. In addition, we quantify the uncertainties associated with the age, metallicity and extinction determinations resulting from variance in the ingredients for the analysis.
Finite hedging in field theory models of interest rates
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Srikant, Marakani
2004-03-01
We use path integrals to calculate hedge parameters and efficacy of hedging in a quantum field theory generalization of the Heath, Jarrow, and Morton [Robert Jarrow, David Heath, and Andrew Morton, Econometrica 60, 77 (1992)] term structure model, which parsimoniously describes the evolution of imperfectly correlated forward rates. We calculate, within the model specification, the effectiveness of hedging over finite periods of time, and obtain the limiting case of instantaneous hedging. We use empirical estimates for the parameters of the model to show that a low-dimensional hedge portfolio is quite effective.
Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations
Palmiotti, Giuseppe; Salvatores, Massimo
2012-01-01
Sensitivity methodologies have had a remarkable record of success since their adoption in the reactor physics field. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.
NASA Astrophysics Data System (ADS)
Bhushan, Awani; Panda, S. K.
2018-05-01
The influence of bimodularity (different stress-strain behaviour in tension and compression) on the fracture behaviour of graphite specimens has been studied, with the fracture toughness (KIc), critical J-integral (JIc), and critical strain energy release rate (GIc) as the characterizing parameters. The bimodularity index (the ratio of the tensile Young's modulus to the compressive Young's modulus) of the graphite specimens was obtained from the normalized test data of tensile and compression experiments. Single edge notch bend (SENB) testing of pre-cracked specimens from the same lot was carried out per ASTM standard D7779-11 to determine the peak load and the critical fracture parameters KIc, GIc, and JIc, using digital image correlation measurements of crack opening displacements. Weibull weakest-link theory was used to evaluate the mean peak load, the Weibull modulus, and the goodness of fit, employing the two-parameter least squares method (LIN2) and the biased (MLE2-B) and unbiased (MLE2-U) maximum likelihood estimators. The stress-dependent elasticity problem of three-dimensional crack progression in the bimodular graphite components was solved by an iterative finite element procedure. The crack-characterizing parameters, the critical stress intensity factor and the critical strain energy release rate, were estimated with the help of the Weibull distribution plot of peak load versus cumulative probability of failure. Experimental and computational fracture parameters were compared qualitatively to describe the significance of bimodularity. The bimodular influence on the fracture behaviour of the SENB graphite is reflected in the experimental GIc values only, which were found to differ from the calculated JIc values. The numerically evaluated bimodular 3D J-integral value is close to the GIc value, whereas the unimodular 3D J-value is nearer to the JIc value. The significant difference between the unimodular JIc and the bimodular GIc indicates that GIc should be considered the standard fracture parameter for bimodular brittle specimens.
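The two-parameter Weibull fit underlying the LIN2/MLE estimates can be sketched with SciPy; the peak loads below are invented.

```python
# Two-parameter Weibull fit of peak-load data: location fixed at zero,
# shape (Weibull modulus) and scale estimated by maximum likelihood.
import numpy as np
from scipy.stats import weibull_min

loads = np.array([1.82, 2.05, 1.95, 2.21, 1.76, 2.10, 1.98, 1.88])  # kN, invented
shape, loc, scale = weibull_min.fit(loads, floc=0)
print(shape, scale)   # Weibull modulus and characteristic load

# Median-rank cumulative failure probabilities, for a Weibull probability plot:
pf = (np.arange(1, loads.size + 1) - 0.3) / (loads.size + 0.4)
```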
Menegaldo, Luciano Luporini; de Oliveira, Liliam Fernandes; Minato, Kin K
2014-04-04
This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step by step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs, and analyse the results. An example of the application's functions is presented using the quadriceps femoris muscle. Individual muscle force estimates for the four components, as well as the knee isometric torque, are shown. The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues.
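The EMG-to-activation front end of a Hill-type model is typically a first-order ODE; a minimal sketch using a Winters/Thelen-style activation time constant follows (the EMG envelope u(t) and the time constants are illustrative, not EMGD-FE defaults).

```python
# First-order EMG-to-activation dynamics, integrated with solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

def activation_ode(t, a, u, tau_act=0.015, tau_deact=0.060):
    """da/dt = (e - a)/tau, with tau switching between activation and
    deactivation values scaled by the current activation level."""
    e = u(t)
    if e > a[0]:
        tau = tau_act * (0.5 + 1.5 * a[0])
    else:
        tau = tau_deact / (0.5 + 1.5 * a[0])
    return [(e - a[0]) / tau]

u = lambda t: 0.6 if 0.1 < t < 0.5 else 0.05   # hypothetical processed EMG
sol = solve_ivp(activation_ode, (0.0, 1.0), [0.05], args=(u,), max_step=1e-3)
print(sol.y[0, -1])   # activation at the end of the simulated second
```

The resulting activation a(t) would then scale the active force of the Hill-type contractile element.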
Observed galaxy number counts on the lightcone up to second order: I. Main result
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertacca, Daniele; Maartens, Roy; Clarkson, Chris, E-mail: daniele.bertacca@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: chris.clarkson@gmail.com
2014-09-01
We present the galaxy number overdensity up to second order in redshift space on cosmological scales for a concordance model. The result contains all general relativistic effects up to second order that arise from observing on the past light cone, including all redshift effects, lensing distortions from convergence and shear, and contributions from velocities, Sachs-Wolfe, integrated Sachs-Wolfe, and time-delay terms. This result will be important for accurate calculation of the bias introduced by nonlinear projection effects on estimates of non-Gaussianity and on precision parameter estimates.
NASA Astrophysics Data System (ADS)
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, the predicted performance of the power battery, especially state-of-charge (SOC) estimation, has attracted great attention in the area of HEVs. However, imprecise SOC estimates degrade the running performance of the HEV. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under fixed driving conditions, is presented. First, the general lower-order battery equivalent circuit model (GLM), which includes a column accumulation model, an open-circuit voltage model, and the SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characterization (HPPC) test data. Next, a VSEKF method for SOC estimation, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the values of the open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained using the VSEKF method for the running performance of the HEV. The SOC estimation error with the VSEKF method falls in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for lithium-ion battery cells and packs obtained using the VSEKF method is significantly improved compared with the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in the lithium-ion packs of HEVs under practical driving conditions.
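A scalar sketch of the core idea, Ah integration as the prediction step and a voltage measurement as the EKF correction, is given below; the OCV curve, cell capacity, and noise levels are illustrative assumptions, and the VSEKF's adaptive weighting is not reproduced.

```python
# Scalar EKF combining coulomb counting (prediction) with a voltage update.
import numpy as np

rng = np.random.default_rng(4)
Q = 2.0 * 3600.0                       # capacity [A*s], hypothetical 2 Ah cell
R0, docv = 0.05, 0.9                   # ohmic resistance [ohm], d(OCV)/d(SOC)
ocv = lambda s: 3.2 + docv * s         # toy linear OCV curve [V]
dt, i_load = 1.0, 1.0                  # 1 s steps, constant 1 A discharge

soc_true, soc_hat, P = 0.90, 0.70, 1e-2   # deliberately biased initial guess
q_proc, r_meas = 1e-10, 1e-4
for _ in range(3600):
    soc_true -= i_load * dt / Q
    v = ocv(soc_true) - i_load * R0 + rng.normal(scale=0.005)  # measured voltage
    soc_hat -= i_load * dt / Q         # predict: Ah (coulomb) integration
    P += q_proc
    H = docv                           # measurement Jacobian d(V)/d(SOC)
    K = P * H / (H * P * H + r_meas)   # Kalman gain
    soc_hat += K * (v - (ocv(soc_hat) - i_load * R0))
    P *= 1 - K * H
print(soc_true, soc_hat)               # estimate converges toward the truth
```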
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
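The kinetics side of such a calculator reduces to the canonical TST expression; a minimal sketch (the partition-function ratio and barrier in the demo call are invented):

```python
# Canonical transition state theory: k(T) = kappa*(kB*T/h)*(Q_TS/Q_react)*exp(-E0/RT).
import numpy as np

KB, H, R = 1.380649e-23, 6.62607015e-34, 8.314462618  # SI constants

def tst_rate(T, q_ratio, e0_kj_mol, kappa=1.0):
    """Rate constant from the TS/reactant partition-function ratio, the
    barrier height E0 [kJ/mol], and an optional tunneling factor kappa."""
    return kappa * (KB * T / H) * q_ratio * np.exp(-e0_kj_mol * 1e3 / (R * T))

print(tst_rate(1000.0, 1e-2, 150.0))   # illustrative values only
```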
A New Approach for Identifying Ionospheric Gradients in the Context of the Gagan System
NASA Astrophysics Data System (ADS)
Kudala, Ravi Chandra
2012-10-01
The Indian Space Research Organization and the Airports Authority of India are jointly implementing the Global Positioning System (GPS) Aided GEO Augmented Navigation (GAGAN) system in order to meet the following required navigation performance (RNP) parameters for aircraft operations: integrity, continuity, accuracy, and availability. Such a system provides the user with orbit, clock, and ionospheric corrections, in addition to ranging signals, via the geostationary earth orbit satellite (GEOSAT). The equatorial ionization anomaly (EIA), due to rapid non-uniform electron-ion recombination, persists over the Indian subcontinent and causes ionospheric gradients. Ionospheric gradients represent the most severe threat to high-integrity differential GNSS systems such as GAGAN. In order to ensure integrity under ionospheric storm conditions, three objectives must be met: careful monitoring, error bounding, and sophisticated storm-front modeling. The first objective is met by continuously tracking data during storms and, on quiet days, determining precise estimates of the threat parameters from reference monitoring stations. The second objective is met by bounding the estimated threat parameters between maximum and minimum typical thresholds. In the context of GAGAN, this work proposes a new method for identifying ionospheric gradients, in addition to determining an appropriate upper bound, in order to sufficiently understand error during storm days. Initially, carrier phase data of the GAGAN network from Indian TEC stations for both storm and quiet days was used to estimate ionospheric spatial and temporal gradients (the vertical ionospheric gradient (σVIG) and the rate of TEC index (ROTI), respectively) in multiple viewing directions. Along similar lines, using the carrier-to-noise ratio (C/N0) for the same data, the carrier-to-noise ratio index (σCNRI) was derived. Subsequently, the one-to-one relationship between σVIG and σCNRI was examined. High values of σVIG were obtained for strong noise signals and corresponded to minimal σCNRI, indicating poor phase estimates and, in turn, an erroneous location. On the other hand, low values of σVIG were produced for weak noise signals and corresponded to maximum σCNRI, indicating strong phase estimates and, in turn, accurate locations. In other words, if a gradient persists in the line-of-sight direction of the GEOSAT for aviation users, the downlink L-band signal itself becomes erroneous. As a result, the en-route aviation user fails to receive an SBAS correction message, defeating the main objective of GAGAN. On the other hand, since the proposed approach enhances the performance of both the aviation user's and the reference monitoring station's receivers, based on σCNRI, the integrity of the SBAS messages themselves can be analyzed and considered for forward corrections.
Brizuela Mendoza, Jorge Aurelio; Astorga Zaragoza, Carlos Manuel; Zavala Río, Arturo; Pattalochi, Leo; Canales Abarca, Francisco
2016-03-01
This paper deals with an observer design for Linear Parameter Varying (LPV) systems with high-order time-varying parameter dependency. The proposed design, considered the main contribution of this paper, corresponds to an observer for the estimation of the actuator fault and the system state in the presence of measurement noise at the system outputs. The observer gains are computed by extending linear systems theory to polynomial LPV systems, in such a way that the observer matches the characteristics of LPV systems. As a result, the actuator fault estimate is ready to be used in a Fault Tolerant Control scheme, where the estimated state with reduced noise should be used to generate the control law. The effectiveness of the proposed methodology has been tested using a riderless bicycle model with dependency on the translational velocity v, where the control objective corresponds to stabilization of the system towards the upright position despite the variation of v along the closed-loop system trajectories.
Hydrological Relevant Parameters from Remote Sensing - Spatial Modelling Input and Validation Basis
NASA Astrophysics Data System (ADS)
Hochschild, V.
2012-12-01
This keynote paper demonstrates how multisensor remote sensing data is used as spatial input for mesoscale hydrological modeling as well as for sophisticated validation purposes. The tasks of water resources management are addressed, as well as the role of remote sensing in regional catchment modeling. Parameters derived from remote sensing discussed in this presentation include land cover, topographical information from digital elevation models, biophysical vegetation parameters, surface soil moisture, evapotranspiration estimates, lake level measurements, determination of snow-covered area, lake ice cycles, soil erosion type, mass wasting monitoring, sealed area, and flash flood estimation. The current capabilities of recent satellite and airborne systems are discussed, and data integration into GIS and hydrological modeling, scaling issues, and quality assessment are also considered. The presentation provides an overview of the authors' own research examples from Germany, Tibet, and Africa (Ethiopia, South Africa), as well as other international research activities. Finally, the paper gives an outlook on upcoming sensors and summarizes the possibilities of remote sensing in hydrology.
Estimations of Mo X-pinch plasma parameters on QiangGuang-1 facility by L-shell spectral analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jian; Qiu, Aici
2013-08-15
Plasma parameters of molybdenum (Mo) X-pinches on the 1-MA QiangGuang-1 facility were estimated by L-shell spectral analysis. X-ray radiation from the X-pinches had a pulse width of 1 ns, and its spectra in the 2-3 keV range were measured with a time-integrated X-ray spectrometer. Relative intensities of spectral features were derived by correcting for the spectral sensitivity of the spectrometer. With the open-source atomic code FAC (Flexible Atomic Code), ion structures and various atomic radiative-collisional rates for the O-, F-, Ne-, Na-, Mg-, and Al-like ionization stages were calculated, and synthetic spectra were constructed for given plasma parameters. By fitting the measured spectra with the modeled ones, the Mo X-pinch plasmas on the QiangGuang-1 facility were found to have an electron density of about 10²¹ cm⁻³ and an electron temperature of about 1.2 keV.
NASA Astrophysics Data System (ADS)
Franch, B.; Skakun, S.; Vermote, E.; Roger, J. C.
2017-12-01
Surface albedo is an essential parameter not only for developing climate models but also for most energy balance studies. While climate models are usually applied at coarse resolution, energy balance studies, which are mainly focused on agricultural applications, require a high spatial resolution. The albedo, estimated through the angular integration of the BRDF, requires an appropriate angular sampling of the surface. However, Sentinel-2A sampling characteristics, with nearly constant observation geometry and low illumination variation, prevent deriving a surface albedo product directly. In this work, we apply an algorithm developed to derive Landsat surface albedo to Sentinel-2A. It is based on the BRDF parameters estimated from the MODerate Resolution Imaging Spectroradiometer (MODIS) CMG surface reflectance product (M{O,Y}D09) using the VJB method (Vermote et al., 2009). Sentinel-2A unsupervised classification images are used to disaggregate the BRDF parameters to the Sentinel-2 spatial resolution. We test the results over five different sites of the US SURFRAD network and plot the results against albedo field measurements. Additionally, we also test this methodology using Landsat-8 images.
Sensitivity analysis of pulse pileup model parameter in photon counting detectors
NASA Astrophysics Data System (ADS)
Shunhavanich, Picha; Pelc, Norbert J.
2017-03-01
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the required accuracy of the parameter values may become more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracies of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse are more important than those of most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Material and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with various statistical parameters of the variables growing stock volume, share of damaged trees, and deadwood volume. The parameters are derived using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables' variations are low (s% < 80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting mean changes of the variables with powers higher than 90%; the highest precision is attained for changes in growing stock volume and the lowest for changes in the share of damaged trees. Two indicators of cost effectiveness also show that the time input spent measuring one variable decreases with the complexity of the inventories. Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
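The "general formula for non-stratified independent samples" referred to above is the standard sample-size expression for estimating a mean; a minimal sketch (z = 1.96 assumes a 95% confidence level):

```python
# Required sample size to estimate a mean within +/- rel_error_percent,
# given the coefficient of variation (CV%) of the variable.
import numpy as np

def required_sample_size(cv_percent, rel_error_percent, z=1.96):
    return int(np.ceil((z * cv_percent / rel_error_percent) ** 2))

# e.g. a variable with 80% coefficient of variation, targeting +/-2% error:
print(required_sample_size(80.0, 2.0))   # ~6147 sample plots
```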
Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Veronica M; Bartell, Scott M
2016-04-01
We recently utilized a suite of environmental fate and transport models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations, and assessed the association of those concentrations with preeclampsia for participants in the C8 Health Project (a cross-sectional study of over 69,000 people who were environmentally exposed to PFOA near a major U.S. fluoropolymer production facility located in West Virginia). However, the exposure estimates from this integrated model relied on default values for key independent exposure parameters, including water ingestion rates, the serum PFOA half-life, and the volume of distribution for PFOA. The aim of the present study is to assess the impact of inter-individual variability and epistemic uncertainty in these parameters on the exposure estimates and, subsequently, on the epidemiological association between PFOA exposure and preeclampsia. We used Monte Carlo simulation to propagate inter-individual variability and epistemic uncertainty through the exposure assessment and reanalyzed the epidemiological association. Inter-individual variability in these parameters mildly impacted the serum PFOA concentration predictions (the lowest mean rank correlation between the estimated serum concentrations in our study and the original predicted serum concentrations was 0.95), and there was a negligible impact on the epidemiological association with preeclampsia (no change in the mean adjusted odds ratio (AOR), and the contribution of exposure uncertainty to the total uncertainty, including sampling variability, was 7%). However, when epistemic uncertainty was added along with the inter-individual variability, serum PFOA concentration predictions and their association with preeclampsia were moderately impacted (the mean AOR of preeclampsia occurrence was reduced from 1.12 to 1.09, and the contribution of exposure uncertainty to the total uncertainty increased up to 33%). In conclusion, our study shows that the change in exposure ranking among the study participants due to variability and epistemic uncertainty in the independent exposure parameters was large enough to cause a 25% bias towards the null. This suggests that the true AOR of the association between PFOA and preeclampsia in this population might be higher than the originally reported AOR and has more uncertainty than indicated by the originally reported confidence interval. Copyright © 2016 Elsevier Inc. All rights reserved.
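The propagation step can be sketched with a simple steady-state one-compartment serum model, C_ss = intake / (k x Vd x body weight) with k = ln 2 / half-life; all distributions and the water concentration below are illustrative placeholders, not the study's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Illustrative (not study-specific) distributions for the three parameters:
water_ingestion = rng.lognormal(mean=np.log(1.2), sigma=0.4, size=n)  # L/day
half_life_years = rng.normal(3.5, 0.8, size=n).clip(1.0)              # serum half-life
vd_l_per_kg     = rng.normal(0.17, 0.03, size=n).clip(0.05)           # volume of distribution

water_conc_ug_per_l = 0.5   # hypothetical drinking-water PFOA level
body_weight_kg = 70.0
absorption = 0.91           # hypothetical absorbed fraction

k_per_day = np.log(2) / (half_life_years * 365.25)
intake_ug_per_day = water_conc_ug_per_l * water_ingestion * absorption
css_ug_per_l = intake_ug_per_day / (k_per_day * vd_l_per_kg * body_weight_kg)

print(f"median serum PFOA: {np.median(css_ug_per_l):.1f} ug/L, "
      f"95% interval: {np.percentile(css_ug_per_l, [2.5, 97.5]).round(1)}")
```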
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines a statistical model of the measured data with tracer kinetic models, integrates dictionary sparse coding (DSC) into a total variation minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on minimization techniques (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic Monte Carlo-generated data and real patient data have been conducted, and the results are very promising.
Modeling structured population dynamics using data from unmarked individuals
Grant, Evan H. Campbell; Zipkin, Elise; Thorson, James T.; See, Kevin; Lynch, Heather J.; Kanno, Yoichiro; Chandler, Richard; Letcher, Benjamin H.; Royle, J. Andrew
2014-01-01
The study of population dynamics requires unbiased, precise estimates of abundance and vital rates that account for the demographic structure inherent in all wildlife and plant populations. Traditionally, these estimates have only been available through approaches that rely on intensive mark–recapture data. We extended recently developed N-mixture models to demonstrate how demographic parameters and abundance can be estimated for structured populations using only stage-structured count data. Our modeling framework can be used to make reliable inferences on abundance as well as recruitment, immigration, stage-specific survival, and detection rates during sampling. We present a range of simulations to illustrate the data requirements, including the number of years and locations necessary for accurate and precise parameter estimates. We apply our modeling framework to a population of northern dusky salamanders (Desmognathus fuscus) in the mid-Atlantic region (USA) and find that the population is unexpectedly declining. Our approach represents a valuable advance in the estimation of population dynamics using multistate data from unmarked individuals and should additionally be useful in the development of integrated models that combine data from intensive (e.g., mark–recapture) and extensive (e.g., counts) data sources.
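The data-generating process behind such models can be sketched in a few lines; below is the simpler unstructured Dail-Madsen-style version (initial abundance, survival, recruitment, imperfect detection), with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_years = 50, 8
lam_init, phi, gamma, p = 10.0, 0.6, 2.0, 0.5  # initial abundance, survival,
                                               # recruitment, detection

N = np.zeros((n_sites, n_years), dtype=int)    # latent abundance
N[:, 0] = rng.poisson(lam_init, n_sites)
for t in range(1, n_years):
    survivors = rng.binomial(N[:, t - 1], phi)
    recruits = rng.poisson(gamma, n_sites)
    N[:, t] = survivors + recruits

counts = rng.binomial(N, p)   # observed counts under imperfect detection
print(counts[:3])
```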
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge of the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The proposed computational scheme simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as Python/Fortran code, demonstrates high performance on a set of synthetic and real data.
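The speed-up rests on a standard identity: the product of Gaussian likelihoods integrates in closed form, so no quadrature is needed. A minimal sketch with hypothetical celerity likelihoods (all numbers are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

mu1, s1, mu2, s2 = 330.0, 15.0, 345.0, 20.0   # hypothetical celerity terms (m/s)

numeric, _ = quad(lambda c: gauss(c, mu1, s1) * gauss(c, mu2, s2),
                  -np.inf, np.inf)
# Closed form: the integral of a product of two Gaussians is itself a
# Gaussian evaluated at the difference of the means.
analytic = gauss(mu1, mu2, np.sqrt(s1**2 + s2**2))
print(numeric, analytic)   # agree to machine precision
```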
Hostetter, Nathan; Gardner, Beth; Evans, Allen F.; Cramer, Bradley M.; Payton, Quinn; Collis, Ken; Roby, Daniel D.
2017-01-01
We developed a state-space mark-recapture-recovery model that incorporates multiple recovery types and state uncertainty to estimate survival of an anadromous fish species. We applied the model to a dataset of out-migrating juvenile steelhead trout (Oncorhynchus mykiss) tagged with passive integrated transponders, recaptured during outmigration, and recovered on bird colonies in the Columbia River basin (2008-2014). Recoveries on bird colonies are often ignored in survival studies because the river reach of mortality is often unknown, which we model as a form of state uncertainty. Median outmigration survival from release to the lower river (river kilometer 729 to 75) ranged from 0.27 to 0.35, depending on year. Recovery probabilities were frequently >0.20 in the first river reach following tagging, indicating that at least one in five fish that died in that reach was recovered on a bird colony. Integrating dead recovery data provided increased parameter precision, estimation of where birds consumed fish, and survival estimates across larger spatial scales. More generally, these modeling approaches provide a flexible framework for integrating multiple sources of tag recovery data into mark-recapture studies.
Resonance region measurements of dysprosium and rhenium
NASA Astrophysics Data System (ADS)
Leinweber, Gregory; Block, Robert C.; Epping, Brian E.; Barry, Devin P.; Rapp, Michael J.; Danon, Yaron; Donovan, Timothy J.; Landsberger, Sheldon; Burke, John A.; Bishop, Mary C.; Youmans, Amanda; Kim, Guinyun N.; Kang, Yeong-Rok; Lee, Man Woo; Drindak, Noel J.
2017-09-01
Neutron capture and transmission measurements have been performed, and resonance parameter analysis has been completed, for dysprosium (Dy) and rhenium (Re). The 60 MeV electron accelerator at the RPI Gaerttner LINAC Center produced neutrons in the thermal and epithermal energy regions for these measurements. Transmission measurements were made using 6Li glass scintillation detectors. The neutron capture measurements were made with a 16-segment NaI multiplicity detector. The detectors for all experiments were located at ≈25 m except for thermal transmission, which was done at ≈15 m. The dysprosium samples included one highly enriched 164Dy metal sample, six liquid solutions of enriched 164Dy, and two natural Dy metal samples. The Re samples were natural metals. Their capture yield normalizations were corrected for their high gamma attenuation. The multi-level R-matrix Bayesian computer code SAMMY was used to extract the resonance parameters from the data. 164Dy resonance data were analyzed up to 550 eV, other Dy isotopes up to 17 eV, and Re resonance data up to 1 keV. Uncertainties due to the resolution function, flight path, burst width, sample thickness, normalization, background, and zero time were estimated and propagated using SAMMY. An additional check of sample-to-sample consistency is presented as an estimate of uncertainty. The thermal total cross sections and neutron capture resonance integrals of 164Dy and Re were determined from the resonance parameters. The NJOY and INTER codes were used to process and integrate the cross sections. Plots of the data, fits, and calculations using ENDF/B-VII.1 resonance parameters are presented.
Bayesian estimation inherent in a Mexican-hat-type neural network
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2016-05-01
Brain functions, such as perception, motor control and learning, and decision making, have been explained within a Bayesian framework: to decrease the effects of noise inherent in the human nervous system or the external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which has been used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to variational inference in a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is sufficiently stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence reveals the relationship between the parameters of the Bayesian estimation and those of the neural network, providing insight for understanding brain functions.
Understanding the demographic drivers of realized population growth rates.
Koons, David N; Arnold, Todd W; Schaub, Michael
2017-10-01
Identifying the demographic parameters (e.g., reproduction, survival, dispersal) that most influence population dynamics can increase conservation effectiveness and enhance ecological understanding. Life table response experiments (LTRE) aim to decompose the effects of change in parameters on past demographic outcomes (e.g., population growth rates). But the vast majority of LTREs and other retrospective population analyses have focused on decomposing asymptotic population growth rates, which do not account for the dynamic interplay between population structure and vital rates that shape realized population growth rates (λ_t = N_{t+1}/N_t) in time-varying environments. We provide an empirical means to overcome these shortcomings by merging recently developed "transient life-table response experiments" with integrated population models (IPMs). IPMs allow for the estimation of latent population structure and other demographic parameters that are required for transient LTRE analysis, and Bayesian versions additionally allow for complete error propagation from the estimation of demographic parameters to derivations of realized population growth rates and perturbation analyses of growth rates. By integrating available monitoring data for Lesser Scaup over 60 yr, and conducting transient LTREs on IPM estimates, we found that the contribution of juvenile female survival to long-term variation in realized population growth rates was 1.6 and 3.7 times larger than that of adult female survival and fecundity, respectively. But a persistent long-term decline in fecundity explained 92% of the decline in abundance between 1983 and 2006. In contrast, an improvement in adult female survival drove the modest recovery in Lesser Scaup abundance since 2006, indicating that the most important demographic drivers of Lesser Scaup population dynamics are temporally dynamic. In addition to resolving uncertainty about Lesser Scaup population dynamics, the merger of IPMs with transient LTREs will strengthen our understanding of demography for many species as we aim to conserve biodiversity during an era of non-stationary global change. © 2017 by the Ecological Society of America.
Phobos laser ranging: Numerical Geodesy experiments for Martian system science
NASA Astrophysics Data System (ADS)
Dirkx, D.; Vermeersen, L. L. A.; Noomen, R.; Visser, P. N. A. M.
2014-09-01
Laser ranging is emerging as a technology for use over (inter)planetary distances, having the advantage of high (mm-cm) precision and accuracy and low mass and power consumption. We have performed numerical simulations to assess the science return in terms of geodetic observables of a hypothetical Phobos lander performing active two-way laser ranging with Earth-based stations. We focus our analysis on the estimation of Phobos and Mars gravitational, tidal and rotational parameters. We explicitly include systematic error sources in addition to uncorrelated random observation errors. This is achieved through the use of consider covariance parameters, specifically the ground station position and observation biases. Uncertainties for the consider parameters are set at 5 mm and at 1 mm for the Gaussian uncorrelated observation noise (for an observation integration time of 60 s). We perform the analysis for a mission duration up to 5 years. It is shown that Phobos Laser Ranging (PLR) can contribute to a better understanding of the Martian system, opening the possibility for improved determination of a variety of physical parameters of Mars and Phobos. The simulations show that the mission concept is especially suited for estimating Mars tidal deformation parameters, estimating degree 2 Love numbers with absolute uncertainties at the 10^-2 to 10^-4 level after 1 and 4 years, respectively, and providing separate estimates for the Martian quality factors at Sun and Phobos-forced frequencies. The estimation of Phobos libration amplitudes and gravity field coefficients provides an estimate of Phobos' relative equatorial and polar moments of inertia with absolute uncertainties of 10^-4 and 10^-7, respectively, after 1 year. The observation of Phobos tidal deformation will be able to differentiate between a rubble pile and monolithic interior within 2 years. For all parameters, systematic errors have a much stronger influence (per unit uncertainty) than the uncorrelated Gaussian observation noise. This indicates the need for the inclusion of systematic errors in simulation studies and special attention to the mitigation of these errors in mission and system design.
NASA Astrophysics Data System (ADS)
Hogue, T. S.; He, M.; Franz, K. J.; Margulis, S. A.; Vrugt, J. A.
2010-12-01
The current study presents an integrated uncertainty analysis and data assimilation approach to improve streamflow predictions while simultaneously providing meaningful estimates of the associated uncertainty. Study models include the National Weather Service (NWS) operational snow model (SNOW17) and rainfall-runoff model (SAC-SMA). The proposed approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) to simultaneously estimate uncertainties in model parameters, forcing, and observations. An ensemble Kalman filter (EnKF) is configured with the DREAM-identified uncertainty structure and applied to assimilating snow water equivalent data into the SNOW17 model for improved snowmelt simulations. Snowmelt estimates then serve as input to the SAC-SMA model to provide streamflow predictions at the basin outlet. The robustness and usefulness of the approach are evaluated for a snow-dominated watershed in the northern Sierra Mountains. This presentation describes the implementation of DREAM and EnKF into the coupled SNOW17 and SAC-SMA models and summarizes study results and findings.
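One building block is the stochastic EnKF analysis step, which nudges each ensemble member toward a perturbed observation; a self-contained sketch on a toy two-state ensemble (SWE plus one model parameter), with all numbers invented:

```python
import numpy as np

def enkf_update(ensemble, y_obs, obs_std, H):
    """Stochastic EnKF analysis step; ensemble is (n_members, n_state)."""
    rng = np.random.default_rng(0)
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    HX = ensemble @ H.T                             # predicted observations
    HA = HX - HX.mean(axis=0)
    P_xy = X.T @ HA / (n - 1)                       # state-obs covariance
    P_yy = HA.T @ HA / (n - 1) + np.diag(np.atleast_1d(obs_std) ** 2)
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    y_pert = y_obs + rng.normal(0, obs_std, size=(n, np.size(y_obs)))
    return ensemble + (y_pert - HX) @ K.T

# Hypothetical example: 100-member ensemble, one SWE observation (mm).
ens = np.random.default_rng(1).normal([120.0, 0.4], [25.0, 0.1], size=(100, 2))
H = np.array([[1.0, 0.0]])                          # observe the first state (SWE)
updated = enkf_update(ens, y_obs=np.array([100.0]), obs_std=10.0, H=H)
print(updated.mean(axis=0))
```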
How tough is bone? Application of elastic-plastic fracture mechanics to bone.
Yan, Jiahau; Mecholsky, John J; Clifton, Kari B
2007-02-01
Bone, with a hierarchical structure that spans from the nano-scale to the macro-scale and a composite design composed of nano-sized mineral crystals embedded in an organic matrix, has been shown to have several toughening mechanisms that increase its toughness. These mechanisms can stop, slow, or deflect crack propagation and cause bone to exhibit a moderate amount of apparent plastic deformation before fracture. In addition, bone contains a high volumetric percentage of organics and water that makes it behave nonlinearly before fracture. Many researchers have used strength or the critical stress intensity factor (fracture toughness) to characterize the mechanical properties of bone. However, these parameters do not account for the energy spent in plastic deformation before bone fracture. To describe the mechanical characteristics of bone more accurately, we applied elastic-plastic fracture mechanics to study bone's fracture toughness. The J integral, a parameter that accounts for the energy consumed in both elastic and plastic deformation, was used to quantify the total energy spent before bone fracture. Twenty cortical bone specimens were cut from the mid-diaphysis of bovine femurs. Ten of them were prepared to undergo transverse fracture and the other ten were prepared to undergo longitudinal fracture. The specimens were prepared following the configuration suggested in ASTM E1820 and tested in distilled water at 37 degrees C. The average J integral of the transverse-fractured specimens was found to be 6.6 kPa·m, which is 187% greater than that of the longitudinal-fractured specimens (2.3 kPa·m). The energy spent in the plastic deformation of the longitudinal-fractured and transverse-fractured bovine specimens was found to be 3.6-4.1 times the energy spent in the elastic deformation. This study shows that the toughness of bone estimated using the J integral is much greater than the toughness measured using the critical stress intensity factor. We suggest that the J integral method is a better technique for estimating the toughness of bone.
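A sketch of the J computation in the ASTM E1820 form, J = K^2(1 - nu^2)/E + eta*A_pl/(B*b0); the material and specimen numbers below are hypothetical, chosen only to land near the reported range:

```python
def j_integral(K, E, nu, A_pl, B, b0, eta=2.0):
    """Total J = elastic + plastic parts, following the ASTM E1820 form.

    K    : stress intensity at fracture (Pa*sqrt(m))
    E    : Young's modulus (Pa); nu: Poisson's ratio
    A_pl : plastic area under the load-displacement curve (J)
    B    : specimen thickness (m); b0: initial ligament length (m)
    eta  : geometry factor (~2 for deep-notch bend specimens)
    """
    j_el = K**2 * (1 - nu**2) / E
    j_pl = eta * A_pl / (B * b0)
    return j_el + j_pl

# Hypothetical cortical-bone numbers, for illustration only (result in Pa*m):
print(j_integral(K=4e6, E=20e9, nu=0.3, A_pl=0.012, B=0.003, b0=0.004))
```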
Coates, Peter S.; Prochazka, Brian G.; Ricca, Mark A.; Halstead, Brian J.; Casazza, Michael L.; Blomberg, Erik J.; Brussee, Brianne E.; Wiechman, Lief; Tebbenkamp, Joel; Gardner, Scott C.; Reese, Kerry P.
2018-01-01
Consideration of ecological scale is fundamental to understanding and managing avian population growth and decline. Empirically driven models for population dynamics and demographic processes across multiple spatial scales can be powerful tools to help guide conservation actions. Integrated population models (IPMs) provide a framework for better parameter estimation by unifying multiple sources of data (e.g., count and demographic data). Hierarchical structure within such models that includes random effects allows for varying degrees of data sharing across different spatiotemporal scales. We developed an IPM to investigate Greater Sage-Grouse (Centrocercus urophasianus) on the border of California and Nevada, known as the Bi-State Distinct Population Segment. Our analysis integrated 13 years of lek count data (n > 2,000) and intensive telemetry (VHF and GPS; n > 350 individuals) data across 6 subpopulations. Specifically, we identified the most parsimonious models among varying random effects and density-dependent terms for each population vital rate (e.g., nest survival). Using a joint likelihood process, we integrated the lek count data with the demographic models to estimate apparent abundance and refine vital rate parameter estimates. To investigate effects of climatic conditions, we extended the model to fit a precipitation covariate for the instantaneous rate of change (r). At the metapopulation extent (i.e., the Bi-State), the annual population rate of change λ (= e^r) did not favor an overall increasing or decreasing trend through the time series. However, annual changes in λ were driven by changes in precipitation (one-year lag effect). At subpopulation extents, we identified substantial variation in λ and demographic rates. One subpopulation clearly decoupled from the trend at the metapopulation extent and exhibited relatively high risk of extinction as a result of low egg fertility. These findings can inform localized, targeted management actions for specific areas, and status of the species for the larger Bi-State.
NASA Technical Reports Server (NTRS)
Canfield, Stephen
1999-01-01
This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
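The core of such a black-box estimator is the textbook Kalman predict/update cycle; a minimal 1-D position/velocity sketch in which the INS-style model propagates and a GPS-style fix updates (the noise settings are illustrative tuning assumptions, not mission values):

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One Kalman filter predict/update cycle."""
    # Predict with the linearized state model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement (e.g., a GPS position fix)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1, dt], [0, 1]])            # constant-velocity model
Q = np.diag([1e-4, 1e-3])                  # assumed process noise
H = np.array([[1.0, 0.0]])                 # GPS observes position only
R = np.array([[4.0]])                      # assumed GPS variance (m^2)
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, z=np.array([1.0]), F=F, Q=Q, H=H, R=R)
print(x)
```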
Quantification of Fluorine Content in AFFF Concentrates
2017-09-29
...and quantitative integrations, a 100 ppm spectral window (FIDRes 0.215 Hz) was scanned using the following acquisition parameters: acquisition time ... Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6120--17-9752.
The CHIC Model: A Global Model for Coupled Binary Data
ERIC Educational Resources Information Center
Wilderjans, Tom; Ceulemans, Eva; Van Mechelen, Iven
2008-01-01
Often problems result in the collection of coupled data, which consist of different N-way N-mode data blocks that have one or more modes in common. To reveal the structure underlying such data, an integrated modeling strategy, with a single set of parameters for the common mode(s), that is estimated based on the information in all data blocks, may…
Test-retest reliability of effective connectivity in the face perception network.
Frässle, Stefan; Paulus, Frieder Michel; Krach, Sören; Jansen, Andreas
2016-02-01
Computational approaches have great potential for moving neuroscience toward mechanistic models of the functional integration among brain regions. Dynamic causal modeling (DCM) offers a promising framework for inferring the effective connectivity among brain regions and thus unraveling the neural mechanisms of both normal cognitive function and psychiatric disorders. While the benefit of such approaches depends heavily on their reliability, systematic analyses of the within-subject stability are rare. Here, we present a thorough investigation of the test-retest reliability of an fMRI paradigm for DCM analysis dedicated to unraveling intra- and interhemispheric integration among the core regions of the face perception network. First, we examined the reliability of face-specific BOLD activity in 25 healthy volunteers, who performed a face perception paradigm in two separate sessions. We found good to excellent reliability of BOLD activity within the DCM-relevant regions. Second, we assessed the stability of effective connectivity among these regions by analyzing the reliability of Bayesian model selection and model parameter estimation in DCM. Reliability was excellent for the negative free energy and good for model parameter estimation, when restricting the analysis to parameters with substantial effect sizes. Third, even when the experiment was shortened, reliability of BOLD activity and DCM results dropped only slightly as a function of the length of the experiment. This suggests that the face perception paradigm presented here provides reliable estimates for both conventional activation and effective connectivity measures. We conclude this paper with an outlook on potential clinical applications of the paradigm for studying psychiatric disorders. Hum Brain Mapp 37:730-744, 2016. © 2015 Wiley Periodicals, Inc.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
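A minimal sketch of applying an LN cascade: convolve the input with a temporal filter, then map through a static nonlinearity to an instantaneous rate. The exponential filter shape, timescale, and rectifying gain below are illustrative choices, not the parameter-free filters derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 2.0                       # 1 ms bins, 2 s of input
t = np.arange(0, T, dt)
stimulus = rng.normal(0.0, 1.0, t.size)  # white-noise current input

# Linear stage: an exponential temporal filter (illustrative shape/timescale)
tau = 0.02                               # 20 ms filter timescale
t_k = np.arange(0, 0.2, dt)
kernel = np.exp(-t_k / tau)
kernel /= kernel.sum() * dt              # normalize to unit integral
filtered = np.convolve(stimulus, kernel, mode="full")[: t.size] * dt

# Static nonlinearity: rectifying map from filtered input to rate (Hz)
def nonlinearity(u, gain=40.0, threshold=0.0):
    return gain * np.maximum(u - threshold, 0.0)

rate = nonlinearity(filtered)
spikes = rng.random(t.size) < rate * dt  # inhomogeneous Poisson spikes
print(f"mean rate: {spikes.mean() / dt:.1f} Hz")
```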
Dolphin biosonar target detection in noise: wrap up of a past experiment.
Au, Whitlow W L
2014-07-01
The target detection capability of bottlenose dolphins in the presence of artificial masking noise was first studied by Au and Penner [J. Acoust. Soc. Am. 70, 687-693 (1981)], in which the dolphins' target detection threshold was determined as a function of the ratio of the echo energy flux density to the estimated received noise spectral density. Such a metric was commonly used in human psychoacoustics despite the fact that echo energy flux density is not dimensionally compatible with noise spectral density, which is average intensity per hertz. Since the earlier detection-in-noise studies, two important parameters have been determined: the dolphin integration time applicable to broadband clicks and the dolphin's auditory filter shape. The inclusion of these two parameters allows estimation of the received energy flux density of the masking noise, so that dolphin target detection can now be expressed as a function of the ratio of the received echo energy to the received noise energy. Using an integration time of 264 μs and an auditory bandwidth of 16.7 kHz, the ratio of echo energy to noise energy at the target detection threshold is approximately 1 dB.
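The unit conversion behind that ratio is simple arithmetic: received noise energy is the noise spectral density times the auditory filter bandwidth times the integration time. A sketch with hypothetical threshold-level numbers (only the integration time and bandwidth come from the abstract):

```python
import math

tau = 264e-6        # dolphin integration time (s)
bw = 16.7e3         # auditory filter bandwidth (Hz)

def echo_to_noise_energy_db(echo_energy_db, noise_psd_db):
    """Convert (echo energy, noise spectral density) into an energy ratio.

    Received noise energy = N0 * bandwidth * integration time, so in dB:
    E_N = N0_dB + 10*log10(bw * tau).
    """
    noise_energy_db = noise_psd_db + 10 * math.log10(bw * tau)
    return echo_energy_db - noise_energy_db

# Hypothetical threshold-level numbers for illustration:
print(round(echo_to_noise_energy_db(echo_energy_db=60.0, noise_psd_db=53.0), 1))
```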
Measurement of Scattering and Absorption Cross Sections of Dyed Microspheres
Gaigalas, Adolfas K; Choquette, Steven; Zhang, Yu-Zhong
2013-01-01
Measurements of absorbance and fluorescence emission were carried out on aqueous suspensions of polystyrene (PS) microspheres with a diameter of 2.5 µm using a spectrophotometer with an integrating sphere detector. The apparatus and the principles of measurements were described in our earlier publications. Microspheres with and without green BODIPY® dye were measured. Placing the suspension inside an integrating sphere (IS) detector of the spectrophotometer yielded (after a correction for fluorescence emission) the absorbance (called A in the text) due to absorption by BODIPY® dye inside the microsphere. An estimate of the absorbance due to scattering alone was obtained by subtracting the corrected BODIPY® dye absorbance (A) from the measured absorbance of a suspension placed outside the IS detector (called A1 in the text). The absorption of the BODIPY® dye inside the microsphere was analyzed using an imaginary index of refraction parameterized with three Gaussian-Lorentzian functions. The Kramers-Kronig relation was used to estimate the contribution of the BODIPY® dye to the real part of the microsphere index of refraction. The complex index of refraction, obtained from the analysis of A, was used to analyze the absorbance due to scattering ((A1 - A) in the text). In practice, the analysis of the scattering absorbance, A1 - A, and the absorbance, A, was carried out in an iterative manner. It was assumed that A depended primarily on the imaginary part of the microsphere index of refraction, with the other parameters playing a secondary role. Therefore A was first analyzed using values of the other parameters obtained from a fit to the absorbance due to scattering, A1 - A, with the imaginary part neglected. The imaginary part obtained from the analysis of A was then used to reanalyze A1 - A and obtain better estimates of the other parameters. After a few iterations, consistent estimates were obtained of the scattering and absorption cross sections in the wavelength region 300 nm to 800 nm. PMID:26401422
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions into the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
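The probabilistic wrapper can be sketched as a plain Monte Carlo loop over input distributions feeding a regression-style risk map; everything below (distributions, coefficients, and the logistic stand-in for the multivariate regression) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Illustrative input distributions (placeholders, not IMM values):
urine_volume_l = rng.lognormal(np.log(1.5), 0.3, n)      # daily urine volume
ca_ox_rs = rng.lognormal(np.log(6.0), 0.4, n)            # relative supersaturation
citrate_ca_ratio = rng.lognormal(np.log(0.3), 0.5, n)

def p_symptomatic(rs, vol, ratio):
    """Hypothetical logistic stand-in for the multivariate regression step."""
    z = -6.0 + 0.5 * rs - 1.0 * vol - 2.0 * ratio
    return 1.0 / (1.0 + np.exp(-z))

p = p_symptomatic(ca_ox_rs, urine_volume_l, citrate_ca_ratio)
events = rng.random(n) < p
print(f"estimated likelihood: {events.mean():.4f} "
      f"(MC standard error {events.std(ddof=1) / np.sqrt(n):.4f})")
```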
An Index and Test of Linear Moderated Mediation.
Hayes, Andrew F
2015-01-01
I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator, a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert (2007) and Preacher, Rucker, and Hayes (2007), as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.
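For one common configuration (first-stage moderation of the X -> M path with a single mediator), the index reduces to a product of two path coefficients; a sketch of the algebra, with coefficient names chosen for illustration:

```latex
% First-stage moderation (the X -> M path depends linearly on W):
M = i_M + a_1 X + a_2 W + a_3 XW + e_M
Y = i_Y + c' X + b M + e_Y
% Indirect effect of X on Y through M, as a linear function of W:
\omega(W) = (a_1 + a_3 W)\, b = a_1 b + \underbrace{a_3 b}_{\text{index}}\, W
```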
Nondestructive Assay Data Integration with the SKB-50 Assemblies - FY16 Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, Stephen Joseph; Fugate, Michael Lynn; Trellue, Holly Renee
2016-10-28
A project to research the application of non-destructive assay (NDA) techniques for spent fuel assemblies is underway at the Central Interim Storage Facility for Spent Nuclear Fuel (for which the Swedish acronym is Clab) in Oskarshamn, Sweden. The research goals of this project include both safeguards and non-safeguards interests. These NDA technologies are designed to strengthen the technical toolkit of safeguards inspectors and others in pursuing the following technical goals more accurately: verify the initial enrichment, burnup, and cooling time of the facility declaration for spent fuel assemblies; detect replaced or missing pins from a given spent fuel assembly to confirm its integrity; estimate plutonium mass and related plutonium and uranium fissile mass parameters in spent fuel assemblies; and estimate heat content and measure reactivity (multiplication).
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), whose flexibility and versatility were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.
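The kinetic-fitting ingredient can be illustrated with a toy two-tracer time-activity curve separation; the one-exponential washout terms below are stand-ins for the full compartmental model, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy time-activity curve: each tracer modeled as a one-exponential washout
# after its injection time (a stand-in for the compartmental terms).
t = np.linspace(0, 60, 121)                    # minutes
t2 = 20.0                                      # second tracer injected at 20 min

def dual_tac(t, a1, k1, a2, k2):
    c1 = a1 * np.exp(-k1 * t)
    c2 = np.where(t >= t2, a2 * np.exp(-k2 * (t - t2)), 0.0)
    return c1 + c2

rng = np.random.default_rng(3)
truth = (100.0, 0.05, 80.0, 0.08)
noisy = dual_tac(t, *truth) + rng.normal(0, 2.0, t.size)

popt, _ = curve_fit(dual_tac, t, noisy, p0=(50, 0.1, 50, 0.1))
recovered_tracer1 = popt[0] * np.exp(-popt[1] * t)   # single-tracer signal
print(popt.round(3))
```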
Orientation estimation algorithm applied to high-spin projectiles
NASA Astrophysics Data System (ADS)
Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.
2014-06-01
High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of the high-spin projectile control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros, and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.
NASA Astrophysics Data System (ADS)
Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.
2004-12-01
This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake, in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. To understand this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake; and (2) the apparent rupture velocity decreases on this segment.
Real time estimation of ship motions using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Triantafyllou, M. S.; Bodson, M.; Athans, M.
1983-01-01
The estimation of the heave, pitch, roll, sway, and yaw motions of a DD-963 destroyer is studied, using Kalman filtering techniques, for application in VTOL aircraft landing. The governing equations are obtained from hydrodynamic considerations in the form of linear differential equations with frequency dependent coefficients. In addition, nonminimum phase characteristics are obtained due to the spatial integration of the water wave forces. The resulting transfer matrix function is irrational and nonminimum phase. The conditions for a finite-dimensional approximation are considered and the impact of the various parameters is assessed. A detailed numerical application for a DD-963 destroyer is presented and simulations of the estimations obtained from Kalman filters are discussed.
Migault, Vincent; Pallas, Benoît; Costes, Evelyne
2016-01-01
In crops, optimizing target traits in breeding programs can be fostered by selecting appropriate combinations of architectural traits which determine light interception and carbon acquisition. In apple tree, architectural traits were observed to be under genetic control. However, architectural traits also result from many organogenetic and morphological processes interacting with the environment. The present study aimed at combining an FSPM built for apple tree, MAppleT, with the genetic determinisms of architectural traits previously described in a bi-parental population. We focused on parameters related to organogenesis (phyllochron and immediate branching) and morphogenesis processes (internode length and leaf area) during the first year of tree growth. Two independent datasets collected in 2004 and 2007 on 116 genotypes, issued from a 'Starkrimson' × 'Granny Smith' cross, were used. The phyllochron was estimated as a function of thermal time, and sylleptic branching was modeled subsequently depending on phyllochron. From a genetic map built with SNPs, marker effects were estimated on four MAppleT parameters with rrBLUP, using 2007 data. These effects were then considered in MAppleT to simulate tree development under the two climatic conditions. The genome-wide prediction model gave consistent estimates of parameter values, with correlation coefficients between observed values and values estimated from SNP markers ranging from 0.79 to 0.96. However, the accuracy of the prediction model under cross-validation schemes was lower. Three integrative traits (the number of leaves, trunk length, and number of sylleptic laterals) were considered for validating MAppleT simulations. Under the 2007 climatic conditions, simulated values were close to observations, highlighting the correct simulation of genetic variability. However, under the 2004 conditions, which were not used for model calibration, the simulations differed from observations. This study demonstrates the possibility of integrating genome-based information in an FSPM for a perennial fruit tree. It also shows that further improvements are required to improve prediction ability. In particular, the temperature effect should be extended, and other factors should be taken into account for modeling GxE interactions. Improvements could also be expected by considering larger populations and by testing other genome-wide prediction models. Despite these limitations, this study opens new possibilities for supporting plant breeding through in silico evaluation of the impact of genotypic polymorphisms on plant integrative phenotypes.
Gliozzi, T M; Turri, F; Manes, S; Cassinelli, C; Pizzi, F
2017-11-01
In recent years, there has been growing interest in the prediction of bull fertility through in vitro assessment of semen quality. A model for fertility prediction based on early evaluation of semen quality parameters, to exclude sires with potentially low fertility from breeding programs, would therefore be useful. The aim of the present study was to identify the most suitable parameters that would provide reliable prediction of fertility. Frozen semen from 18 Italian Holstein-Friesian proven bulls was analyzed using computer-assisted semen analysis (CASA) (motility and kinetic parameters) and flow cytometry (FCM) (viability, acrosomal integrity, mitochondrial function, lipid peroxidation, plasma membrane stability and DNA integrity). Bulls were divided into two groups (low and high fertility) based on the estimated relative conception rate (ERCR). Significant differences were found between fertility groups for total motility, active cells, straightness, linearity, viability and percentage of DNA-fragmented sperm. Correlations were observed between ERCR and some kinetic parameters, and between membrane instability and some DNA integrity indicators. In order to define a model with a strong relation between semen quality parameters and ERCR, backward stepwise multiple regression analysis was applied. Thus, we obtained a prediction model that explained almost half (R^2 = 0.47, P < 0.05) of the variation in the conception rate and included nine variables: five kinetic parameters measured by CASA (total motility, active cells, beat cross frequency, curvilinear velocity and amplitude of lateral head displacement) and four parameters related to DNA integrity evaluated by FCM (degree of chromatin structure abnormality Alpha-T, extent of chromatin structure abnormality (Alpha-T standard deviation), percentage of DNA-fragmented sperm and percentage of sperm with high green fluorescence representative of immature cells). A significant relationship (R^2 = 0.84, P < 0.05) was observed between real and predicted fertility. Once the accuracy of fertility prediction has been confirmed, the model developed in the present study could be used by artificial insemination centers for bull selection or for elimination of poor-fertility ejaculates.
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
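Global sensitivity analysis is commonly run with Sobol indices; a minimal sketch using the SALib package on a toy three-parameter model (the model, bounds, and parameter names are invented for illustration):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy "systems model" output as a function of three kinetic parameters.
def model(x):
    k_on, k_off, k_el = x.T
    return k_on / (k_off + k_el)          # e.g., a steady-state occupancy proxy

problem = {
    "num_vars": 3,
    "names": ["k_on", "k_off", "k_el"],
    "bounds": [[0.1, 1.0], [0.01, 0.5], [0.01, 0.2]],
}

X = saltelli.sample(problem, 1024)        # 1024*(2*3+2) parameter sets
Y = model(X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"].round(2))))   # first-order indices
print(dict(zip(problem["names"], Si["ST"].round(2))))   # total-order indices
```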
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R^2 value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using an optimized BFImax and filter parameter resulted in an R^2 value of 0.66 and a Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
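The Eckhardt filter itself is a one-line recursion, b_t = ((1 - BFImax) a b_{t-1} + (1 - a) BFImax y_t) / (1 - a BFImax), with baseflow capped at streamflow; a sketch using the optimized values reported above (the toy flow series and the initialization are assumptions):

```python
import numpy as np

def eckhardt_baseflow(q, bfi_max=0.491, alpha=0.987):
    """Eckhardt (2005) recursive digital filter; returns the baseflow series.

    Defaults are the optimized LEC-watershed values reported above.
    """
    b = np.empty_like(q, dtype=float)
    b[0] = q[0] * bfi_max                 # simple initialization assumption
    for t in range(1, len(q)):
        b[t] = ((1 - bfi_max) * alpha * b[t - 1]
                + (1 - alpha) * bfi_max * q[t]) / (1 - alpha * bfi_max)
        b[t] = min(b[t], q[t])            # baseflow cannot exceed streamflow
    return b

q = np.array([5.0, 9.0, 20.0, 14.0, 9.0, 7.0, 6.0, 5.5])   # toy daily flows
baseflow = eckhardt_baseflow(q)
direct_runoff = q - baseflow              # the series compared with L-THIA
print(direct_runoff.round(2))
```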
Simulation studies of wide and medium field of view earth radiation data analysis
NASA Technical Reports Server (NTRS)
Green, R. N.
1978-01-01
A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
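For orientation, a plain (non-robust) SARIMA fit on a toy monthly series can be sketched with statsmodels; the robust winsorized/DESA estimation described above is not reproduced here, and the series, orders, and seasonality are illustrative:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Toy monthly dengue-like series with annual seasonality (illustrative only).
rng = np.random.default_rng(0)
idx = pd.date_range("2008-01-01", periods=96, freq="MS")
cases = (50 + 30 * np.sin(2 * np.pi * idx.month / 12)
         + rng.normal(0, 8, idx.size)).round()
y = pd.Series(cases, index=idx)

# A seasonal ARIMA(1,0,1)x(1,1,1,12) candidate; order selection and outlier
# handling are omitted in this sketch.
model = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.summary().tables[1])
forecast = fit.get_forecast(steps=12).predicted_mean
print(forecast.round(1))
```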
Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.
Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B
2017-09-20
The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.
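The U-statistic connection is easy to make concrete: the empirical AUC is the two-sample Mann-Whitney statistic, so ∆AUC is a difference of U-statistics. A minimal sketch with hypothetical risk scores from two nested models:

```python
import numpy as np

def auc_u_statistic(scores_cases, scores_controls):
    """Empirical AUC as the Mann-Whitney two-sample U-statistic:
    P(case score > control score) + 0.5 * P(tie)."""
    s1 = np.asarray(scores_cases)[:, None]
    s0 = np.asarray(scores_controls)[None, :]
    return (s1 > s0).mean() + 0.5 * (s1 == s0).mean()

# Hypothetical scores from a base model and a model with one added predictor.
rng = np.random.default_rng(1)
cases_old, controls_old = rng.normal(1.0, 1, 200), rng.normal(0.0, 1, 400)
cases_new = cases_old + rng.normal(0.3, 0.5, 200)
controls_new = controls_old + rng.normal(0.0, 0.5, 400)

delta_auc = (auc_u_statistic(cases_new, controls_new)
             - auc_u_statistic(cases_old, controls_old))
```

As the abstract cautions, resampling rather than a plug-in SE is the safe choice when comparing nested models under the null.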
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate that existing models assuming methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could substantially bias distribution estimates if left uncorrected. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which model performance for the ON-N and NH₃-N simulations was sensitive. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive parameters for the ON-N and NO₃-N simulations, as measured by global sensitivity analysis.
Marginal estimator for the aberrations of a space telescope by phase diversity
NASA Astrophysics Data System (ADS)
Blanc, Amandine; Mugnier, Laurent; Idier, Jérôme
2017-11-01
In this communication, we propose a novel method for estimating the aberrations of a space telescope from phase diversity data. The images recorded by such a telescope can be degraded by optical aberrations due to design, fabrication or misalignments. Phase diversity is a technique that allows the estimation of aberrations. The only estimator found in the relevant literature is based on a joint estimation of the aberrated phase and the observed object. We recall this approach and study the behavior of this joint estimator by means of simulations. We then propose a novel marginal estimator of the phase alone. It is obtained by integrating the observed object out of the problem; indeed, this object is a nuisance parameter in our problem. This drastically reduces the number of unknowns and provides better asymptotic properties. This estimator is implemented and its properties are validated by simulation. Its performance is equal to or even better than that of the joint estimator, for the same computing cost.
Lee, Kyung-Won; Nam, Mi-Hyun; Lee, Hee-Ra; Hong, Chung-Oui; Lee, Kwang-Won
2017-07-19
Chebulic acid (CA), isolated from T. chebula, which has been reported to treat asthma, is a potent antioxidant resource. Exposure to ambient urban particulate matter (UPM) is considered a risk factor for cardiopulmonary vascular dysfunction. To investigate the protective effect of CA against UPM-mediated collapse of the pulmonary alveolar epithelial (PAE) cell barrier (NCI-H441 cells), barrier integrity parameters and their elements were evaluated in PAE cells. CA was obtained as described in the laboratory's previous reports. UPM, collected in St. Louis, MO, over a 24-month period, was obtained from the National Institute of Standards and Technology and used as a standard reference material. To confirm the protection of PAE barrier integrity, paracellular permeability and the junctional molecules were assessed by determination of transepithelial electrical resistance, Western blotting, RT-PCR, and fluorescent staining. UPM aggravated the generation of reactive oxygen species (ROS) in PAE cells and decreased the mRNA and protein levels of junction molecules and barrier integrity in NCI-H441 cells. However, CA repressed ROS in PAE cells and improved barrier integrity by protecting the junctional parameters in NCI-H441 cells. These data show that CA decreased UPM-induced ROS formation and protected the integrity of the tight junctions of the PAE barrier against UPM exposure.
NASA Astrophysics Data System (ADS)
Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.
2017-07-01
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.
Adult survival and population growth rate in Colorado big brown bats (Eptesicus fuscus)
O'Shea, T.J.; Ellison, L.E.; Stanley, T.R.
2011-01-01
We studied adult survival and population growth at multiple maternity colonies of big brown bats (Eptesicus fuscus) in Fort Collins, Colorado. We investigated hypotheses about survival using information-theoretic methods and mark-recapture analyses based on passive detection of adult females tagged with passive integrated transponders. We constructed a 3-stage life-history matrix model to estimate population growth rate (λ) and assessed the relative importance of adult survival and other life-history parameters to population growth through elasticity and sensitivity analysis. Annual adult survival at 5 maternity colonies monitored from 2001 to 2005 was estimated at 0.79 (95% confidence interval [95% CI] = 0.77-0.82). Adult survival varied by year and roost, with low survival during an extreme drought year, a finding with negative implications for bat populations because of the likelihood of increasing drought in western North America due to global climate change. Adult survival during winter was higher than in summer, and mean life expectancies calculated from survival estimates were lower than maximum longevity records. We modeled adult survival with recruitment parameter estimates from the same population. The study population was growing (λ = 1.096; 95% CI = 1.057-1.135). Adult survival was the most important demographic parameter for population growth. Growth clearly had the highest elasticity to adult survival, followed by juvenile survival and adult fecundity (approximately equivalent in rank). Elasticity was lowest for fecundity of yearlings. The relative importances of the various life-history parameters for population growth rate are similar to those of large mammals. © 2011 American Society of Mammalogists.
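The matrix-model computations described here (λ, sensitivities, elasticities) follow directly from the dominant eigenpair of the projection matrix. A sketch under assumed vital rates: the adult survival entry echoes the 0.79 estimate above, while the remaining entries are hypothetical placeholders.

```python
import numpy as np

# 3-stage (juvenile, yearling, adult) female-based projection matrix.
A = np.array([[0.00, 0.35, 0.45],    # stage-specific fecundities
              [0.67, 0.00, 0.00],    # juvenile survival
              [0.00, 0.79, 0.79]])   # yearling and adult survival

eigvals, W = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]                    # population growth rate (lambda)
w = np.abs(W[:, k].real)                 # stable stage distribution
eigvalsT, V = np.linalg.eig(A.T)
v = np.abs(V[:, np.argmax(eigvalsT.real)].real)  # reproductive values

S = np.outer(v, w) / (v @ w)             # sensitivities d(lambda)/d(a_ij)
E = A * S / lam                          # elasticities (proportional)
print(f"lambda = {lam:.3f}")
print("elasticities:\n", np.round(E, 3))
```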
GPS Water Vapor Tomography Based on Accurate Estimations of the GPS Tropospheric Parameters
NASA Astrophysics Data System (ADS)
Champollion, C.; Masson, F.; Bock, O.; Bouin, M.; Walpersdorf, A.; Doerflinger, E.; van Baelen, J.; Brenot, H.
2003-12-01
The Global Positioning System (GPS) is now a common technique for the retrieval of zenithal integrated water vapor (IWV). Further applications in meteorology also need slant integrated water vapor (SIWV), which allows precise characterization of the high variability of tropospheric water vapor at different temporal and spatial scales. Only precise estimates of IWV and horizontal gradients allow the estimation of accurate SIWV. We present studies developed to improve the estimation of tropospheric water vapor from GPS data. Results are obtained from several field experiments (MAP, ESCOMPTE, OHM-CV, IHOP, ...). First, IWV is estimated using different GPS processing strategies and the results are compared to radiosondes. The role of the reference frame and of the a priori constraints on the coordinates of the fiducial and local stations is generally underestimated; it seems to be of first order in the estimation of the IWV. Second, we validate the estimated horizontal gradients by comparing zenith delay gradients and single site gradients. IWV, gradients and post-fit residuals are used to construct slant integrated water delays. Validation of the SIWV is in progress, comparing GPS SIWV, lidar measurements and high resolution meteorological models (Meso-NH). A careful analysis of the post-fit residuals is needed to separate the tropospheric signal from multipath. The slant tropospheric delays are used to study the 3D heterogeneity of the troposphere. We developed tomographic software to model the three-dimensional distribution of tropospheric water vapor from GPS data. The software is applied to the ESCOMPTE field experiment, a dense network of 17 dual frequency GPS receivers operated in southern France. Three inversions have been successfully compared to three successive radiosonde launches. Good resolution is obtained up to heights of 3000 m.
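At its core, the tomographic step inverts a linear system linking slant observations to voxel water vapor densities. The sketch below is a toy version: the ray-geometry matrix is random rather than built from real satellite-receiver paths, and simple Tikhonov damping stands in for the constraints a production inversion would use.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rays, n_voxels = 60, 20
A = rng.uniform(0.0, 1.0, (n_rays, n_voxels))   # path length of ray i in voxel j
x_true = rng.uniform(5.0, 30.0, n_voxels)       # "true" water vapor densities
siwv = A @ x_true + rng.normal(0, 0.5, n_rays)  # noisy slant observations

# Damped (Tikhonov) least squares: minimize ||A x - siwv||^2 + mu^2 ||x||^2
mu = 0.5
x_hat = np.linalg.solve(A.T @ A + mu**2 * np.eye(n_voxels), A.T @ siwv)
```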
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
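The on-line/off-line distinction drawn above maps onto the Kalman filter versus smoother. A minimal scalar sketch with a local-level (random walk) state, where np.nan marks days without a sample; the paper's estimator additionally models streamflow effects, seasonality, and measurement uncertainty.

```python
import numpy as np

def kalman_filter_smoother(y, q, r, x0=0.0, p0=10.0):
    """Local-level model: x_t = x_{t-1} + w (var q), y_t = x_t + v (var r).
    Returns on-line (filtered) and off-line (RTS-smoothed) estimates."""
    n = len(y)
    xf, pf = np.zeros(n), np.zeros(n)
    x, p = x0, p0
    for t in range(n):
        p = p + q                              # time update
        if not np.isnan(y[t]):                 # measurement update if sampled
            k = p / (p + r)
            x, p = x + k * (y[t] - x), (1 - k) * p
        xf[t], pf[t] = x, p
    xs = xf.copy()                             # Rauch-Tung-Striebel smoother
    for t in range(n - 2, -1, -1):
        g = pf[t] / (pf[t] + q)
        xs[t] = xf[t] + g * (xs[t + 1] - xf[t])
    return xf, xs

# Hypothetical log-concentration series sampled every 7th day.
rng = np.random.default_rng(6)
truth = np.cumsum(rng.normal(0, 0.1, 120))
y = truth + rng.normal(0, 0.3, 120)
y[np.arange(120) % 7 != 0] = np.nan
online, offline = kalman_filter_smoother(y, q=0.01, r=0.09)
```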
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each such quantity, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
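The integration the authors describe can be illustrated for a Poisson GLMM with a log link: observed-scale moments are expectations of the inverse-link transform over the latent normal distribution, which Gauss-Hermite quadrature handles well. This sketch is a hand-rolled illustration, not the QGglmm implementation.

```python
import numpy as np

def poisson_log_data_scale_moments(mu, var_latent, n_nodes=50):
    """Observed-scale mean and phenotypic variance for a Poisson GLMM with
    log link, integrating over the N(mu, var_latent) latent distribution."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = mu + np.sqrt(2.0 * var_latent) * x  # latent values at quadrature nodes
    w = w / np.sqrt(np.pi)                  # weights renormalized for N(mu, var)
    lam = np.exp(z)                         # inverse link
    mean = np.sum(w * lam)
    # var(y) = E[var(y|z)] + var(E[y|z]), with Poisson var(y|z) = lam
    var = np.sum(w * lam) + np.sum(w * lam**2) - mean**2
    return mean, var

m, v = poisson_log_data_scale_moments(mu=0.5, var_latent=0.3)
# Check against the closed form for the mean: exp(mu + var/2)
assert np.isclose(m, np.exp(0.5 + 0.15), rtol=1e-6)
```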
Lidars for smoke and dust cloud diagnostics
NASA Astrophysics Data System (ADS)
Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.
1980-11-01
An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.
NASA Astrophysics Data System (ADS)
Ojeda, David; Le Rolle, Virginie; Tse Ve Koon, Kevin; Thebault, Christophe; Donal, Erwan; Hernández, Alfredo I.
2013-11-01
In this paper, lumped-parameter models of the cardiovascular system, the cardiac electrical conduction system and a pacemaker are coupled to generate mitral flow profiles for different atrio-ventricular delay (AVD) configurations, in the context of cardiac resynchronization therapy (CRT). First, we perform a local sensitivity analysis of left ventricular and left atrial parameters on mitral flow characteristics, namely E and A wave amplitude, mitral flow duration, and mitral flow time integral. Additionally, a global sensitivity analysis over all model parameters is presented to screen for the most relevant parameters that affect the same mitral flow characteristics. Results provide insight into the influence of the left ventricle and left atrium on mitral flow profiles. This information will be useful for future estimation of model parameters that could reproduce the mitral flow profiles and cardiovascular hemodynamics of patients undergoing AVD optimization during CRT.
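A local sensitivity analysis of this kind is typically a normalized finite-difference computation around nominal parameter values. A generic sketch follows, with a toy three-output function standing in for the coupled cardiovascular model (all names and values hypothetical).

```python
import numpy as np

def local_sensitivity(model, theta, names, rel_step=0.01):
    """Normalized local sensitivities S_ij = (theta_j / y_i) * dy_i/dtheta_j,
    computed by central finite differences around the nominal vector theta."""
    theta = np.asarray(theta, dtype=float)
    y0 = np.asarray(model(theta), dtype=float)
    S = {}
    for j, name in enumerate(names):
        h = rel_step * theta[j]
        up, dn = theta.copy(), theta.copy()
        up[j] += h
        dn[j] -= h
        dy = (np.asarray(model(up)) - np.asarray(model(dn))) / (2.0 * h)
        S[name] = theta[j] * dy / y0
    return S

# Toy stand-in: outputs (E amplitude, A amplitude, flow duration) from
# hypothetical parameters (elastance, valve resistance, AVD).
def toy_model(p):
    e, r, avd = p
    return [e / r, e * avd, r + avd]

S = local_sensitivity(toy_model, [2.0, 1.5, 0.12], ["Elv", "Rmv", "AVD"])
```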
Building a Predictive Capability for Decision-Making that Supports MultiPEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua Daniel
Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.
Source Parameter Estimation using the Second-order Closure Integrated Puff Model
The sensor measurements are categorized as triggered and non-triggered based on the recorded concentration measurements and a threshold concentration value. Using each measured value, sources of adjoint material are created from the triggered and non-triggered sensors, and the adjoint transport equations are solved to predict the adjoint concentration fields. The adjoint source strength is inversely proportional to the concentration measurement.
NASA Technical Reports Server (NTRS)
1974-01-01
All general purpose equipment items contained in the final carry-on laboratory (COL) design concepts are described in terms of specific requirements identified for COL use, hardware status, and technical parameters such as weight, volume, power, range, and precision. Estimated costs for each item are given, along with projected development times.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, Steven M.
2001-01-01
Since most advanced material systems (for example metallic-, polymer-, and ceramic-based systems) being currently researched and evaluated are for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model to describe the deformation behavior of the materials of interest, extensive research efforts have been made over the years on the phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center in conjunction with the University of Akron has emphasized concurrently addressing three important and related areas: that is, 1) Mathematical formulation; 2) Algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; 3) Parameter estimation for characterizing the model. This concurrent perspective to constitutive modeling has enabled the overcoming of the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis. These obstacles are: 1) Lack of efficient and robust integration algorithms; 2) Difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.
Assessment of the integrity of concrete bridge structures by acoustic emission technique
NASA Astrophysics Data System (ADS)
Yoon, Dong-Jin; Park, Philip; Jung, Juong-Chae; Lee, Seung-Seok
2002-06-01
This study was aimed at developing a new method for assessing the integrity of concrete structures. In particular, the acoustic emission (AE) technique was used in both laboratory experiments and field applications. From the previous laboratory study, we confirmed that AE analysis provides a promising approach for estimating the level of damage and distress in concrete structures. The Felicity ratio, one of the key parameters for assessing damage, exhibits a favorable correlation with the overall damage level. The total number of AE events under stepwise cyclic loading also showed good agreement with the damage level. In this study, the newly suggested technique was applied to several concrete bridges in Korea in order to verify its applicability in the field. The AE response was analyzed to obtain key parameters such as the total number and rate of AE events, AE parameter analysis for each event, and the characteristic features of the waveform, as well as Felicity ratio analysis. A stepwise loading-unloading procedure for AE generation was introduced in the field tests by using vehicles of different weights. AE event rate and AE generation behavior differed in many respects according to the condition of the bridge, for instance whether it was new or old. The results showed that the suggested analysis method is a promising approach for assessing the integrity of concrete structures.
Pagès, Loïc; Picon-Cochard, Catherine
2014-10-01
Our objective was to calibrate a model of the root system architecture on several Poaceae species and to assess its value to simulate several 'integrated' traits measured at the root system level: specific root length (SRL), maximum root depth and root mass. We used the model ArchiSimple, made up of sub-models that represent and combine the basic developmental processes, and an experiment on 13 perennial grassland Poaceae species grown in 1.5-m-deep containers and sampled at two different dates after planting (80 and 120 d). Model parameters were estimated almost independently using small samples of the root systems taken at both dates. The relationships obtained for calibration validated the sub-models, and showed species effects on the parameter values. The simulations of integrated traits were relatively correct for SRL and were good for root depth and root mass at the two dates. We obtained some systematic discrepancies that were related to the slight decline of root growth in the last period of the experiment. Because the model allowed correct predictions on a large set of Poaceae species without global fitting, we consider that it is a suitable tool for linking root traits at different organisation levels. © 2014 INRA. New Phytologist © 2014 New Phytologist Trust.
An integrated control scheme for space robot after capturing non-cooperative target
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-06-01
Identifying the mass properties and eliminating the unknown angular momentum of a space robotic system after capturing a non-cooperative target poses a great challenge. This paper focuses on designing an integrated control framework which includes a detumbling strategy, coordination control and parameter identification. Firstly, inverted and forward chain approaches are synthesized for the space robot to obtain the dynamic equation in operational space. Secondly, a detumbling strategy is introduced using elementary functions with normalized time, while the imposed end-effector constraints are considered. Next, a coordination control scheme for stabilizing both the base and the end-effector based on impedance control is implemented under the target's parameter uncertainty. With the measurements of the forces and torques exerted on the target, its mass properties are estimated during the detumbling process accordingly. Simulation results are presented using a 7 degree-of-freedom kinematically redundant space manipulator, which verifies the performance and effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks accurately and effectively. The prevailing solution is to integrate multiple sensors and take advantage of their complementary strengths. To obtain a holistic 3D profile, the data from different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features, for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments show the measurement result for a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
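The transformation-estimation step can be sketched with the closed-form SVD (Procrustes) solution for a rigid motion between corresponding point sets; the paper's final adjustment uses a Generalized Gauss-Markoff model instead, and the artifact points below are hypothetical.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    rows of P and Q being corresponding 3D points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard vs. reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# Hypothetical corresponding artifact points seen by the two sensors.
rng = np.random.default_rng(3)
P = rng.uniform(-1.0, 1.0, (30, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R_est, t_est = rigid_transform(P, Q)
```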
Deep Unfolding for Topic Models.
Chien, Jen-Tzung; Lee, Chao-Hsi
2018-02-01
Deep unfolding provides an approach to integrate probabilistic generative models and deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, the unsupervised and supervised topic models are inferred via the variational inference algorithm, where the model parameters are estimated by maximizing the lower bound of the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and the model parameters tied across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible integrative approach to managing large datasets and automatically estimating parameters. Due to the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current ones. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software, with existing tools and methods allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data and model complexity.
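As an example of the kind of fit BGFit automates, the sketch below fits a Zwietering-parameterized Gompertz curve to a synthetic growth series with scipy; BGFit layers data management, permissions, and model bookkeeping on top of fits like this.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, mu, lam):
    """Modified Gompertz curve (Zwietering parameterization):
    a = asymptote, mu = maximum specific growth rate, lam = lag time."""
    return a * np.exp(-np.exp(mu * np.e / a * (lam - t) + 1.0))

# Synthetic growth measurements over 24 h.
t = np.linspace(0.0, 24.0, 25)
rng = np.random.default_rng(4)
y = gompertz(t, 1.8, 0.35, 4.0) + rng.normal(0, 0.02, t.size)

popt, pcov = curve_fit(gompertz, t, y, p0=[2.0, 0.3, 3.0])
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties for (a, mu, lam)
```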
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to the study of the evolution of high-apogee twelve-hour orbits of artificial Earth's satellites. We describe parameters of the motion model used for the artificial Earth's satellite such that the principal gravitational perturbations of the Moon and Sun, nonsphericity of the Earth, and perturbations from the light pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of an artificial satellite, we use both numeric and analytic methods. To select initial parameters of the twelve-hour orbit, we assume that the path of the satellite along the surface of the Earth is stable. Results obtained by the analytic method and by the numerical integration of the evolving system are compared. For intervals of several years, we obtain estimates of oscillation periods and amplitudes for orbital elements. To verify the results and estimate the precision of the method, we use the numerical integration of rigorous (not averaged) equations of motion of the artificial satellite: they take into account forces acting on the satellite substantially more completely and precisely. The described method can be applied not only to the investigation of orbit evolutions of artificial satellites of the Earth; it can be applied to the investigation of the orbit evolution for other planets of the Solar system provided that the corresponding research problem will arise in the future and the considered special class of resonance orbits of satellites will be used for that purpose.
Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.
2008-01-01
A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for the "low-L" regime (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) This is made possible by the inclusion of foreground model parameters in Gibbs sampling, and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework: 1) integration with hybrid likelihood (or posterior) code for cosmological parameters; 2) other uncertainties in instrumental systematics (i.e., beam uncertainties, noise estimation, calibration errors, and others).
Simple satellite orbit propagator
NASA Astrophysics Data System (ADS)
Gurfil, P.
2008-06-01
An increasing number of space missions require on-board autonomous orbit determination. The purpose of this paper is to develop a simple orbit propagator (SOP) for such missions. Since most satellites are limited by the available processing power, it is important to develop an orbit propagator that uses limited computational and memory resources. In this work, we show how to choose state variables for propagation using the simplest numerical integration scheme available: the explicit Euler integrator. The new state variables are derived by the following rationale: apply a variation-of-parameters not on the gravity-affected orbit, but rather on the gravity-free orbit, and treat the gravity as a generalized force. This ultimately leads to a state vector comprising the inertial velocity and a modified position vector, wherein the product of velocity and time is subtracted from the inertial position. It is shown that the explicit Euler integrator, applied to the new state variables, becomes a symplectic integrator, preserving the Hamiltonian and the angular momentum (or a component thereof in the case of oblateness perturbations). The main application of the proposed propagator is estimation of mean orbital elements. It is shown that the SOP is capable of estimating the mean elements with an accuracy that is comparable to a high-order integrator that consumes an order of magnitude more computational time than the SOP.
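A minimal sketch of the idea under a two-body-gravity assumption: explicit Euler is applied to the modified state (v, u) with u = r - v*t, so that du/dt = -t*g(r).

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def propagate(r0, v0, h, n_steps):
    """Explicit Euler on (v, u), u = r - v*t; two-body gravity only."""
    t, v, u = 0.0, v0.astype(float), r0.astype(float)  # u = r at t = 0
    for _ in range(n_steps):
        r = u + v * t
        g = -MU * r / np.linalg.norm(r) ** 3
        v = v + h * g          # dv/dt = g(r)
        u = u - h * t * g      # du/dt = -t * g(r)
        t += h
    return u + v * t, v        # recover inertial position and velocity

# Hypothetical near-circular LEO initial conditions.
r0 = np.array([7000.0, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / 7000.0), 0.0])
r, v = propagate(r0, v0, h=1.0, n_steps=6000)
```

Expanding one step shows r_{k+1} = r_k + h*v_{k+1}, which is exactly the semi-implicit (symplectic) Euler map, consistent with the conservation properties claimed above.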
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Pullum, Laura L; Steed, Chad A
In this position paper, we describe the design and implementation of the Oak Ridge Bio-surveillance Toolkit (ORBiT): a collection of novel statistical and machine learning tools implemented for (1) integrating heterogeneous traditional (e.g. emergency room visits, prescription sales data, etc.) and non-traditional (social media such as Twitter and Instagram) data sources, (2) analyzing large-scale datasets and (3) presenting the results from the analytics as a visual interface for the end-user to interact with and provide feedback. We present examples of how ORBiT can be used to summarize extremely large-scale datasets effectively and how user interactions can translate into the data analytics process for bio-surveillance. We also present a strategy to estimate parameters relevant to disease spread models from near-real-time data feeds and show how these estimates can be integrated with disease spread models for large-scale populations. We conclude with a perspective on how integrating data and visual analytics could lead to better forecasting and prediction of disease spread as well as improved awareness of disease-susceptible regions.
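One plausible form of the parameter estimation step, sketched here as an assumption rather than ORBiT's documented implementation, is nonlinear least squares of a compartmental SIR model against an infected-fraction series derived from a data feed:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir_rhs(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def infected(params, t_obs, y0):
    beta, gamma = params
    sol = solve_ivp(sir_rhs, (t_obs[0], t_obs[-1]), y0,
                    args=(beta, gamma), t_eval=t_obs, rtol=1e-8)
    return sol.y[1]

# Synthetic "observed" infected fractions over 60 days.
t_obs = np.arange(0.0, 60.0)
y0 = [0.999, 0.001, 0.0]
rng = np.random.default_rng(5)
obs = infected([0.35, 0.10], t_obs, y0) * (1 + rng.normal(0, 0.05, t_obs.size))

fit = least_squares(lambda p: infected(p, t_obs, y0) - obs,
                    x0=[0.5, 0.2], bounds=([0.0, 0.0], [5.0, 5.0]))
beta_hat, gamma_hat = fit.x
```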
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental data available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentration of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeting the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed a lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
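An EFAST-style analysis can be sketched with the SALib package (an assumption; the study does not name its implementation). The toy 'reemission' response and the parameter names and bounds below are hypothetical stand-ins for the partitioning and soil inputs discussed above.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["log_Kaw", "log_Ksw", "soil_water"],
    "bounds": [[-4.0, -1.0], [2.0, 5.0], [0.1, 0.4]],
}

X = fast_sampler.sample(problem, 1000)   # eFAST sampling design
# Toy reemission response: favors volatilization (high Kaw), is damped by
# sorption (high Ksw) and by soil water content.
Y = np.array([10.0 ** kaw / (10.0 ** ksw * sw) for kaw, ksw, sw in X])

Si = fast.analyze(problem, Y)            # first-order (S1) and total (ST) indices
print(Si["S1"], Si["ST"])
```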
NASA Astrophysics Data System (ADS)
Janković, Bojan
2009-10-01
The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) value was approximately constant (Ea,int = 95.2 kJ mol⁻¹ and Ea,diff = 96.6 kJ mol⁻¹, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol⁻¹), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(Ea)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
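The route from the Weibull scale parameter to the Arrhenius estimate can be sketched numerically: taking k(T) = 1/η(T), the slope of ln(1/η) against 1/T gives -Ea/R. The η values below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

T = np.array([380.0, 400.0, 420.0, 440.0])       # operating temperatures, K
eta = np.array([9000.0, 2000.0, 550.0, 180.0])   # hypothetical Weibull scales, s

R = 8.314                                        # gas constant, J mol^-1 K^-1
slope, intercept = np.polyfit(1.0 / T, np.log(1.0 / eta), 1)
Ea_kJ = -slope * R / 1000.0                      # apparent activation energy
A = np.exp(intercept)                            # pre-exponential factor, s^-1
print(f"Ea = {Ea_kJ:.1f} kJ/mol")
```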
NASA Astrophysics Data System (ADS)
Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb
2014-01-01
The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived by using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E). This parameter represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method was used based on an adequate algorithm. The Langmuir model was adopted as a local adsorption isotherm. This model is developed by using the grand canonical ensemble, which allows defining the physico-chemical parameters involved in the adsorption process. The AED function is estimated by a normal Gaussian function. This method is applied to the adsorption isotherms of nitrogen, methane and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the gas molecules behaviour during the adsorption process and gives new physical interpretations at microscopic levels.
A mixed-effects regression model for longitudinal multivariate ordinal data.
Liu, Li C; Hedeker, Donald
2006-03-01
A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
Analysis of neutron spectrum effects on primary damage in tritium breeding blankets
NASA Astrophysics Data System (ADS)
Choi, Yong Hee; Joo, Han Gyu
2012-07-01
The effect of the neutron spectrum on primary damage in a structural material of a tritium breeding blanket is investigated with a newly established recoil spectrum estimation system. First, a recoil spectrum generation code is developed to obtain the energy spectrum of primary knock-on atoms (PKAs) for a given neutron spectrum utilizing the latest ENDF/B data. Second, a method for approximating the high energy tail of the recoil spectrum is introduced to avoid expensive molecular dynamics calculations for high energy PKAs, using the concept of the recoil energy of the secondary knock-on atoms from the INtegration of CAScades (INCAS) model. Third, the modified spectrum is combined with a set of molecular dynamics calculation results to estimate primary damage parameters such as the number of surviving point defects. Finally, the neutron spectrum is varied by changing the material of the spectral shifter, and the resulting change in the primary damage parameters is examined.
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function, and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body, and from it the RCS, is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
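The rational-function model at the heart of MBPE can be illustrated with a linear least-squares fit to sampled responses; note that the paper instead obtains the coefficients from frequency derivatives of the EFIE, and the one-resonance response below is a hypothetical stand-in.

```python
import numpy as np

def fit_rational(f, H, n_num=4, n_den=4):
    """Fit H(f) ~ P(f)/Q(f) with Q's constant term fixed to 1, by solving
    the linearized system P(f) - H(f)*(Q(f) - 1) = H(f) in least squares."""
    cols = [f ** k for k in range(n_num + 1)]
    cols += [-H * f ** k for k in range(1, n_den + 1)]
    c, *_ = np.linalg.lstsq(np.column_stack(cols), H, rcond=None)
    a = c[: n_num + 1]
    b = np.concatenate(([1.0], c[n_num + 1:]))
    return a, b

def eval_rational(f, a, b):
    return np.polyval(a[::-1], f) / np.polyval(b[::-1], f)

# Hypothetical sampled response with one resonance near f = 1.3.
f = np.linspace(0.5, 2.0, 15)
H = 1.0 / (1.0 - (f / 1.3) ** 2 + 0.05j * f)

a, b = fit_rational(f, H)
f_dense = np.linspace(0.5, 2.0, 300)
H_dense = eval_rational(f_dense, a, b)   # wideband response from few samples
```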
NASA Astrophysics Data System (ADS)
Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.
2017-12-01
Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have the potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination. Based on GIS coverages of variables related to soils, land use and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach that relates the GIS variables to the VFM parameters and predictions. Future work will explore applications at larger scales with direct integration of the statistical prediction model with the mechanistic VFM.
NASA Technical Reports Server (NTRS)
Lissauer, Jack J.; Rivera, Eugenio J.; DeVincenzi, Donald (Technical Monitor)
2001-01-01
We present results of long-term numerical orbital integrations designed to test the stability of the three-planet system orbiting upsilon Andromedae, and short-term integrations to test whether mutual perturbations among the planets can be used to determine planetary masses. Our initial conditions are based on recent fits to the radial velocity data obtained by the planet search group at Lick Observatory. The new fits result in significantly more stable systems than did the initially announced planetary parameters. Our integrations using the 2000 February parameters show that if the system is nearly planar, then it is stable for at least 100 Myr for m_f = 1/sin i ≤ 4. In some stable systems, the eccentricity of the inner planet experiences large oscillations. The relative periastra of the outer two planets' orbits librate about 0 deg in most of the stable systems; if future observations imply that the periastron longitudes of these planets are very closely aligned at the present epoch, dynamical simulations may provide precise estimates for the masses and orbital inclinations of these two planets.
NASA Astrophysics Data System (ADS)
Tsai, Nan-Chyuan; Sue, Chung-Yang
2010-02-01
Owing to imposed but undesired accelerations, such as those arising from quadrature error and cross-axis perturbation, a micro-machined gyroscope is not unconditionally retained at its resonant mode. Once the preset resonance is not sustained, the performance of the micro-gyroscope is accordingly degraded. In this article, a direct model reference adaptive control loop, integrated with a modified disturbance estimating observer (MDEO), is proposed to guarantee resonant oscillations at the drive mode and to counterbalance the undesired disturbance caused mainly by quadrature error and cross-axis perturbation. The controller parameters are updated on-line using the dynamic error between the MDEO output and the expected response. In addition, Lyapunov stability theory is employed to examine the stability of the closed-loop control system. Finally, the efficacy of the scheme for a time-varying angular rate, the quantity the gyroscope is designed to detect and measure, is verified by intensive simulations.
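The paper's MDEO-integrated loop is not reproduced here, but a minimal MIT-rule model reference adaptive control sketch conveys the core idea of driving a plant toward a reference model by on-line parameter adaptation; the plant, gains, and reference below are all assumed toy values.

    import numpy as np

    # MIT-rule adaptive gain sketch: plant and reference model share the same
    # pole but differ in gain; the feedforward gain theta adapts on-line so
    # the plant output tracks the reference model output.
    a, k, k0 = 2.0, 0.5, 2.0          # pole, unknown plant gain, model gain
    gamma, dt = 2.0, 1e-3             # adaptation rate, time step
    y = ym = theta = 0.0
    for n in range(int(60 / dt)):
        r = np.sign(np.sin(2 * np.pi * 0.1 * n * dt))  # square-wave reference
        y += dt * (-a * y + k * theta * r)             # plant with adaptive gain
        ym += dt * (-a * ym + k0 * r)                  # reference model
        e = y - ym                                     # model-following error
        theta += dt * (-gamma * e * ym)                # MIT-rule update
    print(round(theta, 2))  # approaches k0/k = 4 under these assumptions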
NASA Technical Reports Server (NTRS)
Levine, H.
1982-01-01
The calculation of power output from a (finite) linear array of equidistant point sources is investigated with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a two-parameter definite integral involving the Fejér kernel function K_N, where N denotes a (positive) integer; these results also permit a quantitative accounting of the energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength. A systematic reduction of diverse integrals which characterize the energy output from specific line and strip sources is investigated.
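For readers who want a feel for the quantity involved, the sketch below numerically integrates the Fejér kernel, assuming the standard normalization F_N(x) = (sin(Nx/2))² / (N sin²(x/2)); the abstract does not specify the exact two-parameter integral, so this is only the kernel's basic normalization check.

    import numpy as np
    from scipy.integrate import quad

    def fejer(x, N):
        # Fejér kernel F_N(x) = (sin(N x / 2))**2 / (N * sin(x / 2)**2),
        # with the removable singularity at x = 0 handled explicitly.
        s = np.sin(x / 2.0)
        return N if abs(s) < 1e-12 else (np.sin(N * x / 2.0) ** 2) / (N * s ** 2)

    N = 8
    val, err = quad(fejer, -np.pi, np.pi, args=(N,), limit=200)
    print(val / (2 * np.pi))  # the kernel integrates to 2*pi, so this prints ~1.0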
NASA Astrophysics Data System (ADS)
He, Shaoming; Wang, Jiang; Wang, Wei
2017-12-01
This paper proposes a new composite guidance law to intercept manoeuvring targets without line-of-sight (LOS) angular rate information in the presence of autopilot lag. The presented formulation is obtained via a combination of homogeneous theory and sliding mode control approach. Different from some existing observers, the proposed homogeneous observer can estimate the lumped uncertainty and the LOS angular rate in an integrated manner. To reject the mismatched lumped uncertainty in the integrated guidance and autopilot system, a sliding surface, which consists of the system states and the estimated states, is proposed and a robust guidance law is then synthesised. Stability analysis shows that the LOS angular rate can be stabilised in a small region around zero asymptotically and the upper bound can be lowered by appropriate parameter choice. Numerical simulations with some comparisons are carried out to demonstrate the superiority of the proposed method.
On the CCN (de)activation nonlinearities
NASA Astrophysics Data System (ADS)
Arabas, Sylwester; Shima, Shin-ichiro
2017-09-01
We take into consideration the evolution of particle size in a monodisperse aerosol population during activation and deactivation of cloud condensation nuclei (CCN). Our analysis reveals that the system undergoes a saddle-node bifurcation and a cusp catastrophe. The control parameters chosen for the analysis are the relative humidity and the particle concentration. An analytical estimate of the activation timescale is derived through estimation of the time spent in the saddle-node bifurcation bottleneck. Numerical integration of the system coupled with a simple air-parcel cloud model portrays two types of activation/deactivation hystereses: one associated with the kinetic limitations on droplet growth when the system is far from equilibrium, and one occurring close to equilibrium and associated with the cusp catastrophe. We discuss the presented analyses in the context of the development of particle-based models of aerosol-cloud interactions in which activation and deactivation impose stringent time-resolution constraints on numerical integration.
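A minimal numerical illustration of the bottleneck timescale, using the saddle-node normal form dx/dt = r + x² rather than the paper's coupled parcel model; the π/√r scaling it checks is the standard result for this form.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, x, r):
        return r + x[0] ** 2          # saddle-node normal form

    def crossed(t, x, r):             # stop once the trajectory escapes
        return x[0] - 10.0
    crossed.terminal = True

    for r in (1e-2, 1e-4):
        sol = solve_ivp(rhs, (0.0, 1e6), [-10.0], args=(r,),
                        events=crossed, rtol=1e-9, atol=1e-12)
        # passage time through the bottleneck vs the pi/sqrt(r) estimate
        print(r, sol.t_events[0][0], np.pi / np.sqrt(r))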
Forecasting the mortality rates of Malaysian population using Heligman-Pollard model
NASA Astrophysics Data System (ADS)
Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd
2017-08-01
Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting due to the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software will be used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) will be applied to forecast all the parameters using Autoregressive Integrated Moving Average (ARIMA) models. The empirical data sets of the Malaysian population for the period 1981 to 2015 for both genders will be considered, of which the period 1981 to 2010 will be used as the "training set" and the period 2011 to 2015 as the "testing set". In order to investigate the accuracy of the estimation, the forecast results will be compared against actual mortality rates. The results show that the Heligman-Pollard model fits well for the male population at all ages, while the model appears to underestimate the mortality rates for the female population at the older ages.
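A hedged sketch of the fitting step, using synthetic rates and scipy in place of MATLAB; the Heligman-Pollard functional form is standard, but every parameter value below is illustrative, not a Malaysian estimate.

    import numpy as np
    from scipy.optimize import least_squares

    # Heligman-Pollard law: q_x / (1 - q_x) =
    #   A**((x + B)**C) + D * exp(-E * (ln x - ln F)**2) + G * H**x
    def hp_odds(p, x):
        A, B, C, D, E, F, G, H = p
        return A ** ((x + B) ** C) + D * np.exp(-E * np.log(x / F) ** 2) + G * H ** x

    def residuals(p, x, qx):
        return hp_odds(p, x) - qx / (1.0 - qx)

    x = np.arange(1, 90, dtype=float)
    p_true = [5e-4, 0.02, 0.11, 1e-3, 10.0, 20.0, 5e-5, 1.10]   # synthetic truth
    qx = hp_odds(p_true, x) / (1.0 + hp_odds(p_true, x))        # synthetic rates

    p0 = [1e-3, 0.05, 0.1, 5e-4, 8.0, 25.0, 1e-4, 1.08]         # starting guess
    fit = least_squares(residuals, p0, args=(x, qx),
                        bounds=(1e-8, [1, 1, 1, 1, 50, 120, 1, 2]))
    print(np.round(fit.x, 5))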
Faisal, Kamil; Shaker, Ahmed
2017-03-07
Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ reflects a sense of satisfaction among the population, assessed through different environmental, urban and socio-economic parameters. This paper elucidates the use of the Geographic Information System (GIS), Principal Component Analysis (PCA) and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS and census data were first obtained to derive various environmental, urban and socio-economic parameters. The aforementioned techniques were used to integrate all of these environmental, urban and socio-economic parameters. Three key indicators, including family income, higher level of education and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that the GWR with the spatial lag model improves precision and accuracy by up to 20% relative to results derived using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help authorities and decision makers understand the empirical relationships among environmental factors, urban morphology and real estate, and make decisions that support greater environmental justice.
Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.
Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun
2018-06-04
Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance with limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must fulfill K ≤ (M⁴ − 2M³ + 7M² − 6M)/8. The proposed method suits any array geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M⁴), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bound formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate probability densities of output quantities of interest as a means of assessing the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
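As a toy analogue of the idea, the sketch below uses nested trapezoid levels and takes level-to-level differences as a computable quadrature error estimate; it is a drastic simplification of the hierarchical sparse-quadrature machinery described above, with an invented integrand.

    import numpy as np

    # Hierarchical (node-nested) trapezoid levels: the difference between
    # successive levels serves as a computable estimate of quadrature error.
    def trapz_level(f, a, b, level):
        x = np.linspace(a, b, 2 ** level + 1)   # nodes at level L nest level L-1
        return np.trapz(f(x), x)

    f = lambda x: np.exp(-x ** 2)               # stand-in moment integrand
    levels = [trapz_level(f, 0.0, 3.0, L) for L in range(1, 10)]
    est_errors = np.abs(np.diff(levels))        # level-to-level differences
    print(levels[-1], est_errors[-1])           # value and estimated error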
Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong
2018-05-19
In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are typically modeled as multicomponent quadratic frequency modulation (QFM) signals. Chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor noise robustness. This paper proposes a novel estimation algorithm called a two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high-order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals at a reasonable computational cost.
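A short sketch of the assumed signal model only (illustrative amplitudes, frequencies, and chirp rates; not the 2D-PMPCRD algorithm itself):

    import numpy as np

    # Multicomponent QFM signal of the kind used to model ISAR azimuth echoes:
    # s(t) = sum_i A_i * exp(j*2*pi*(f_i*t + a_i*t**2/2 + b_i*t**3/6)),
    # with a_i the chirp rate (CR) and b_i the quadratic chirp rate (QCR).
    fs, T = 1024.0, 1.0
    t = np.arange(0.0, T, 1.0 / fs)
    components = [  # (amplitude, f0 [Hz], CR [Hz/s], QCR [Hz/s^2])
        (1.0,  50.0,  80.0, 40.0),
        (0.8, 120.0, -60.0, 25.0),
    ]
    s = sum(A * np.exp(2j * np.pi * (f0 * t + cr * t**2 / 2 + qcr * t**3 / 6))
            for A, f0, cr, qcr in components)
    s += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))  # noise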
NASA Astrophysics Data System (ADS)
Sharaf El Din, Essam; Zhang, Yun
2017-10-01
Traditional surface water quality assessment is costly, labor intensive, and time consuming; however, remote sensing has the potential to assess surface water quality because of its spatiotemporal consistency. Therefore, estimating concentrations of surface water quality parameters (SWQPs) from satellite imagery is essential. Remote sensing estimation of nonoptical SWQPs, such as chemical oxygen demand (COD), biochemical oxygen demand (BOD), and dissolved oxygen (DO), has not yet been performed because they are less likely to affect signals measured by satellite sensors. However, concentrations of nonoptical variables may be correlated with optical variables, such as turbidity and total suspended sediments, which do affect the reflected radiation. In this context, an indirect relationship between satellite multispectral data and COD, BOD, and DO can be assumed. Therefore, this research develops an approach integrating Landsat 8 band ratios and stepwise regression to estimate concentrations of both optical and nonoptical SWQPs. Compared with previous studies, a significant correlation between Landsat 8 surface reflectance and concentrations of SWQPs was achieved, with a coefficient of determination (R²) above 0.85. These findings demonstrate the possibility of using our technique to develop models that estimate concentrations of SWQPs and to generate spatiotemporal maps of SWQPs from Landsat 8 imagery.
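A simplified stand-in for the band-ratio plus stepwise-regression idea, with synthetic reflectances, a synthetic response, and a forward-selection loop; the study's actual bands, ratios, and selection criteria are not reproduced.

    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    bands = rng.uniform(0.01, 0.3, size=(60, 5))             # 60 samples, 5 bands
    ratios = np.column_stack([bands[:, i] / bands[:, j]      # all pairwise ratios
                              for i, j in combinations(range(5), 2)])
    y = 3.0 * ratios[:, 0] - 1.5 * ratios[:, 4] + rng.normal(0, 0.05, 60)

    # Forward stepwise selection: greedily add the ratio that best improves R^2.
    selected, remaining = [], list(range(ratios.shape[1]))
    for _ in range(3):
        scores = [(LinearRegression().fit(ratios[:, selected + [j]], y)
                   .score(ratios[:, selected + [j]], y), j) for j in remaining]
        best_r2, best_j = max(scores)
        selected.append(best_j); remaining.remove(best_j)
    print(selected, round(best_r2, 3))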
Linking the Pilot Structural Model and Pilot Workload
NASA Technical Reports Server (NTRS)
Bachelder, Edward; Hess, Ronald; Aponso, Bimal; Godfroy-Cooper, Martine
2018-01-01
Behavioral models are developed that closely reproduce the pulsive control responses of two pilots using markedly different control techniques while conducting a tracking task. An intriguing finding was that the pilots appeared to: 1) produce a continuous, internally-generated stick signal that they integrated in time; 2) integrate the actual stick position; and 3) compare the two integrations to either issue or cease a pulse command. This suggests that the pilots utilized kinesthetic feedback in order to sense and integrate stick position, supporting the hypothesis that pilots can access and employ the proprioceptive inner feedback loop proposed by Hess's pilot Structural Model. A Pilot Cost Index was developed, whose elements include estimated workload, performance, and the degree to which the pilot employs kinesthetic feedback. Preliminary results suggest that a pilot's operating point (parameter values) may be based on control style and index minimization.
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Metro network expansion provides travelers with more alternative routes, so it is attractive to integrate the impacts of the route set and of the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and the error component follows a multivariate normal distribution whose covariance is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the model involves multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Metropolis-Hastings (M-H) sampling-based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows good forecasting performance for route choice probabilities and good applied performance for transfer flow volume prediction.
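The M-H/MCMC machinery can be illustrated with a generic random-walk sampler on a toy target; the target density and proposal scale below are invented, not the CMNP posterior.

    import numpy as np

    def log_post(theta):
        # Toy log-posterior: an isotropic Gaussian centered at (1, -2).
        return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2)

    rng = np.random.default_rng(2)
    theta = np.zeros(2)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(scale=0.5, size=2)        # symmetric proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                                     # accept
        chain.append(theta)
    print(np.mean(chain[5000:], axis=0))                     # posterior means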
NASA Astrophysics Data System (ADS)
Fasnacht, Marc
We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well-suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With the contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
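A minimal thermodynamic-integration sketch of the idea behind the Adaptive Integration Method: estimate dF/dλ = ⟨∂H/∂λ⟩ on a grid of λ values and integrate. The one-dimensional toy Hamiltonian, sampler settings, and β = 1 are all assumptions; the adaptive biasing itself is omitted.

    import numpy as np

    rng = np.random.default_rng(3)

    def dH_dl(x, lam):
        return -(x - lam)                 # derivative of (x - lam)**2 / 2

    def mean_dHdl(lam, n=20000):
        # Random-walk Metropolis sampling of H(x) = (x-lam)^2/2 + 0.1 x^4.
        x, acc = lam, []
        for _ in range(n):
            xp = x + rng.normal(scale=1.0)
            dU = (0.5 * (xp - lam) ** 2 + 0.1 * xp ** 4
                  - 0.5 * (x - lam) ** 2 - 0.1 * x ** 4)
            if np.log(rng.uniform()) < -dU:
                x = xp
            acc.append(dH_dl(x, lam))
        return np.mean(acc[2000:])        # discard burn-in

    lams = np.linspace(0.0, 2.0, 9)
    dFdl = np.array([mean_dHdl(l) for l in lams])
    # Trapezoid-rule integration of dF/dlambda gives the free energy profile.
    F = np.concatenate([[0.0], np.cumsum(np.diff(lams) * 0.5
                                         * (dFdl[1:] + dFdl[:-1]))])
    print(dict(zip(np.round(lams, 2), np.round(F, 3))))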
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Russa, D
Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk), were generated assuming α/β = 10 Gy and a fixed clonogen density of 10⁷ cm⁻³. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
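A heavily simplified PyMC3 sketch in the same spirit: a toy Poisson-TCP binomial likelihood sampled with slice steps. The doses, counts, and priors are invented, and the proliferation terms and the quadrature over α used in the actual study are omitted.

    import numpy as np
    import pymc3 as pm

    # Toy Poisson TCP: TCP(D) = exp(-N0 * exp(-alpha * D)); synthetic cohort.
    eqd2 = np.array([40.0, 50.0, 60.0, 70.0, 80.0])   # EQD2 (Gy)
    n_pat = np.array([120, 150, 160, 110, 83])         # patients per dose bin
    n_ctrl = np.array([0, 117, 159, 110, 83])          # locally controlled

    with pm.Model():
        alpha = pm.Normal("alpha", mu=0.35, sigma=0.1)          # Gy^-1
        ln_n0 = pm.Normal("ln_n0", mu=np.log(1e7), sigma=2.0)   # log clonogens
        tcp = pm.math.exp(-pm.math.exp(ln_n0 - alpha * eqd2))
        pm.Binomial("obs", n=n_pat, p=tcp, observed=n_ctrl)
        trace = pm.sample(2000, tune=1000, step=pm.Slice(), chains=2)
    print(pm.summary(trace))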
A parallel calibration utility for WRF-Hydro on high performance computers
NASA Astrophysics Data System (ADS)
Wang, J.; Wang, C.; Kotamarthi, V. R.
2017-12-01
Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool for automated calibration and uncertainty estimation of the WRF-Hydro model can therefore provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.
The Kormendy relation of galaxies in the Frontier Fields clusters: Abell S1063 and MACS J1149.5+2223
NASA Astrophysics Data System (ADS)
Tortorelli, Luca; Mercurio, Amata; Paolillo, Maurizio; Rosati, Piero; Gargiulo, Adriana; Gobat, Raphael; Balestra, Italo; Caminha, G. B.; Annunziatella, Marianna; Grillo, Claudio; Lombardi, Marco; Nonino, Mario; Rettura, Alessandro; Sartoris, Barbara; Strazzullo, Veronica
2018-06-01
We analyse the Kormendy relations (KRs) of the two Frontier Fields clusters, Abell S1063, at z = 0.348, and MACS J1149.5+2223, at z = 0.542, exploiting very deep Hubble Space Telescope photometry and Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopy. With this novel data set, we are able to investigate how the KR parameters depend on the cluster galaxy sample selection and how this affects studies of galaxy evolution based on the KR. We define and compare four different galaxy samples according to (a) Sérsic indices: early-type (`ETG'), (b) visual inspection: `ellipticals', (c) colours: `red', (d) spectral properties: `passive'. The classification is performed for a complete sample of galaxies with mF814W ≤ 22.5 ABmag (M* ≳ 1010.0 M⊙). To derive robust galaxy structural parameters, we use two methods: (1) an iterative estimate of structural parameters using images of increasing size, in order to deal with closely separated galaxies and (2) different background estimations, to deal with the intracluster light contamination. The comparison between the KRs obtained from the different samples suggests that the sample selection could affect the estimate of the best-fitting KR parameters. The KR built with ETGs is fully consistent with the one obtained for ellipticals and passive. On the other hand, the KR slope built on the red sample is only marginally consistent with those obtained with the other samples. We also release the photometric catalogue with structural parameters for the galaxies included in the present analysis.
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets.
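A sketch of the quadrature step alone: marginalizing a normal random intercept with Gauss-Hermite nodes. The linear predictor and the crude censoring rule (which ignores residual noise) are illustrative assumptions, not the paper's Tobit likelihood.

    import numpy as np

    # Physicists' Gauss-Hermite rule: integral of f(b) * N(b; 0, tau^2) db
    # ~= (1/sqrt(pi)) * sum_i w_i * f(sqrt(2) * tau * x_i).
    nodes, weights = np.polynomial.hermite.hermgauss(25)

    def marginal_mean(beta0, beta_x, x, tau):
        b = np.sqrt(2.0) * tau * nodes
        # Simplified censored mean: clamps the linear predictor at 0 and
        # ignores residual noise (a crude stand-in for the Tobit mean).
        mu = np.maximum(beta0 + beta_x * x + b, 0.0)
        return np.sum(weights * mu) / np.sqrt(np.pi)

    # Overall exposure effect: difference in model-predicted marginal means.
    print(marginal_mean(1.0, 0.8, 1.0, 1.5) - marginal_mean(1.0, 0.8, 0.0, 1.5))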
Mattsson, Brady J.; Runge, M.C.; Devries, J.H.; Boomer, G.S.; Eadie, J.M.; Haukos, D.A.; Fleskes, J.P.; Koons, D.N.; Thogmartin, W.E.; Clark, R.G.
2012-01-01
We developed and evaluated the performance of a metapopulation model enabling managers to examine, for the first time, the consequences of alternative management strategies involving habitat conditions and hunting on both harvest opportunity and carrying capacity (i.e., equilibrium population size in the absence of harvest) for migratory waterfowl at a continental scale. Our focus is on the northern pintail (Anas acuta; hereafter, pintail), which serves as a useful model species to examine the potential for integrating waterfowl harvest and habitat management in North America. We developed submodel structure capturing important processes for pintail populations during breeding, fall migration, winter, and spring migration while encompassing spatial structure representing three core breeding areas and two core nonbreeding areas. A number of continental-scale predictions from our baseline parameterization (e.g., carrying capacity of 5.5 million, equilibrium population size of 2.9 million, and harvest rate of 12% at maximum sustained yield [MSY]) were within 10% of those from the pintail harvest strategy under current use by the U.S. Fish and Wildlife Service. To begin investigating the interaction of harvest and habitat management, we examined equilibrium population conditions for pintail at the continental scale across a range of harvest rates while perturbing model parameters to represent: (1) a 10% increase in breeding habitat quality in the Prairie Pothole population (PR); and (2) a 10% increase in nonbreeding habitat quantity along the Gulf Coast (GC). Based on our model and analysis, greater increases in carrying capacity and sustainable harvest were seen when increasing a proxy for habitat quality in the Prairie Pothole population. This finding and underlying assumptions must be critically evaluated, however, before specific management recommendations can be made. To make such recommendations, we require (1) extended, refined submodels with additional parameters linking influences of habitat management and environmental conditions to key life-history parameters; (2) a formal sensitivity analysis of the revised model; (3) an integrated population model that incorporates empirical data for estimating key vital rates; and (4) cost estimates for changing these additional parameters through habitat management efforts. We foresee great utility in using an integrated modeling approach to predict habitat and harvest management influences on continental-scale population responses while explicitly considering putative effects of climate change. Such a model could be readily adapted for management of many habitat-limited species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
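A minimal example of the fast-fracture parameter step: fitting a two-parameter Weibull distribution to synthetic inert-strength data by median-rank linear regression (one of several estimation techniques; codes like CARES/LIFE also offer others, such as maximum likelihood).

    import numpy as np

    # Synthetic inert strengths (MPa); real data would come from coupon tests.
    strengths = np.sort(np.array([212., 237., 251., 262., 270.,
                                  281., 293., 304., 322., 349.]))
    n = strengths.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # median-rank estimator
    # Weibull plot: ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma_0).
    yy = np.log(np.log(1.0 / (1.0 - F)))
    xx = np.log(strengths)
    m, c = np.polyfit(xx, yy, 1)                        # slope = Weibull modulus
    sigma0 = np.exp(-c / m)                             # characteristic strength
    print(f"Weibull modulus m = {m:.2f}, sigma_0 = {sigma0:.1f} MPa")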
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathaye, Jayant A.
2000-04-01
Integrated assessment (IA) modeling of climate policy is increasingly global in nature, with models incorporating regional disaggregation. The existing empirical basis for IA modeling, however, largely arises from research on industrialized economies. Given the growing importance of developing countries in determining long-term global energy and carbon emissions trends, filling this gap with improved statistical information on developing countries' energy and carbon-emissions characteristics is an important priority for enhancing IA modeling. Earlier research at LBNL on this topic has focused on assembling and analyzing statistical data on productivity trends and technological change in the energy-intensive manufacturing sectors of five developing countries: India, Brazil, Mexico, Indonesia, and South Korea. The proposed work will extend this analysis to the agriculture and electric power sectors in India, South Korea, and two other developing countries. They will also examine the impact of alternative model specifications on estimates of productivity growth and technological change for each of the three sectors, and estimate the contribution of various capital inputs--imported vs. indigenous, rigid vs. malleable--in contributing to productivity growth and technological change. The project has already produced a data resource on the manufacturing sector which is being shared with IA modelers. This will be extended to the agriculture and electric power sectors, which would also be made accessible to IA modeling groups seeking to enhance the empirical descriptions of developing country characteristics. The project will entail basic statistical and econometric analysis of productivity and energy trends in these developing country sectors, with parameter estimates also made available to modeling groups. The parameter estimates will be developed using alternative model specifications that could be directly utilized by the existing IAMs for the manufacturing, agriculture, and electric power sectors.
NASA Astrophysics Data System (ADS)
Imani Masouleh, Mehdi; Limebeer, David J. N.
2018-07-01
In this study we will estimate the region of attraction (RoA) of the lateral dynamics of a nonlinear single-track vehicle model. The tyre forces are approximated using rational functions that are shown to capture the nonlinearities of tyre curves significantly better than polynomial functions. An existing sum-of-squares (SOS) programming algorithm for estimating regions of attraction is extended to accommodate the use of rational vector fields. This algorithm is then used to find an estimate of the RoA of the vehicle lateral dynamics. The influence of vehicle parameters and driving conditions on the stability region are studied. It is shown that SOS programming techniques can be used to approximate the stability region without resorting to numerical integration. The RoA estimate from the SOS algorithm is compared to the existing results in the literature. The proposed method is shown to obtain significantly better RoA estimates.
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis, and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones favored by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
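The GMIS-style recipe can be sketched on a toy normalized target, with a Gaussian mixture fitted to stand-in "posterior samples"; in real use the samples would come from DREAM and the target would be prior times likelihood, so everything numeric here is an assumption.

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)

    def log_target(theta):            # unnormalized posterior density (toy)
        return multivariate_normal(mean=[1.0, -1.0],
                                   cov=0.3 * np.eye(2)).logpdf(theta)

    # Stand-in for DREAM output: samples from the toy posterior.
    post_samples = rng.normal([1.0, -1.0], np.sqrt(0.3), size=(4000, 2))
    gm = GaussianMixture(n_components=3).fit(post_samples)

    # Importance-sample the evidence: Z = E_q[ target / q ].
    draws = gm.sample(20000)[0]
    log_w = log_target(draws) - gm.score_samples(draws)
    Z = np.exp(log_w - log_w.max()).mean() * np.exp(log_w.max())
    print(Z)   # ~1.0 here, since the toy target is already normalized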
Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P
2015-01-01
The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data may produce inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore, we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but distinctly different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would not have been detected without using the BBMM. Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Therefore movement parameters derived from the BBMM can provide a powerful method for movement ecology research.
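The core BBMM quantity is simple to state: between consecutive fixes the position is Gaussian with linearly interpolated mean and bridge variance σ_m² t(T−t)/T. A minimal sketch (location-error terms omitted; the fix coordinates and σ_m are invented):

    import numpy as np

    def bb_moments(a, b, T, t, sigma_m):
        # Brownian bridge between fixes a (at time 0) and b (at time T):
        # mean interpolates linearly; variance peaks mid-bridge.
        alpha = t / T
        mean = (1 - alpha) * np.asarray(a) + alpha * np.asarray(b)
        var = t * (T - t) / T * sigma_m ** 2
        return mean, var

    mean, var = bb_moments(a=[0.0, 0.0], b=[100.0, 50.0], T=600.0,
                           t=300.0, sigma_m=0.5)
    print(mean, var)   # midpoint location, maximal bridge variance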
Gas Path On-line Fault Diagnostics Using a Nonlinear Integrated Model for Gas Turbine Engines
NASA Astrophysics Data System (ADS)
Lu, Feng; Huang, Jin-quan; Ji, Chun-sheng; Zhang, Dong-dong; Jiao, Hua-bin
2014-08-01
Gas path fault diagnosis is a closely related technology that assists operators in managing gas turbine engine units. However, gradual performance degradation is inevitable with usage, and it results in model mismatch and subsequent misdiagnosis by popular model-based approaches. In this paper, an on-line integrated architecture based on nonlinear models is developed for gas turbine engine anomaly detection and fault diagnosis over the course of the engine's life. The architecture employs two engine models with different performance parameter update rates. One is a nonlinear real-time adaptive performance model with a spherical square-root unscented Kalman filter (SSR-UKF) producing performance estimates, and the other is a nonlinear baseline model for the measurement estimates. The fault detection and diagnosis logic is designed to discriminate sensor faults from component faults. This integrated architecture is not only aware of long-term engine health degradation but also effective in detecting gas path performance anomaly shifts while the engine continues to degrade. Compared to the existing architecture, the benefits of the proposed approach are demonstrated in experiments and analysis.
ROI Analysis of the System Architecture Virtual Integration Initiative
2018-04-01
The ROI analysis uses conservative estimates of costs and benefits, especially for those parameters that have a proven, strong correlation to overall ... formula: • In Section 3, we discuss the exponential growth of avionics software systems in terms of SLOC by analyzing the historical data to correlate ... which implies that the system has good structure (high cohesion, low coupling), good application clarity (good correlation between program and
A historical overview of flight flutter testing
NASA Technical Reports Server (NTRS)
Kehoe, Michael W.
1995-01-01
This paper reviews the test techniques developed over the last several decades for flight flutter testing of aircraft. Structural excitation systems, instrumentation systems, digital data preprocessing, and parameter identification algorithms (for frequency and damping estimates from the response data) are described. Practical experiences and example test programs illustrate the combined, integrated effectiveness of the various approaches used. Finally, comments regarding the direction of future developments and needs are presented.
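As one concrete example of the damping-estimation step, a logarithmic-decrement calculation on a synthetic decaying sinusoid; flight-test practice uses more robust identification algorithms on noisy response data, so this is only a textbook illustration with assumed values.

    import numpy as np

    # Synthetic free response: 5 Hz mode with 3% damping, sampled at 200 Hz.
    fs, zeta_true, fn = 200.0, 0.03, 5.0
    t = np.arange(0, 4, 1 / fs)
    y = np.exp(-zeta_true * 2 * np.pi * fn * t) * np.sin(2 * np.pi * fn * t)

    # Local maxima (one per cycle for this clean signal).
    peaks = [i for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1]]
    delta = np.log(y[peaks[0]] / y[peaks[5]]) / 5          # over 5 cycles
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)    # damping ratio
    freq = 1.0 / np.mean(np.diff(t[peaks]))                # damped frequency (Hz)
    print(round(zeta, 4), round(freq, 2))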
NASA Astrophysics Data System (ADS)
Wang, Jun-Wei; Liu, Ya-Qiang; Hu, Yan-Yan; Sun, Chang-Yin
2017-12-01
This paper discusses the design problem of a distributed H∞ Luenberger-type partial differential equation (PDE) observer for state estimation of a linear unstable parabolic distributed parameter system (DPS) with external disturbance and measurement disturbance. Both pointwise measurement in space and local piecewise uniform measurement in space are considered; that is, sensors are active only at specified points or over parts of the spatial domain. The spatial domain is decomposed into multiple subdomains according to the locations of the sensors, such that only one sensor is located in each subdomain. By using the Lyapunov technique, Wirtinger's inequality on each subdomain, and integration by parts, a Lyapunov-based design of the Luenberger-type PDE observer is developed such that the resulting estimation error system is exponentially stable with an H∞ performance constraint, with the design conditions presented in terms of standard linear matrix inequalities (LMIs). For the case of local piecewise uniform measurement in space, the first mean value theorem for integrals is utilised in the observer design development. Moreover, the problem of optimal H∞ observer design is also addressed in the sense of minimising the attenuation level. Numerical simulation results are presented to show the satisfactory performance of the proposed design method.
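A finite-difference toy version of a Luenberger observer with one pointwise sensor is sketched below; the gain is picked ad hoc rather than via the paper's LMI design, and the open-loop heat equation here is stable (unlike the paper's unstable plant), so the sketch only illustrates the output-injection mechanics.

    import numpy as np

    # 1-D heat equation u_t = u_xx on (0, 1), Dirichlet BCs, explicit Euler.
    N, L, dt = 50, 1.0, 1e-4
    dx = L / (N + 1)
    lap = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / dx ** 2
    sensor = N // 2                       # pointwise measurement location
    gain = 50.0                           # output-injection gain (ad hoc)

    x = np.sin(np.pi * np.linspace(dx, L - dx, N))   # true initial state
    xh = np.zeros(N)                                  # observer starts at zero
    for _ in range(20000):
        y = x[sensor]                                 # pointwise measurement
        x = x + dt * (lap @ x)                        # plant
        xh = xh + dt * (lap @ xh)                     # observer copy
        xh[sensor] += dt * gain * (y - xh[sensor])    # inject at sensor node
    print(np.max(np.abs(x - xh)))                     # estimation error shrinks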
Silva Junqueira, Vinícius; de Azevedo Peixoto, Leonardo; Galvêas Laviola, Bruno; Lopes Bhering, Leonardo; Mendonça, Simone; Agostini Costa, Tania da Silveira; Antoniassi, Rosemar
2016-01-01
The biggest challenge for jatropha breeding is to identify superior genotypes that present high seed yield and seed oil content with reduced toxicity levels. Therefore, the objective of this study was to estimate genetic parameters for three important traits (weight of 100 seeds, seed oil content, and phorbol ester concentration), and to select superior genotypes to be used as progenitors in jatropha breeding. Additionally, the genotypic values and the genetic parameters estimated under the Bayesian multi-trait approach were used to evaluate different selection index scenarios for 179 half-sib families. Three different scenarios and economic weights were considered. It was possible to simultaneously reduce toxicity and increase seed oil content and 100-seed weight by using index selection based on genotypic values estimated by the Bayesian multi-trait approach. Indeed, we identified two families that present these characteristics by evaluating genetic diversity using the Ward clustering method, which suggested nine homogeneous clusters. Future research should integrate Bayesian multi-trait methods with the realized relationship matrix, aiming to build accurate selection index models.
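A sketch of a classical Smith-Hazel selection index of the kind such scenarios build on; the covariance matrices, economic weights, and phenotypes below are invented, not the estimated jatropha parameters.

    import numpy as np

    # Smith-Hazel index: coefficients b = P^{-1} G w, combining phenotypic (P)
    # and genotypic (G) covariances with economic weights w. Trait order:
    # 100-seed weight, oil content, phorbol ester concentration.
    P = np.array([[4.0, 0.5, -0.2],
                  [0.5, 2.0, -0.4],
                  [-0.2, -0.4, 1.0]])
    G = np.array([[2.0, 0.3, -0.1],
                  [0.3, 1.2, -0.3],
                  [-0.1, -0.3, 0.6]])
    w = np.array([1.0, 2.0, -3.0])       # favor yield and oil, penalize phorbol
    b = np.linalg.solve(P, G @ w)        # index coefficients

    phenotypes = np.random.default_rng(5).normal(size=(179, 3))  # 179 families
    index = phenotypes @ b
    top = np.argsort(index)[::-1][:10]   # select the 10 best-ranked families
    print(b, top)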
Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu
2002-07-01
Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences.
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, while taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results showed that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
Worldwide Historical Estimates of Leaf Area Index, 1932-2000
NASA Technical Reports Server (NTRS)
Scurlock, J. M. O.; Asner, G. P.; Gower, S. T.
2001-01-01
Approximately 1000 published estimates of leaf area index (LAI) from nearly 400 unique field sites, covering the period 1932-2000, have been compiled into a single data set. LAI is a key parameter for global and regional models of biosphere/atmosphere exchange of carbon dioxide, water vapor, and other materials. It also plays an integral role in determining the energy balance of the land surface. This data set provides a benchmark of typical values and ranges of LAI for a variety of biomes and land cover types, in support of model development and validation of satellite-derived remote sensing estimates of LAI and other vegetation parameters. The LAI data are linked to a bibliography of over 300 original source references. This report documents the development of this data set, its contents, and its availability on the Internet from the Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics. Caution is advised in using these data, which were collected using a wide range of methodologies and assumptions that may not allow comparisons among sites.
An object correlation and maneuver detection approach for space surveillance
NASA Astrophysics Data System (ADS)
Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong
2012-10-01
Object correlation and maneuver detection are persistent problems in space surveillance and maintenance of a space object catalog. We integrate these two problems into one interrelated problem and consider them simultaneously under a scenario where space objects perform only a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation. In this work, we propose a novel approach to solve the MAP estimation. More precisely, the corresponding posterior probability of an orbital maneuver and a joint association event can be approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by solving a constrained nonlinear least squares problem iteratively using the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criterion. The performance and advantages of the proposed approach have been shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and maintenance of a space object catalog.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of the TMI accurately, effectively avoiding the "overcalibration" problem. The accuracy of the error parameter estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
Doña, Carolina; Chang, Ni-Bin; Caselles, Vicente; Sánchez, Juan M; Camacho, Antonio; Delegido, Jesús; Vannah, Benjamin W
2015-03-15
Lake eutrophication is a critical issue in the interplay of water supply, environmental management, and ecosystem conservation. Integrated sensing, monitoring, and modeling for a holistic lake water quality assessment with respect to multiple constituents is in acute need. The aim of this paper is to develop an integrated algorithm for data fusion and mining of satellite remote sensing images to generate daily estimates of some water quality parameters of interest, such as chlorophyll a concentration and water transparency, to be applied for the assessment of the hypertrophic Albufera de Valencia. The Albufera de Valencia is the largest freshwater lake in Spain, which can often present values of chlorophyll a concentration over 200 mg m⁻³ and values of transparency (Secchi disk, SD) as low as 20 cm. Remote sensing data from Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM+) images were fused to carry out an integrative near-real-time water quality assessment on a daily basis. Landsat images are useful for studying the spatial variability of the water quality parameters, due to their spatial resolution of 30 m, in comparison to the low spatial resolution (250/500 m) of MODIS. While Landsat offers a high spatial resolution, its low temporal resolution of 16 days is a significant drawback for achieving a near-real-time monitoring system. This gap may be bridged by using MODIS images, which have a high temporal resolution of 1 day, in spite of their low spatial resolution. Synthetic Landsat images were produced via fusion for dates with no Landsat overpass over the study area. Finally, with a suite of ground truth data, a few genetic programming (GP) models were derived to estimate the water quality using the fused surface reflectance data as inputs. The GP model for chlorophyll a estimation yielded an R² of 0.94 with a root mean square error (RMSE) of 8 mg m⁻³, and the GP model for water transparency estimation using the Secchi disk showed an R² of 0.89 with an RMSE of 4 cm. With this effort, the spatiotemporal variations of water transparency and chlorophyll a concentration may be assessed simultaneously on a daily basis throughout the lake for environmental management.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Berardo, Mattia; Lo Presti, Letizia
2016-07-02
In this work, a novel signal processing method is proposed to assist the Receiver Autonomous Integrity Monitoring (RAIM) module used in Global Navigation Satellite System (GNSS) receivers to improve the integrity of the estimated position. The proposed technique represents an evolution of the Multipath Distance Detector (MPDD), thanks to the introduction of a Signal Quality Index (SQI), which is both a metric able to evaluate the goodness of the signal and a parameter used to improve the performance of RAIM modules. Simulation results show the effectiveness of the proposed method.
Precise Orbital and Geodetic Parameter Estimation using SLR Observations for ILRS AAC
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Oh, Hyungjik Jay; Park, Sang-Young; Lim, Hyung-Chul; Park, Chandeok
2013-12-01
In this study, we present results of precise orbital and geodetic parameter estimation using satellite laser ranging (SLR) observations for the International Laser Ranging Service (ILRS) associate analysis center (AAC). Using normal point observations of LAGEOS-1, LAGEOS-2, ETALON-1, and ETALON-2 in the SLR consolidated laser ranging data format, the NASA/GSFC GEODYN II and SOLVE software programs were utilized for precise orbit determination (POD) and for finding solutions of a terrestrial reference frame (TRF) and Earth orientation parameters (EOPs). For POD, a weekly-based orbit determination strategy was employed to process SLR observations taken during 20 weeks of 2013. For the TRF and EOP solutions, a loosely constrained scheme was used to integrate the POD results of the four geodetic SLR satellites. The coordinates of 11 ILRS core sites were determined, and daily polar motion and polar motion rates were estimated. The root mean square (RMS) of post-fit residuals was used for orbit quality assessment, and both the stability of the TRF and the precision of the EOPs were analyzed by external comparison for verification of our solutions. The post-fit residual results show that the orbit RMS values of LAGEOS-1 and LAGEOS-2 are 1.20 and 1.12 cm, and those of ETALON-1 and ETALON-2 are 1.02 and 1.11 cm, respectively. The TRF stability analysis shows that the mean 3D stability of the coordinates of the 11 ILRS core sites is 7.0 mm. An external comparison with respect to the International Earth Rotation and Reference Systems Service (IERS) 08 C04 results shows that the standard deviations of polar motion XP and YP are 0.754 milliarcseconds (mas) and 0.576 mas, respectively. Our results of precise orbital and geodetic parameter estimation are reasonable and help advance research at the ILRS AAC.
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java and is thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but it will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
Environmental and Genetic Factors Explain Differences in Intraocular Scattering.
Benito, Antonio; Hervella, Lucía; Tabernero, Juan; Pennos, Alexandros; Ginis, Harilaos; Sánchez-Romera, Juan F; Ordoñana, Juan R; Ruiz-Sánchez, Marcos; Marín, José M; Artal, Pablo
2016-01-01
To study the relative impact of genetic and environmental factors on the variability of intraocular scattering within a classical twin study. A total of 64 twin pairs, 32 monozygotic (MZ) (mean age: 54.9 ± 6.3 years) and 32 dizygotic (DZ) (mean age: 56.4 ± 7.0 years), were measured after a complete ophthalmologic exam had been performed to exclude all ocular pathologies that increase intraocular scatter, such as cataracts. Intraocular scattering was evaluated using two different techniques based on estimation of the straylight parameter log(S): a compact optical instrument based on the principle of optical integration, and a psychophysical measurement. Intraclass correlation coefficients (ICC) were used as descriptive statistics of twin resemblance, and genetic models were fitted to estimate heritability. No statistically significant difference was found between the MZ and DZ groups for age (P = 0.203), best-corrected visual acuity (P = 0.626), cataract gradation (P = 0.701), sex (P = 0.941), optical log(S) (P = 0.386), or psychophysical log(S) (P = 0.568), with only a minor difference in equivalent sphere (P = 0.008). Intraclass correlation coefficients between siblings were similar for the scatter parameters: 0.676 in MZ and 0.471 in DZ twins for optical log(S); 0.533 in MZ twins and 0.475 in DZ twins for psychophysical log(S). For equivalent sphere, ICCs were 0.767 in MZ and 0.228 in DZ twins. Conservative estimates of heritability for the measured scattering parameters were 0.39 and 0.20, respectively. Correlations of intraocular scatter (straylight) parameters in the groups of identical and nonidentical twins were similar. Heritability estimates were of limited magnitude, suggesting that both genetic and environmental factors determine the variance of ocular straylight in healthy middle-aged adults.
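For a quick sanity check of heritability estimates like these, Falconer's classical approximation h² = 2(r_MZ − r_DZ) can be applied directly to the reported intraclass correlations. The study itself fitted full genetic models, so the Python sketch below is only a back-of-the-envelope cross-check, not the authors' method.

```python
# Rough cross-check of the reported heritabilities using Falconer's
# formula h^2 = 2*(r_MZ - r_DZ); the study fitted full genetic models,
# so these are only approximate comparisons.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    return 2.0 * (r_mz - r_dz)

print(falconer_h2(0.676, 0.471))  # optical log(S): ~0.41 (paper's model: 0.39)
print(falconer_h2(0.533, 0.475))  # psychophysical log(S): ~0.12 (paper's model: 0.20)
```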
NASA Astrophysics Data System (ADS)
Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.
2011-12-01
High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of highly nonlinear, high-dimensional problems, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with the Beowulf-cluster-inspired parallel nonlinear parameter estimation software BeoPEST in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-process data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both the parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant, and nodes can be dynamically added and removed. A major advantage of this approach is the ability to use high-resolution geologic models to preserve the spatial structure in the inverse model, which leads to better parameter estimates and improved predictions when using the inverse-conditioned realizations of parameter fields.
Constraints on the rupture process of the 17 August 1999 Izmit earthquake
NASA Astrophysics Data System (ADS)
Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.
2003-04-01
Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature differ considerably from one another. In order to extract the characteristic features of this event, we determine integral estimates of its geometry, source duration, and rupture propagation. These estimates are given by the stress-glut moments of total degree 2, obtained by inverting long-period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong-motion data sets and/or teleseismic body waves (Bouchon et al., 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that (1) significant moment was released on the eastern segment of the activated fault system during the Izmit earthquake, and (2) the rupture velocity decreased on this segment. We discuss how these results help explain the scatter among the source processes published for this earthquake.
NASA Astrophysics Data System (ADS)
Botto, Anna; Camporese, Matteo
2017-04-01
Hydrological models allow scientists to predict the response of water systems under varying forcing conditions. In particular, many physically-based integrated models have recently been developed in order to understand the fundamental hydrological processes occurring at the catchment scale. However, the use of this class of hydrological models is still relatively limited, as their prediction skill heavily depends on reliable parameter estimation, an operation that is never trivial, being normally affected by large uncertainty and requiring huge computational effort. The objective of this work is to test the potential of data assimilation to be used as an inverse modeling procedure for the broad class of integrated hydrological models. To pursue this goal, a Bayesian data assimilation (DA) algorithm based on a Monte Carlo approach, namely the ensemble Kalman filter (EnKF), is combined with the CATchment HYdrology (CATHY) model. In this approach, input variables (atmospheric forcing, soil parameters, initial conditions) are statistically perturbed, providing an ensemble of realizations aimed at taking into account the uncertainty involved in the process. Each realization is propagated forward by the CATHY hydrological model within a parallel R framework, developed to reduce the computational effort. When measurements are available, the EnKF is used to update both the system state and the soil parameters. In particular, four different assimilation scenarios are applied to test the capability of the modeling framework: first, only pressure head or water content is assimilated; then the combination of both; and finally both pressure head and water content together with the subsurface outflow. To demonstrate the effectiveness of the approach in a real-world scenario, an artificial hillslope was designed and built to provide real measurements for the DA analyses. The experimental facility, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with a maximum height of 3.5 m, a length of 6 m, and a width of 2 m. The hillslope is equipped with six pairs of tensiometers and water content reflectometers to monitor the pressure head and soil moisture content, respectively. Moreover, two tipping-bucket flow gages were used to measure the surface and subsurface discharges at the outlet. A 12-day experiment was carried out, during which a series of four rainfall events with constant rainfall rate was generated, interspersed with phases of drainage. During the experiment, measurements were collected at a relatively high resolution of 0.5 Hz. We report here on the capability of the data assimilation framework to estimate sets of plausible parameters that are consistent with the experimental setup.
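The core of such a DA scheme is the EnKF analysis step applied to an augmented vector of model states and parameters. The Python sketch below shows that update for a generic ensemble under a linear observation operator; it is a minimal illustration of the augmentation idea, not the CATHY coupling itself, and all dimensions and values are invented for illustration.

```python
import numpy as np

# Minimal stochastic-EnKF analysis step on an augmented vector
# [states; parameters]; updating parameters alongside states is the
# augmentation idea described above (a sketch, not the CATHY coupling).
def enkf_update(ens, obs, obs_err_std, H):
    """ens: (n_dim, n_ens) ensemble; H: (n_obs, n_dim) observation operator."""
    n_dim, n_ens = ens.shape
    A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    R = (obs_err_std ** 2) * np.eye(H.shape[0])
    P_hh = HA @ HA.T / (n_ens - 1) + R               # innovation covariance
    P_xh = A @ HA.T / (n_ens - 1)                    # cross covariance
    K = P_xh @ np.linalg.inv(P_hh)                   # Kalman gain
    # perturbed observations (stochastic EnKF variant)
    obs_pert = obs[:, None] + obs_err_std * np.random.randn(H.shape[0], n_ens)
    return ens + K @ (obs_pert - H @ ens)

# toy example: 3 states + 1 parameter, observing the first two states
rng = np.random.default_rng(1)
ens = rng.normal(size=(4, 50))
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
updated = enkf_update(ens, obs=np.array([0.5, -0.2]), obs_err_std=0.1, H=H)
print(updated.mean(axis=1))   # posterior ensemble mean, incl. the parameter
```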
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE SHE 2016) based on high-resolution input data and detailed catchment characterization. Point-source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone, and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and outline parameters that are not accounted for in standard methods but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from the atmosphere to the unsaturated zone to the saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point-source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows their impact on recharge, and indirectly on vulnerability, to be quantified. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
Pérez-López, Paula; Montazeri, Mahdokht; Feijoo, Gumersindo; Moreira, María Teresa; Eckelman, Matthew J
2018-06-01
The economic and environmental performance of microalgal processes has been widely analyzed in recent years. However, few studies propose an integrated process-based approach to evaluate economic and environmental indicators simultaneously. Biodiesel is usually treated as the single product, and the effect of environmental benefits of co-products obtained in the process is rarely discussed. In addition, there is wide variation in the results due to the inherent variability of some parameters, as well as different assumptions in the models and limited knowledge about the processes. In this study, two standardized models were combined to provide an integrated simulation tool allowing the simultaneous estimation of economic and environmental indicators from a unique set of input parameters. First, a harmonized scenario was assessed to validate the joint environmental and techno-economic model. The findings were consistent with previous assessments. In a second stage, a Monte Carlo simulation was applied to evaluate the influence of variable and uncertain parameters on the model output, as well as the correlations between the different outputs. The simulation showed a high probability of achieving favorable environmental performance for the evaluated categories and a minimum selling price ranging from $11 gal⁻¹ to $106 gal⁻¹. Greenhouse gas emissions and minimum selling price were found to have the strongest positive linear relationship, whereas eutrophication showed weak correlations with the other indicators (namely greenhouse gas emissions, cumulative energy demand, and minimum selling price). Process parameters (especially biomass productivity and lipid content) were the main source of variation, whereas uncertainties linked to the characterization methods and economic parameters had limited effect on the results. Copyright © 2018 Elsevier B.V. All rights reserved.
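The Monte Carlo stage described here amounts to sampling the uncertain process parameters, pushing each draw through the coupled techno-economic/LCA model, and examining the joint distribution of the outputs. A minimal Python sketch of that loop follows; the parameter distributions and the closed-form "model" are invented stand-ins for the actual simulation tool.

```python
import numpy as np

# Sketch of the Monte Carlo step: propagate uncertain process parameters
# through a placeholder techno-economic model and inspect output
# correlations. Distributions and response surfaces are illustrative only.
rng = np.random.default_rng(42)
n = 10_000
productivity = rng.triangular(10, 20, 35, n)     # g m^-2 d^-1 (assumed range)
lipid_frac = rng.uniform(0.15, 0.45, n)          # lipid content (assumed range)

# hypothetical response surfaces standing in for the coupled LCA/TEA model
msp = 400.0 / (productivity * lipid_frac)        # minimum selling price proxy
ghg = 50.0 / (productivity * lipid_frac) + 2.0   # GHG emissions proxy

print("P(msp below threshold):", np.mean(msp < 60))
print("corr(msp, ghg):", np.corrcoef(msp, ghg)[0, 1])   # strongly positive
```

Because both proxies here scale with the inverse of productivity times lipid content, the sampled outputs show the kind of strong positive MSP-GHG correlation the study reports.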
Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T
2017-07-01
Our previous works have found that quantitative analysis of ¹²³I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of the estimated physiologic parameters. Taking dynamic cardiac imaging with ¹²³I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of the B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth, so that the estimated input function approximates its real value and the procedure thus yields an accurate estimate of the model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new 30-min protocol was built with a shorter acquisition time that maintained a 5% error in estimating the rate constants of the compartment model. This was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with ¹²³I-MIBG. Compared to a uniform-interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform intervals achieved comparable performance for K1 and k2 (P = 0.5745 and P = 0.0604, respectively) and better performance for the distribution volume (DV, P = 0.0004), with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for ¹²³I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Kauzlaric, Martina; Schädler, Bruno; Weingartner, Rolf
2014-05-01
The main objective of the MontanAqua transdisciplinary project is to develop strategies for moving towards more sustainable water resources management in the Crans-Montana-Sierre region (Valais, Switzerland) in view of global change. Therefore, a detailed assessment of the water resources available in the study area today and in the future is needed. The study region is situated in the inner alpine zone, with strong altitudinal precipitation gradients: from the precipitation-rich alpine ridge down to the dry Rhône plain. A typical plateau glacier on top of the ridge is partly drained through karstic underground formations and linked to various springs on either side of the water divide. The main anthropogenic influences on the system are reservoirs and diversions to the irrigation channels. Thus, the study area does not cover a classical hydrological basin, as the water frequently flows across natural hydrographic boundaries. This is a big challenge from a hydrological point of view, as we cannot easily achieve a closed, measured water balance. Over and above, a lack of comprehensive historical data in the catchment reduces the degree of process conceptualization possible and prohibits the usual parameter estimation procedures. The Penn State Integrated Hydrologic Model (PIHM) (Kumar, 2009) was selected to estimate the available natural water resource for the whole study area. It is a semi-discrete, physically-based model which includes channel routing, overland flow, saturated and unsaturated subsurface flow, rainfall interception, snowmelt, and evapotranspiration. Its unstructured mesh offers a flexible domain decomposition strategy for efficient and accurate integration of the physiographic, climatic, and hydrographic characteristics of the watershed. The model was modified in order to be more suitable for a karstified mountainous catchment: it now includes the possibility of adding external point sources, and the temperature-index approach for estimating melt was adjusted to include the influence of solar radiation. No parameter calibration in a classical sense was used, as sufficient observations are missing. Hence, parameters were estimated with values obtained from the literature, and catchment boundaries were determined based on tracer experiments as well as the relationship between precipitation, spring discharge, and river discharge. Historical data such as river discharge, infiltration experiments, and snow and glacier mass balance measurements were used to validate the simulations. Here some case studies are presented, illustrating the difficulty of estimating snowmelt and ice-melt parameters and of judging their correctness, as well as the consequent sensitivity of the regional water balance. REFERENCES Kumar, M. 2009: Toward a hydrologic modeling system. PhD Thesis, Department of Civil and Environmental Engineering, Pennsylvania State University, USA.
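A common way to fold solar radiation into a temperature-index scheme, as described above, is to augment the degree-day factor with a radiation term, melt = (DDF + r · I_sw) · max(T − T0, 0). The Python sketch below shows this general form with illustrative coefficients; it is an assumption about the typical approach, not the project's calibrated formulation.

```python
# Sketch of a radiation-adjusted temperature-index melt model of the kind
# described above (coefficients are illustrative, not MontanAqua values):
#   melt = (DDF + r * I_sw) * max(T - T0, 0)
def daily_melt(temp_c, sw_rad, ddf=4.0, r=0.01, t0=0.0):
    """temp_c: mean daily air temperature (deg C); sw_rad: incoming
    shortwave radiation (W m^-2); returns melt in mm w.e. per day."""
    return (ddf + r * sw_rad) * max(temp_c - t0, 0.0)

print(daily_melt(5.0, 250.0))   # 5 deg C on a sunny day -> 32.5 mm w.e.
```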
The Modular Modeling System (MMS): A toolbox for water- and environmental-resources management
Leavesley, G.H.; Markstrom, S.L.; Viger, R.J.; Hay, L.E.; ,
2005-01-01
The increasing complexity of water- and environmental-resource problems requires modeling approaches that incorporate knowledge from a broad range of scientific and software disciplines. To address this need, the U.S. Geological Survey (USGS) has developed the Modular Modeling System (MMS). MMS is an integrated system of computer software for model development, integration, and application. Its modular design allows a high level of flexibility and adaptability, enabling modelers to incorporate their own software into a rich array of built-in models and modeling tools. These include individual process models, tightly coupled models, loosely coupled models, and fully-integrated decision support systems. A geographic information system (GIS) interface, the USGS GIS Weasel, has been integrated with MMS to enable spatial delineation and characterization of basin and ecosystem features, and to provide objective parameter-estimation methods for models using available digital data. MMS provides optimization and sensitivity-analysis tools to analyze model parameters and evaluate the extent to which uncertainty in model parameters affects uncertainty in simulation results. MMS has been coupled with the Bureau of Reclamation object-oriented reservoir and river-system modeling framework, RiverWare, to develop models to evaluate and apply optimal resource-allocation and management strategies to complex, operational decisions on multipurpose reservoir systems and watersheds. This decision support system approach has been developed, tested, and implemented in the Gunnison, Yakima, San Joaquin, Rio Grande, and Truckee River basins of the western United States. MMS is currently being coupled with the U.S. Forest Service model SIMulating Patterns and Processes at Landscape Scales (SIMPPLLE) to assess the effects of alternative vegetation-management strategies on a variety of hydrological and ecological responses. Initial development and testing of the MMS-SIMPPLLE integration is being conducted on the Colorado Plateau region of the western United States.
Methodology of automated ionosphere front velocity estimation for ground-based augmentation of GNSS
NASA Astrophysics Data System (ADS)
Bang, Eugene; Lee, Jiyun
2013-11-01
Ionospheric anomalies occurring during severe ionospheric storms can pose integrity threats to Global Navigation Satellite System (GNSS) Ground-Based Augmentation Systems (GBAS). Ionospheric anomaly threat models for each region of operation need to be developed to analyze the potential impact of these anomalies on GBAS users and to develop mitigation strategies. Along with the magnitude of ionospheric gradients, the speed of the ionosphere "fronts" in which these gradients are embedded is an important parameter for simulation-based GBAS integrity analysis. This paper presents a methodology for automated ionosphere front velocity estimation that will be used to analyze vast amounts of ionospheric data, build ionospheric anomaly threat models for different regions, and monitor ionospheric anomalies continuously going forward. The procedure automatically selects stations that show a similar trend of ionospheric delays, computes the orientation of detected fronts using a three-station-based trigonometric method, and estimates front speeds using a two-station-based method. It also includes fine-tuning methods that make the estimation robust against faulty measurements and modeling errors. The performance of the algorithm is demonstrated by comparing the results of automated speed estimation to those computed manually in previous work. All speed estimates from the automated algorithm fall within error bars of ±30% of the manually computed speeds. In addition, the algorithm is used to populate the current threat space with newly generated threat points. A larger number of velocity estimates helps us to better understand the behavior of ionospheric gradients under geomagnetic storm conditions.
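Once a front's propagation direction has been fixed by the three-station orientation step, the two-station speed estimate reduces to projecting the station baseline onto that direction and dividing by the crossing-time difference. The Python sketch below illustrates that geometry; the coordinates, times, and direction are invented for illustration, and this is a simplification of the paper's full procedure.

```python
import numpy as np

# Sketch of the two-station speed estimate: given the front's unit
# propagation direction, speed follows from the delay between the times
# the front crosses the two stations. Numbers are illustrative only.
def front_speed(p1, p2, t1, t2, direction):
    """p1, p2: station coordinates (km); t1, t2: crossing times (s);
    direction: unit vector of front propagation."""
    baseline = np.asarray(p2, float) - np.asarray(p1, float)
    projected = float(np.dot(baseline, direction))  # km along propagation
    return 1000.0 * projected / (t2 - t1)           # m/s

d = np.array([1.0, 0.0])                            # front moving due east
print(front_speed([0, 0], [60, 20], 0.0, 300.0, d)) # -> 200 m/s
```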
NASA Astrophysics Data System (ADS)
Stoppe, N.; Horn, R.
2017-01-01
A basic understanding of soil behavior on the meso- and macroscale (i.e., soil aggregates and bulk soil, respectively) requires knowledge of the processes at the microscale (i.e., the particle scale); therefore, rheological investigations of natural soils are receiving growing attention. In the present research, homogenized and sieved (< 2 mm) samples from marshland soils of the riparian zone of the River Elbe (North Germany) were analyzed with a modular compact rheometer MCR 300 (Anton Paar, Ostfildern, Germany) with a profiled parallel-plate measuring system. Amplitude sweep tests (AST) with controlled shear deformation were conducted to investigate the viscoelastic properties of the studied soils under oscillatory stress. The gradual depletion of microstructural stiffness during AST can be characterized not only by the well-known rheological parameters G′, G″ and tan δ but also by the dimensionless area parameter integral z, which quantifies the elasticity of the microstructure. To identify the physicochemical parameters that influence microstructural stiffness, statistical tests were used that take the combined effects of these parameters into account. Although the influence of the individual factors varies depending on soil texture, the physicochemical features significantly affecting soil microstructure were identified. Based on the determined statistical relationships between rheological and physicochemical parameters, pedotransfer functions (PTF) have been developed, which allow a mathematical estimation of the rheological target value integral z. Stabilizing factors are soil organic matter, the concentration of Ca2+, and the contents of CaCO3 and pedogenic iron oxides, whereas the concentration of Na+ and the water content are structurally unfavorable factors.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only sensitive parameter for model performance in the ON-N and NH3-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive for the ON-N and NO3-N simulations, as measured using global sensitivity analysis. PMID:28704958
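GLUE, as used here, works by sampling parameter sets from prior ranges, scoring each set with a likelihood measure such as the Nash-Sutcliffe efficiency, and retaining the "behavioural" sets above a threshold to characterize parameter uncertainty. The Python sketch below shows the idea on a toy first-order model; it is not the R-FME implementation, and the model, ranges, and threshold are illustrative.

```python
import numpy as np

# Minimal GLUE-style sketch (toy decay model, not the WSP nitrogen model):
# sample parameters, score each set with Nash-Sutcliffe efficiency, and
# keep the "behavioural" sets above an acceptance threshold.
rng = np.random.default_rng(0)

def model(k, t):                 # toy first-order decay
    return 10.0 * np.exp(-k * t)

t = np.linspace(0, 5, 20)
obs = model(0.7, t) + rng.normal(0, 0.3, t.size)   # synthetic observations

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

k_samples = rng.uniform(0.1, 2.0, 5000)            # prior range (assumed)
scores = np.array([nse(model(k, t), obs) for k in k_samples])
behavioural = k_samples[scores > 0.5]              # GLUE acceptance threshold
print(len(behavioural), np.percentile(behavioural, [5, 50, 95]))
```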
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state-of-the-art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Structural reliability methods: Code development status
NASA Technical Reports Server (NTRS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-01-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state-of-the-art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Bled, Florent; Belant, Jerrold L; Van Daele, Lawrence J; Svoboda, Nathan; Gustine, David; Hilderbrand, Grant; Barnes, Victor G
2017-11-01
Current management of large carnivores is informed using a variety of parameters, methods, and metrics; however, these data are typically considered independently. Sharing information among data types based on the underlying ecological processes, and recognizing observation biases, can improve estimation of individual and global parameters. We present a general integrated population model (IPM), specifically designed for brown bears (Ursus arctos), using three common data types for bear (U. spp.) populations: repeated counts, capture-mark-recapture, and litter size. We considered factors affecting ecological and observation processes for these data. We assessed the practicality of this approach on a simulated population and compared estimates from our model to values used for simulation and results from count data only. We then present a practical application of this general approach adapted to the constraints of a case study using historical data available for brown bears on Kodiak Island, Alaska, USA. The IPM provided more accurate and precise estimates than models accounting for repeated count data only, with credible intervals including the true population 94% and 5% of the time, respectively. For the Kodiak population, we estimated annual average litter size (within one year after birth) to vary between 0.45 [95% credible interval: 0.43; 0.55] and 1.59 [1.55; 1.82]. We detected a positive relationship between salmon availability and adult survival, with survival probabilities greater for females than males. Survival probabilities increased from cubs to yearlings to dependent young ≥2 years old and decreased with litter size. Linking multiple information sources based on ecological and observation mechanisms can provide more accurate and precise estimates, to better inform management. IPMs can also reduce data collection efforts by sharing information among agencies and management units. Our approach responds to an increasing need in bear population management and can be readily adapted to other large carnivores.
Bled, Florent; Belant, Jerrold L.; Van Daele, Lawrence J.; Svoboda, Nathan; Gustine, David D.; Hilderbrand, Grant V.; Barnes, Victor G.
2017-01-01
Current management of large carnivores is informed using a variety of parameters, methods, and metrics; however, these data are typically considered independently. Sharing information among data types based on the underlying ecological processes, and recognizing observation biases, can improve estimation of individual and global parameters. We present a general integrated population model (IPM), specifically designed for brown bears (Ursus arctos), using three common data types for bear (U. spp.) populations: repeated counts, capture–mark–recapture, and litter size. We considered factors affecting ecological and observation processes for these data. We assessed the practicality of this approach on a simulated population and compared estimates from our model to values used for simulation and results from count data only. We then present a practical application of this general approach adapted to the constraints of a case study using historical data available for brown bears on Kodiak Island, Alaska, USA. The IPM provided more accurate and precise estimates than models accounting for repeated count data only, with credible intervals including the true population 94% and 5% of the time, respectively. For the Kodiak population, we estimated annual average litter size (within one year after birth) to vary between 0.45 [95% credible interval: 0.43; 0.55] and 1.59 [1.55; 1.82]. We detected a positive relationship between salmon availability and adult survival, with survival probabilities greater for females than males. Survival probabilities increased from cubs to yearlings to dependent young ≥2 years old and decreased with litter size. Linking multiple information sources based on ecological and observation mechanisms can provide more accurate and precise estimates, to better inform management. IPMs can also reduce data collection efforts by sharing information among agencies and management units. Our approach responds to an increasing need in bear population management and can be readily adapted to other large carnivores.
A Method for Precision Closed-Loop Irrigation Using a Modified PID Control Algorithm
NASA Astrophysics Data System (ADS)
Goodchild, Martin; Kühn, Karl; Jenkins, Malcolm; Burek, Kazimierz; Dutton, Andrew
2016-04-01
The benefits of closed-loop irrigation control have been demonstrated in grower trials, which show the potential for improved crop yields and resource usage. Managing water use by controlling irrigation in response to soil moisture changes to meet crop water demands is a popular approach, but it requires knowledge of closed-loop control practice. In theory, to obtain precise closed-loop control of a system it is necessary to characterise every component in the control loop to derive the appropriate controller parameters, i.e. the proportional, integral and derivative (PID) parameters in a classic PID controller. In practice this is often difficult to achieve. Empirical methods are employed to estimate the PID parameters by observing how the system performs under open-loop conditions. In this paper we present a modified PID controller, with a constrained integral function, that delivers excellent regulation of soil moisture by supplying the appropriate amount of water to meet the needs of the plant during the diurnal cycle. Furthermore, the modified PID controller responds quickly to changes in environmental conditions, including rainfall events, which can otherwise result in controller windup, under-watering, and plant stress. The experimental work successfully demonstrates the functionality of a constrained-integral PID controller that delivers robust and precise irrigation control. Data from a coir-substrate strawberry growing trial are also presented, illustrating soil moisture control and the ability to match water delivery to solar radiation.
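The constrained integral function described above is essentially an anti-windup measure: the integral term is clamped to a bounded range so that disturbances such as rainfall cannot wind it up and cause prolonged under-watering afterwards. A minimal Python sketch of such a controller follows; the gains, limits, and soil moisture numbers are illustrative, not the paper's tuned values.

```python
# Sketch of a PID controller with a clamped (constrained) integral term,
# the anti-windup idea described above; gains and limits are illustrative.
class ConstrainedPID:
    def __init__(self, kp, ki, kd, i_min, i_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # clamp the integral so rainfall-driven disturbances cannot wind
        # it up and cause prolonged under-watering afterwards
        self.integral = min(max(self.integral, self.i_min), self.i_max)
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = ConstrainedPID(kp=2.0, ki=0.1, kd=0.5, i_min=-5.0, i_max=5.0)
valve = pid.update(setpoint=0.35, measurement=0.30, dt=60.0)  # VWC, m3/m3
print(valve)
```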
Integrated identification and control for nanosatellites reclaiming failed satellite
NASA Astrophysics Data System (ADS)
Han, Nan; Luo, Jianjun; Ma, Weihua; Yuan, Jianping
2018-05-01
Using nanosatellites to reclaim a failed satellite requires the nanosatellites to attach to its surface and take over its attitude control function. This is challenging, since parameters including the inertia matrix of the combined spacecraft and the relative attitude of the attached nanosatellites with respect to the given body-fixed frame of the failed satellite are all unknown after attachment. Moreover, if the total control capacity needs to be increased during the reclaiming process by adding new nanosatellites, real-time parameter updating is necessary. For these reasons, an integrated identification and control method is proposed in this paper, which enables real-time parameter identification and attitude takeover control to be conducted concurrently. Identification of the inertia matrix of the combined spacecraft and of the relative attitude of the attached nanosatellites are both considered. To guarantee sufficient excitation for the identification of the inertia matrix, a modified identification equation is established by filtering out sample points that lead to ill-conditioned identification, and the identification performance for the inertia matrix is improved. Based on the real-time estimated inertia matrix, an attitude takeover controller is designed, and the stability of the controller is analysed using the Lyapunov method. The commanded control torques are allocated to each nanosatellite while satisfying the control saturation constraint, using the Quadratic Programming (QP) method. Numerical simulations are carried out to demonstrate the feasibility and effectiveness of the proposed integrated identification and control method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
Pathak, Shriram M; Ruff, Aaron; Kostewicz, Edmund S; Patel, Nikunjkumar; Turner, David B; Jamei, Masoud
2017-12-04
Mechanistic modeling of in vitro data generated from metabolic enzyme systems (viz., liver microsomes, hepatocytes, rCYP enzymes, etc.) facilitates in vitro-in vivo extrapolation (IVIV_E) of metabolic clearance which plays a key role in the successful prediction of clearance in vivo within physiologically-based pharmacokinetic (PBPK) modeling. A similar concept can be applied to solubility and dissolution experiments whereby mechanistic modeling can be used to estimate intrinsic parameters required for mechanistic oral absorption simulation in vivo. However, this approach has not widely been applied within an integrated workflow. We present a stepwise modeling approach where relevant biopharmaceutics parameters for ketoconazole (KTZ) are determined and/or confirmed from the modeling of in vitro experiments before being directly used within a PBPK model. Modeling was applied to various in vitro experiments, namely: (a) aqueous solubility profiles to determine intrinsic solubility, salt limiting solubility factors and to verify pKa; (b) biorelevant solubility measurements to estimate bile-micelle partition coefficients; (c) fasted state simulated gastric fluid (FaSSGF) dissolution for formulation disintegration profiling; and (d) transfer experiments to estimate supersaturation and precipitation parameters. These parameters were then used within a PBPK model to predict the dissolved and total (i.e., including the precipitated fraction) concentrations of KTZ in the duodenum of a virtual population and compared against observed clinical data. The developed model well characterized the intraluminal dissolution, supersaturation, and precipitation behavior of KTZ. The mean simulated AUC0-t of the total and dissolved concentrations of KTZ were comparable to (within 2-fold of) the corresponding observed profile. Moreover, the developed PBPK model of KTZ successfully described the impact of supersaturation and precipitation on the systemic plasma concentration profiles of KTZ for 200, 300, and 400 mg doses. These results demonstrate that IVIV_E applied to biopharmaceutical experiments can be used to understand and build confidence in the quality of the input parameters and mechanistic models used for mechanistic oral absorption simulations in vivo, thereby improving the prediction performance of PBPK models. Moreover, this approach can inform the selection and design of in vitro experiments, potentially eliminating redundant experiments and thus helping to reduce the cost and time of drug product development.
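The transfer experiment in (d) is commonly modeled by pumping a concentrated acidic-phase drug solution into simulated intestinal fluid, letting the drug supersaturate, and triggering first-order precipitation once a critical supersaturation ratio is exceeded. The Python sketch below is a toy version of that mechanism; every rate constant, solubility, and volume is an invented illustration, not a fitted KTZ parameter or the authors' model.

```python
import numpy as np

# Toy sketch of a gastric-to-intestinal "transfer experiment": dissolved
# drug is pumped into simulated intestinal fluid, supersaturates, and
# precipitates after a critical supersaturation ratio (CSR) is exceeded.
# All values are illustrative, not fitted KTZ parameters.
def transfer(c_donor=800.0, m_donor=200.0, q=0.002, v_int=0.35,
             sol_int=15.0, kp=0.02, csr=3.0, dt=0.5, t_end=180.0):
    """c_donor: donor concentration (mg/L); q: pump rate (L/min);
    sol_int: intestinal solubility (mg/L); kp: precipitation rate (1/min)."""
    m_dissolved, nucleated, conc = 0.0, False, []
    for _ in np.arange(0.0, t_end, dt):
        pumped = min(q * dt * c_donor, m_donor)      # mg transferred this step
        m_donor -= pumped
        m_dissolved += pumped
        c = m_dissolved / v_int
        if c > csr * sol_int:
            nucleated = True                         # nucleation triggered
        if nucleated and c > sol_int:                # precipitate toward solubility
            m_dissolved -= kp * (c - sol_int) * v_int * dt
        conc.append(m_dissolved / v_int)
    return np.array(conc)

c = transfer()
print(round(c.max(), 1), round(c[-1], 1))  # peak supersaturation, then decline
```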
NASA Astrophysics Data System (ADS)
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes, including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-07-01
Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
D'Agnese, F. A.; Faunt, C. C.; Turner, A. Keith
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Space shuttle propulsion estimation development verification
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground-based radar tracking data. The key feature in this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model used. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good-quality propulsion estimates.
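The estimator described is an extended Kalman filter over a state augmented with propulsion-model deviations. The Python sketch below shows a generic EKF predict/update cycle and exercises it on a toy problem where a constant parameter deviation is tracked from noisy measurements; for this linear toy the EKF reduces to an ordinary Kalman filter. It is a textbook illustration, not the flight-data program.

```python
import numpy as np

# Generic EKF predict/update sketch (not the post-flight program itself):
# f, h are the process and measurement models, F, H their Jacobians.
def ekf_step(x, P, z, f, F, h, H, Q, R):
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                         # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy usage: track a constant parameter deviation from noisy measurements
rng = np.random.default_rng(0)
x, P = np.array([0.0]), np.eye(1)
F = H = np.eye(1); Q = 1e-4 * np.eye(1); R = 0.04 * np.eye(1)
truth = 0.8                                    # true (unknown) deviation
for _ in range(200):
    z = np.array([truth + 0.2 * rng.standard_normal()])
    x, P = ekf_step(x, P, z, lambda s: s, F, lambda s: s, H, Q, R)
print(x, P)   # estimate converges near 0.8 with shrinking variance
```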
A "total parameter estimation" method in the varification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, and hence model parameters are commonly estimated from discharge measurements at the catchment outlets. With the advancement of hydrological sciences and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As distributed hydrological models can simulate the physical processes within a catchment, we can obtain a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic, and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is concentrated during the rainy season from June to August each year. During other months, many of the perennial rivers within the basin dry up. Thus, a single runoff simulation does not fully exploit a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across the various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of distributed hydrological models and provides a new way of thinking in the hydrological sciences.
Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina
2016-10-21
In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculating free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
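In the usual single-s formulation of EDS, the reference-state potential is V_R = -1/(βs) ln Σᵢ exp(-βs(Vᵢ - Eᵢᴿ)), with smoothness parameter s and energy offsets Eᵢᴿ, which are precisely the reference-state parameters that RE-EDS distributes across replicas. A small Python sketch of this energy function follows; the energies and offsets in the example are arbitrary illustrative numbers.

```python
import numpy as np

# Sketch of the EDS reference-state energy for N end states:
#   V_R = -1/(beta*s) * ln( sum_i exp(-beta*s*(V_i - E_i^R)) )
# s is the smoothness parameter, E_i^R are the energy offsets.
def eds_reference_energy(V, offsets, beta, s=1.0):
    """V: end-state potential energies; offsets: E_i^R (same shape)."""
    a = -beta * s * (np.asarray(V) - np.asarray(offsets))
    a_max = a.max()                        # log-sum-exp for numerical stability
    return -(a_max + np.log(np.exp(a - a_max).sum())) / (beta * s)

beta = 1.0 / (0.008314 * 300.0)            # 1/(k_B T) in mol/kJ at 300 K
print(eds_reference_energy([-120.0, -95.0], offsets=[0.0, -20.0], beta=beta))
```

Choosing the offsets so that each end state dominates the sum over a comparable fraction of configuration space is exactly the "equal sampling" problem the abstract describes.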
Multivariate analysis of ATR-FTIR spectra for assessment of oil shale organic geochemical properties
Washburn, Kathryn E.; Birdwell, Justin E.
2013-01-01
In this study, attenuated total reflectance (ATR) Fourier transform infrared spectroscopy (FTIR) was coupled with partial least squares regression (PLSR) analysis to relate spectral data to parameters from total organic carbon (TOC) analysis and programmed pyrolysis to assess the feasibility of developing predictive models to estimate important organic geochemical parameters. The advantage of ATR-FTIR over traditional analytical methods is that source rocks can be analyzed in the laboratory or field in seconds, facilitating more rapid and thorough screening than would be possible using other tools. ATR-FTIR spectra, TOC concentrations and Rock–Eval parameters were measured for a set of oil shales from deposits around the world and several pyrolyzed oil shale samples. PLSR models were developed to predict the measured geochemical parameters from infrared spectra. Application of the resulting models to a set of test spectra excluded from the training set generated accurate predictions of TOC and most Rock–Eval parameters. The critical region of the infrared spectrum for assessing S1, S2, Hydrogen Index and TOC consisted of aliphatic organic moieties (2800–3000 cm−1) and the models generated a better correlation with measured values of TOC and S2 than did integrated aliphatic peak areas. The results suggest that combining ATR-FTIR with PLSR is a reliable approach for estimating useful geochemical parameters of oil shales that is faster and requires less sample preparation than current screening methods.
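The PLSR step maps high-dimensional ATR-FTIR spectra onto a handful of latent components and regresses the geochemical response on them. A minimal Python sketch using scikit-learn follows, with synthetic stand-ins for the spectra and a TOC-like response; the band location and component count are arbitrary choices for illustration, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Sketch of the spectra-to-geochemistry workflow with synthetic stand-ins
# for ATR-FTIR spectra (columns = wavenumbers) and a TOC-like response.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 400))              # 60 "spectra", 400 wavenumbers
w = np.zeros(400); w[150:200] = 0.5         # pretend one band carries signal
y = X @ w + rng.normal(0, 0.5, 60)          # synthetic "TOC" values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print("R^2 on held-out spectra:", pls.score(X_te, y_te))
```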
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, including knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large, complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, followed by parameter estimation; parameterization was performed to reduce the number of parameters to be calibrated. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
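For illustration, a minimal PSO loop for parameter calibration might look as follows (a sketch, with a hypothetical objective() standing in for a SWAT-style error metric; coefficient values are common defaults, not from the paper):

```python
# Minimal particle swarm optimization (PSO) sketch for model calibration.
import numpy as np

def objective(p):
    return np.sum((p - 0.3) ** 2)       # stand-in for a model-vs-observation error

rng = np.random.default_rng(1)
n_particles, n_dims, n_iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

x = rng.uniform(0, 1, (n_particles, n_dims))   # positions = scaled parameter sets
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    x = np.clip(x + v, 0, 1)                                   # stay in bounds
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best parameters:", gbest)
```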
Estimates of the atmospheric parameters of M-type stars: a machine-learning perspective
NASA Astrophysics Data System (ADS)
Sarro, L. M.; Ordieres-Meré, J.; Bello-García, A.; González-Marcos, A.; Solano, E.
2018-05-01
Estimating the atmospheric parameters of M-type stars has been a difficult task due to the lack of simple diagnostics in the stellar spectra. We aim at uncovering good sets of predictive features of stellar atmospheric parameters (Teff, log (g), [M/H]) in spectra of M-type stars. We define two types of potential features (equivalent widths and integrated flux ratios) able to explain the atmospheric physical parameters. We search the space of feature sets using a genetic algorithm that evaluates solutions by their prediction performance in the framework of the BT-Settl library of stellar spectra. Thereafter, we construct eight regression models using different machine-learning techniques and compare their performances with those obtained using the classical χ2 approach and independent component analysis (ICA) coefficients. Finally, we validate the various alternatives using two sets of real spectra from the NASA Infrared Telescope Facility (IRTF) and Dwarf Archives collections. We find that the cross-validation errors are poor measures of the performance of regression models in the context of physical parameter prediction in M-type stars. For R ˜ 2000 spectra with signal-to-noise ratios typical of the IRTF and Dwarf Archives, feature selection with genetic algorithms or alternative techniques produces only marginal advantages with respect to representation spaces that are unconstrained in wavelength (full spectrum or ICA). We make available the atmospheric parameters for the two collections of observed spectra as online material.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
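As a sketch of the underlying principle (standard maximum-entropy reasoning, not notation from the paper): maximizing the entropy functional subject to a fixed expectation of the error function E(θ) and normalization yields the canonical form,

```latex
\max_{p}\; -\!\int p(\theta)\,\ln p(\theta)\,d\theta
\quad\text{s.t.}\quad
\int p(\theta)\,E(\theta)\,d\theta=\langle E\rangle,\qquad
\int p(\theta)\,d\theta=1
\;\;\Longrightarrow\;\;
p(\theta\mid D)=\frac{e^{-\beta E(\theta)}}{\int e^{-\beta E(\theta')}\,d\theta'}
```

with the sensitivity factor β fixed by the expectation-value constraint; marginals for individual parameters then follow by integrating p over the remaining parameters.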
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Chaopeng; Fang, Kuai; Ludwig, Noel
The DOE and BLM identified 285,000 acres of desert land in the Chuckwalla Valley in the western U.S. for solar energy development. In addition to several approved solar projects, a pumped storage project was recently proposed to pump nearly 8,000 acre-ft/yr of groundwater to store and stabilize solar energy output. This study aims to provide estimates of the amount of naturally occurring recharge and of the impact of the pumping on the water table. To better locate and quantify natural recharge, this study employs an integrated, physically based hydrologic model, PAWS+CLM, to calculate recharge. The simulated recharge is then used in a parameter estimation package to calibrate a spatially distributed K field. This design incorporates all available observational data, including soil moisture monitoring stations, groundwater head, and estimates of groundwater conductivity, to constrain the modeling. To address the uncertainty of the soil parameters, an ensemble of simulations is conducted, and the resulting recharges are either rejected or accepted based on calibrated groundwater head and local variation of the K field. The results indicate that the natural total inflow to the study domain is between 7,107 and 12,772 afy. During the initial-fill phase of the pumped storage project, the total outflow exceeds the upper-bound estimate of the inflow. If the initial fill is annualized to 20 years, the average pumping is more than the lower bound of the inflows. The results indicate that after adding the pumped storage project, the system will be nearing, if not exceeding, its maximum renewable pumping capacity. The accepted recharges lead to a drawdown range of 24 to 45 ft for an assumed specific yield of 0.05. However, the drawdown is sensitive to this parameter, and there are insufficient data to adequately constrain it.
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the boron data series, different ARIMA models are identified; the model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predictions from the best ARIMA models are compared to the observed data and show reasonably good agreement. The comparison of the mean and variance of the 3-year (2002-2004) observed data vs. the data predicted by the selected best models shows that the boron model from the ARIMA modeling approach can be used safely, since the predicted values preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting boron concentration series of a river.
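The three Box-Jenkins steps above translate directly into code. A minimal sketch with statsmodels on a synthetic series (not the boron data) follows:

```python
# Identification via AIC grid search, estimation, and diagnostic checking.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=200))            # stand-in for a concentration series

# Identification: search a small (p, d, q) grid and keep the lowest AIC.
best = min(
    ((p, d, q) for p in range(3) for d in range(2) for q in range(3)),
    key=lambda order: ARIMA(y, order=order).fit().aic,
)
res = ARIMA(y, order=best).fit()

# Diagnostic checking: residuals should be uncorrelated (Ljung-Box p > 0.05).
print("selected order:", best)
print(acorr_ljungbox(res.resid, lags=[10]))
```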
NASA Astrophysics Data System (ADS)
Peters-Lidard, C. D.; Kumar, S. V.; Santanello, J. A.; Tian, Y.; Rodell, M.; Mocko, D.; Reichle, R.
2008-12-01
The Land Information System (LIS; http://lis.gsfc.nasa.gov; Kumar et al., 2006; Peters-Lidard et al., 2007) is a flexible land surface modeling framework that has been developed with the goal of integrating satellite- and ground-based observational data products and advanced land surface modeling techniques to produce optimal fields of land surface states and fluxes. The LIS software was the co-winner of NASA's 2005 Software of the Year award. LIS facilitates the integration of observations from Earth-observing systems and predictions and forecasts from Earth System and Earth science models into the decision-making processes of partnering agency and national organizations. Due to its flexible software design, LIS can serve both as a Problem Solving Environment (PSE) for hydrologic research to enable accurate global water and energy cycle predictions, and as a Decision Support System (DSS) to generate useful information for application areas including disaster management, water resources management, agricultural management, numerical weather prediction, air quality and military mobility assessment. LIS has evolved from two earlier efforts - the North American Land Data Assimilation System (NLDAS; Mitchell et al. 2004) and the Global Land Data Assimilation System (GLDAS; Rodell et al. 2004) - that focused primarily on improving numerical weather prediction skills by improving the characterization of the land surface conditions. Both of these systems now use specific configurations of the LIS software in their current implementations. LIS not only consolidates the capabilities of these two systems, but also enables a much larger variety of configurations with respect to horizontal spatial resolution, input datasets and choice of land surface model through 'plugins'. In addition to these capabilities, LIS has also been demonstrated for parameter estimation (Peters-Lidard et al., 2008; Santanello et al., 2007) and data assimilation (Kumar et al., 2008). Examples and case studies demonstrating the capabilities and impacts of LIS for hydrometeorological modeling, land data assimilation and parameter estimation will be presented.
Rapid prototyping of soil moisture estimates using the NASA Land Information System
NASA Astrophysics Data System (ADS)
Anantharaj, V.; Mostovoy, G.; Li, B.; Peters-Lidard, C.; Houser, P.; Moorhead, R.; Kumar, S.
2007-12-01
The Land Information System (LIS), developed at the NASA Goddard Space Flight Center, is a functional Land Data Assimilation System (LDAS) that incorporates a suite of land models in an interoperable computational framework. LIS has been integrated into a computational Rapid Prototyping Capabilities (RPC) infrastructure. LIS consists of a core, a number of community land models, data servers, and visualization systems, integrated in a high-performance computing environment. The land surface models (LSM) in LIS incorporate surface and atmospheric parameters of temperature, snow/water, vegetation, albedo, soil conditions, topography, and radiation. Many of these parameters are available from in-situ observations, numerical model analysis, and from NASA, NOAA, and other remote sensing satellite platforms at various spatial and temporal resolutions. The computational resources, available to LIS via the RPC infrastructure, support e-Science experiments involving the global modeling of land-atmosphere studies at 1-km spatial resolution as well as regional studies at finer resolutions. The Noah Land Surface Model, available within LIS, is being used to rapidly prototype soil moisture estimates in order to evaluate the viability of other science applications for decision making purposes. For example, LIS has been used to further extend the utility of the USDA Soil Climate Analysis Network of in-situ soil moisture observations. In addition, LIS also supports data assimilation capabilities that are used to assimilate remotely sensed soil moisture retrievals from the AMSR-E instrument onboard the Aqua satellite. The rapid prototyping of soil moisture estimates using LIS and their applications will be illustrated during the presentation.
Alomari, Ali Hamed; Wille, Marie-Luise; Langton, Christian M
2018-02-01
Conventional mechanical testing is the 'gold standard' for assessing the stiffness (N mm⁻¹) and strength (MPa) of bone, although it is not applicable in-vivo since it is inherently invasive and destructive. The mechanical integrity of a bone is determined by its quantity and quality, related primarily to bone density and structure respectively. Several non-destructive, non-invasive, in-vivo techniques have been developed and clinically implemented to estimate bone density, both areal (dual-energy X-ray absorptiometry (DXA)) and volumetric (quantitative computed tomography (QCT)). Quantitative ultrasound (QUS) parameters of velocity and attenuation are dependent upon both bone quantity and bone quality, although it has not been possible to date to transpose one particular QUS parameter into separate estimates of quantity and quality. It has recently been shown that ultrasound transit time spectroscopy (UTTS) may provide an accurate estimate of bone density and hence quantity. We hypothesised that UTTS also has the potential to provide an estimate of bone structure and hence quality. In this in-vitro study, 16 human femoral bone samples were tested utilising three techniques: UTTS, micro computed tomography (μCT), and mechanical testing. UTTS was utilised to estimate bone volume fraction (BV/TV) and two novel structural parameters, the inter-quartile range of the derived transit time (UTTS-IQR) and the transit time of maximum proportion of sonic-rays (TTMP). μCT was utilised to derive BV/TV along with several bone structure parameters. A destructive mechanical test was utilised to measure the stiffness and strength (failure load) of the bone samples. BV/TV was calculated from the derived transit time spectrum (TTS); the correlation coefficient (R²) with μCT-BV/TV was 0.885. For predicting mechanical stiffness and strength, BV/TV derived by both μCT and UTTS provided the strongest correlation with mechanical stiffness (R² = 0.567 and 0.618 respectively) and mechanical strength (R² = 0.747 and 0.736 respectively). When the respective structural parameters were incorporated with BV/TV, multiple regression analysis indicated that none of the μCT histomorphometric parameters could improve the prediction of mechanical stiffness and strength, while for UTTS, adding TTMP to BV/TV increased the prediction of mechanical stiffness to R² = 0.711 and strength to R² = 0.827. It is therefore envisaged that UTTS may have the ability to estimate BV/TV along with providing an improved prediction of osteoporotic fracture risk within routine clinical practice in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Kai; Ma, Xiaopeng; Li, Yanlai; Wu, Haiyang; Cui, Chenyu; Zhang, Xiaoming; Zhang, Hao; Yao, Jun
Hydraulic fracturing is an important measure for the development of tight reservoirs. In order to describe the distribution of hydraulic fractures, micro-seismic diagnostics were introduced into the petroleum field. Micro-seismic events may reveal important information about the static characteristics of hydraulic fracturing. However, this method is limited to reflecting the distribution area of the hydraulic fractures and fails to provide specific parameters. Therefore, in this paper micro-seismic technology is integrated with history matching to predict the hydraulic fracture parameters. Micro-seismic source locations are used to describe the basic shape of the hydraulic fractures. After that, secondary modeling is used to calibrate the parameter information of the hydraulic fractures by means of a DFM (discrete fracture model) and a history matching method. In consideration of the fractal features of hydraulic fractures, a fractal fracture network model is established to evaluate this method in a numerical experiment. The results clearly show the effectiveness of the proposed approach for estimating the parameters of hydraulic fractures.
Rosinska, M; Gwiazda, P; De Angelis, D; Presanis, A M
2016-04-01
HIV spread in men who have sex with men (MSM) is an increasing problem in Poland. Despite the existence of a surveillance system, there is no direct evidence to allow estimation of HIV prevalence and the proportion undiagnosed in MSM. We extracted data on HIV and the MSM population in Poland, including case-based surveillance data, diagnostic testing prevalence data and behavioural data relating to self-reported prior diagnosis, stratified by age (⩽35, >35 years) and region (Mazowieckie including the capital city of Warsaw; other regions). They were integrated into one model based on a Bayesian evidence synthesis approach. The posterior distributions for HIV prevalence and the undiagnosed fraction were estimated by Markov Chain Monte Carlo methods. To improve the model fit we repeated the analysis, introducing bias parameters to account for potential lack of representativeness in data. By placing additional constraints on bias parameters we obtained precisely identified estimates. This family of models indicates a high undiagnosed fraction [68·3%, 95% credibility interval (CrI) 53·9-76·1] and overall low prevalence (2·3%, 95% CrI 1·4-4·1) of HIV in MSM. Additional data are necessary in order to produce more robust epidemiological estimates. More effort is urgently needed to ensure timely diagnosis of HIV in Poland.
NASA Astrophysics Data System (ADS)
Kazeykina, Anna; Muñoz, Claudio
2018-04-01
We continue our study of the Cauchy problem for the two-dimensional Novikov-Veselov (NV) equation, integrable via the inverse scattering transform for the two-dimensional Schrödinger operator at a fixed energy parameter. This work is concerned with the more involved case of a positive energy parameter. For the solution of the linearized equation we derive smoothing and Strichartz estimates by combining new estimates for two different frequency regimes, extending our previous results for the negative energy case [18]. The low frequency regime, which our previous result was not able to treat, is studied in detail. At non-low frequencies we also derive improved smoothing estimates with a gain of almost one derivative. We then combine the linear estimates with a Fourier decomposition method and X^{s,b} spaces to obtain local well-posedness of NV at positive energy in H^s, s > 1/2. Our result implies, in particular, that at least for s > 1/2, NV does not change its behavior from semilinear to quasilinear as the energy changes sign, in contrast to the closely related Kadomtsev-Petviashvili equations. As a complement to our LWP results, we also provide some new explicit solutions of NV at zero energy, generalizations of the lump solutions, which exhibit new and nonstandard long-time behavior. In particular, these solutions blow up in infinite time in L^2.
A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.
Kim, Joo H; Roberts, Dustyn
2015-09-01
Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.
Characterization of a high-transmissivity zone by well test analysis: Steady state case
Tiedeman, Claire; Hsieh, Paul A.; Christian, Sarah B.
1995-01-01
A method is developed to analyze steady horizontal flow to a well pumped from a confined aquifer composed of two homogeneous zones with contrasting transmissivities. Zone 1 is laterally unbounded and encloses zone 2, which is elliptical in shape and is several orders of magnitude more transmissive than zone 1. The solution for head is obtained by the boundary integral equation method. Nonlinear least squares regression is used to estimate the model parameters, which include the transmissivity of zone 1, and the location, size, and orientation of zone 2. The method is applied to a hypothetical aquifer where zone 2 is a long and narrow zone of vertical fractures. Synthetic data are generated from three different well patterns, representing different areal coverage and proximity to the fracture zone. When zone 1 of the hypothetical aquifer is homogeneous, the method correctly estimates all model parameters. When zone 1 is a randomly heterogeneous transmissivity field, some parameter estimates, especially the length of zone 2, become highly uncertain. To reduce uncertainty, the pumped well should be close to the fracture zone, and surrounding observation wells should cover an area similar in dimension to the length of the fracture zone. Some prior knowledge of the fracture zone, such as that gained from a surface geophysical survey, would greatly aid in designing the well test.
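For illustration of the nonlinear least squares step described above, a minimal sketch follows (the forward model head_model() is a hypothetical stand-in, not the boundary integral solution, and all values are invented):

```python
# Fit zone geometry and transmissivity parameters to synthetic head observations.
import numpy as np
from scipy.optimize import least_squares

def head_model(p, xy):
    T1, cx, cy, a = p                        # zone-1 transmissivity, zone-2 center, size
    r = np.hypot(xy[:, 0] - cx, xy[:, 1] - cy)
    return -np.log(r + a) / T1               # placeholder drawdown model

xy_obs = np.random.default_rng(10).uniform(-100, 100, (20, 2))   # observation wells
p_true = np.array([2.0, 10.0, -5.0, 30.0])
h_obs = head_model(p_true, xy_obs) + 1e-3 * np.random.default_rng(11).normal(size=20)

fit = least_squares(lambda p: head_model(p, xy_obs) - h_obs,
                    x0=[1.0, 0.0, 0.0, 20.0])
print("estimated parameters:", fit.x)
```

As the abstract notes, how well such a fit is constrained depends strongly on the placement of the observation wells relative to the high-transmissivity zone.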
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system by comparing measurements of the system with a priori information represented by the system's model. The method of modeling a complex system is described, and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTM) running in parallel to test each class of faults. The second step is the fault diagnosis module, which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
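A minimal sketch of the detection-then-identification idea, with a hypothetical linear plant rather than the Space Shuttle Main Engine simulation (all names and values are invented):

```python
# Residual-threshold detection followed by on-line re-identification of the
# fault parameters (here: batch least squares as a stand-in for the
# real-time multivariable estimator).
import numpy as np

def expected_output(u, theta):
    """Nominal model: predicted outputs for input history u and parameters theta."""
    return u @ theta

rng = np.random.default_rng(3)
theta_nominal = np.array([1.0, 0.5])
u = rng.normal(size=(100, 2))                                  # input history
y = u @ np.array([1.0, 0.2]) + 0.01 * rng.normal(size=100)     # plant with a drifted parameter

residual = y - expected_output(u, theta_nominal)
threshold = 3 * 0.01                         # e.g., 3-sigma of nominal sensor noise
fault_detected = np.abs(residual).mean() > threshold

# On detection, identify the fault parameters; their deviation from nominal
# forms the fault pattern used for isolation and diagnosis.
theta_hat, *_ = np.linalg.lstsq(u, y, rcond=None)
print(fault_detected, theta_hat - theta_nominal)
```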
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimating return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and has the advantages of simplicity, speed of calculation, and convergence over conventional MCMC. NEVA also provides confidence intervals and uncertainty bounds for the estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization, and visualization, explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive for users across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
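To make the nonstationary setup concrete, here is a minimal sketch of a GEV fit with a linearly drifting location parameter, using maximum likelihood as a stand-in for NEVA's Bayesian DE-MC sampler (synthetic data; note scipy's sign convention c = -ξ for the GEV shape):

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(4)
t = np.arange(60.0)                                   # years
x = genextreme.rvs(c=-0.1, loc=100 + 0.5 * t, scale=10, random_state=rng)

def nll(p):
    mu0, mu1, sigma, xi = p
    if sigma <= 0:
        return np.inf
    return -genextreme.logpdf(x, c=-xi, loc=mu0 + mu1 * t, scale=sigma).sum()

fit = minimize(nll, x0=[x.mean(), 0.0, x.std(), 0.1], method="Nelder-Mead")
mu0, mu1, sigma, xi = fit.x
# Time-dependent 100-year return level under the fitted trend:
rl100 = genextreme.ppf(1 - 1 / 100, c=-xi, loc=mu0 + mu1 * t, scale=sigma)
print(fit.x, rl100[-1])
```

In NEVA the point estimate above is replaced by posterior samples, which is what yields the uncertainty bounds on return levels mentioned in the abstract.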
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., the turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. Generalized likelihood uncertainty estimation (GLUE) was applied to quantify the uncertainty in the fluxes, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
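For readers unfamiliar with GLUE, a minimal sketch follows (a toy one-parameter flux model, not the CSLM setup; the likelihood measure and the 10% behavioral cutoff are illustrative choices):

```python
# GLUE: sample the prior, score each parameter set by an informal likelihood,
# keep "behavioral" sets, and form likelihood-weighted uncertainty bounds.
import numpy as np

rng = np.random.default_rng(5)
obs = 150 + 10 * rng.normal(size=30)                # daily-averaged flux "observations"

def model(kd):
    return 100 + 170 * np.exp(-kd) + np.zeros_like(obs)   # toy response to K_d

kd_samples = rng.uniform(0.1, 3.0, 5000)            # prior sampling of K_d
likelihood = np.array([1.0 / np.mean((model(k) - obs) ** 2) for k in kd_samples])

behavioral = likelihood > np.quantile(likelihood, 0.9)   # keep the best 10%
weights = likelihood[behavioral] / likelihood[behavioral].sum()

sims = np.array([model(k).mean() for k in kd_samples[behavioral]])
order = np.argsort(sims)
cdf = np.cumsum(weights[order])
lo, hi = sims[order][np.searchsorted(cdf, [0.05, 0.95])]
print(f"90% GLUE bounds on mean flux: [{lo:.1f}, {hi:.1f}]")
```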
Exact Bayesian Inference for Phylogenetic Birth-Death Models.
Parag, K V; Pybus, O G
2018-04-26
Inferring the rates of change of a population from a reconstructed phylogeny of genetic sequences is a central problem in macro-evolutionary biology, epidemiology, and many other disciplines. A popular solution involves estimating the parameters of a birth-death process (BDP), which links the shape of the phylogeny to its birth and death rates. Modern BDP estimators rely on random Markov chain Monte Carlo (MCMC) sampling to infer these rates. Such methods, while powerful and scalable, cannot be guaranteed to converge, leading to results that may be hard to replicate or difficult to validate. We present a conceptually and computationally different parametric BDP inference approach using flexible and easy to implement Snyder filter (SF) algorithms. This method is deterministic so its results are provable, guaranteed, and reproducible. We validate the SF on constant rate BDPs and find that it solves BDP likelihoods known to produce robust estimates. We then examine more complex BDPs with time-varying rates. Our estimates compare well with a recently developed parametric MCMC inference method. Lastly, we perform model selection on an empirical Agamid species phylogeny, obtaining results consistent with the literature. The SF makes no approximations, beyond those required for parameter quantisation and numerical integration, and directly computes the posterior distribution of model parameters. It is a promising alternative inference algorithm that may serve either as a standalone Bayesian estimator or as a useful diagnostic reference for validating more involved MCMC strategies. The Snyder filter is implemented in Matlab and the time-varying BDP models are simulated in R. The source code and data are freely available at https://github.com/kpzoo/snyder-birth-death-code. kris.parag@zoo.ox.ac.uk. Supplementary material is available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Timmermans, Joris; Gomez-Dans, Jose; Lewis, Philip; Loew, Alexander; Schlenz, Florian
2017-04-01
The large amount of remote sensing data available nowadays offers huge potential for monitoring crop development, drought conditions, and water efficiency. This potential has not yet been realized, however, because algorithms for land surface parameter retrieval mostly use data from only a single sensor; consequently, products that combine low-level observations from different sensors are hard to find. The lack of synergistic retrieval arises because it is easier to focus on a single sensor type, footprint, and observation time than to find a way to compensate for the differences. Different sensor types (microwave/optical) require different radiative transfer (RT) models, and consistency between the models is needed for optical observations to have any impact on the retrieval of soil moisture by a microwave instrument. Varying spatial footprints require proper collocation of the data before one can scale between different resolutions. Owing to these problems, merging optical and microwave observations has not yet been attempted. The goal of this research was to investigate the potential of synergistically integrating optical and microwave RT models within the Earth Observation Land Data Assimilation System (EOLDAS) to derive biophysical parameters. This system uses a Bayesian data assimilation approach together with observation operators such as the PROSAIL model to estimate land surface parameters. To enable the system to integrate passive microwave radiation (from an ELBARA II passive microwave radiometer), the Community Microwave Emission Model (CMEM) RT model was integrated within the EOLDAS system. To quantify the potential, a variety of land surface parameters was retrieved, in particular variables that (a) impact only the optical RT (such as leaf water content and leaf dry matter), (b) impact only the microwave RT (such as soil moisture and soil temperature), and (c) impact both, namely the leaf area index (LAI). The results show high potential when optical and microwave data are used independently. Using RapidEye data alone with the SAIL RT model, LAI was estimated with R=0.68 (p=0.09), although estimates of leaf water content and dry matter showed lower correlations (|R|<0.4). Retrievals of soil temperature and leaf area index using only the (passive microwave) ELBARA II observations were good, with R=0.85 and R=0.79 respectively (P=0.0 for both); when focusing on dry spells (of at least 9 days) only, the results were R=0.73 (P=0.0), and R=0.89 and R=0.77 for the trend and anomalies respectively. Synergistic use of optical and microwave data also shows good potential: absolute errors improved (RMSE=1.22 and S=0.89), but correlations degraded (R=0.59 and P=0.04), as the sparse optical observations only improved part of the temporal domain. In general, however, the synergistic retrieval showed good potential; microwave data provide better information on the overall trend of the retrieved LAI owing to the regular acquisitions, while optical data provide better information on the absolute values of the LAI.
Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J. Andrew
2010-01-01
We develop a hierarchical capture–recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture–recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture–recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor due likely to the sparse data set. Unlike conventional inference methods which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.
Integration of manatee life-history data and population modeling
Eberhardt, L.L.; O'Shea, Thomas J.; Ackerman, B.B.; Percival, H. Franklin
1995-01-01
Aerial counts and the number of deaths have been a major focus of attention in attempts to understand the population status of the Florida manatee (Trichechus manatus latirostris). Uncertainties associated with these data have made interpretation difficult. However, knowledge of manatee life-history attributes has increased and now permits the development of a population model. We describe a provisional model based on the classical approach of Lotka. Parameters in the model are based on data from other papers in this volume and draw primarily on observations from the Crystal River, Blue Spring, and Atlantic Coast areas. The model estimates λ (the finite rate of increase) at each study area, and application of the delta method provides estimates of variance components and partial derivatives of λ with respect to key input parameters (reproduction, adult survival, and early survival). In some study areas, only approximations of some parameters are available. Estimates of λ and coefficients of variation (in parentheses) of manatees were 1.07 (0.009) in the Crystal River, 1.06 (0.012) at Blue Spring, and 1.01 (0.012) on the Atlantic Coast. Changing adult survival has a major effect on λ; early-age survival has the smallest effect. Bootstrap comparisons of population growth estimates from trend counts in the Crystal River and at Blue Spring and the reproduction and survival data suggest that the higher observed rates from counts are probably not due to chance. Bootstrapping for variance estimates based on reproduction and survival data from manatees at Blue Spring and in the Crystal River provided estimates of λ, adult survival, and rates of reproduction that were similar to those obtained by other methods. Our estimates are preliminary and suggest improvements for future data collection and analysis. However, the results support efforts to reduce mortality as the most effective means to promote the increased growth necessary for the eventual recovery of the Florida manatee population.
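For reference (standard Lotka demography, not notation from the paper): given age-specific survivorship l_x and fecundity m_x, the finite rate of increase λ = e^r solves the Euler-Lotka equation

```latex
\sum_{x} \lambda^{-x}\, l_x\, m_x \;=\; 1, \qquad \lambda = e^{r}
```

and the delta-method variance of λ combines the partial derivatives ∂λ/∂θ with the variances of the input parameters θ (reproduction, adult survival, early survival).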
Gaussian model for emission rate measurement of heated plumes using hyperspectral data
NASA Astrophysics Data System (ADS)
Grauer, Samuel J.; Conrad, Bradley M.; Miguel, Rodrigo B.; Daun, Kyle J.
2018-02-01
This paper presents a novel model for measuring the emission rate of a heated gas plume using hyperspectral data from an FTIR imaging spectrometer. The radiative transfer equation (RTE) is used to relate the spectral intensity of a pixel to presumed Gaussian distributions of volume fraction and temperature within the plume, along a line-of-sight that corresponds to the pixel, whereas previous techniques exclusively presume uniform distributions for these parameters. Estimates of volume fraction and temperature are converted to a column density by integrating the local molecular density along each path. Image correlation velocimetry is then employed on raw spectral intensity images to estimate the volume-weighted normal velocity at each pixel. Finally, integrating the product of velocity and column density along a control surface yields an estimate of the instantaneous emission rate. For validation, emission rate estimates were derived from synthetic hyperspectral images of a heated methane plume, generated using data from a large-eddy simulation. Calculating the RTE with Gaussian distributions of volume fraction and temperature, instead of uniform distributions, improved the accuracy of column density measurement by 14%. Moreover, the mean methane emission rate measured using our approach was within 4% of the ground truth. These results support the use of Gaussian distributions of thermodynamic properties in calculation of the RTE for optical gas diagnostics.
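As a sketch of the column-density step described above — integrating the local molecular density along one line of sight under presumed Gaussian profiles — the following toy calculation is illustrative (all plume values are invented, and the simple ideal-gas closure is an assumption):

```python
import numpy as np

def column_density(f_peak, mu, sigma, T, P=101325.0, n_points=400):
    """Integrate local molecular density n = f(s) * P / (k_B * T(s)) along the path."""
    k_B = 1.380649e-23                                         # Boltzmann constant, J/K
    s = np.linspace(mu - 5 * sigma, mu + 5 * sigma, n_points)  # path coordinate (m)
    f = f_peak * np.exp(-0.5 * ((s - mu) / sigma) ** 2)        # Gaussian volume fraction
    n = f * P / (k_B * T(s))                                   # ideal-gas number density
    return np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(s))         # trapezoidal rule, m^-2

# Hypothetical plume: 2% peak methane, 0.5 m wide, heated Gaussian temperature profile
T_profile = lambda s: 300.0 + 150.0 * np.exp(-0.5 * (s / 0.5) ** 2)
print(column_density(f_peak=0.02, mu=0.0, sigma=0.5, T=T_profile))
```

Multiplying such column densities by the velocimetry-derived normal velocity and integrating along the control surface then gives the emission rate, as the abstract describes.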
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required a smaller number of calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ, and the new LHS module is a valuable enhancement of the program.
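The variance-reduction advantage of LHS over plain MC is easy to demonstrate outside NESSUS. A minimal sketch with SciPy's quasi-Monte Carlo module (a toy response, not an SAE test case):

```python
# Compare the sampling variability of the mean estimate under MC and LHS.
import numpy as np
from scipy.stats import qmc, norm

def response(u):
    """Toy response of two standard-normal variables."""
    x = norm.ppf(u)                       # map uniform samples to standard normals
    return x[:, 0] ** 2 + 0.5 * x[:, 1]

n, reps = 256, 200
rng = np.random.default_rng(6)
mc_means, lhs_means = [], []
for i in range(reps):
    mc_means.append(response(rng.random((n, 2))).mean())
    sampler = qmc.LatinHypercube(d=2, seed=i)
    lhs_means.append(response(sampler.random(n)).mean())

print("MC  std of mean estimate:", np.std(mc_means))
print("LHS std of mean estimate:", np.std(lhs_means))   # typically smaller
```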
Crystallisation kinetics study in stabilisation treatment of sol-gel derived 45S5 bioglass
NASA Astrophysics Data System (ADS)
Prakrathi, S.; Matin, Mallikarjun; Kiran, P.; Manne, Bhaskar; Ramesh, M. R.
2018-04-01
Sol-gel derived bioglasses require a stabilisation heat treatment to decompose nitrates and to improve mechanical stability. While decomposing the nitrate phases, especially in sol-gel derived 45S5 bioglass, it is difficult to avoid crystallisation of silicate crystalline phases (Na2CaSi2O6, Na2Ca2Si3O9) due to the overlap of the nitrate decomposition and silicate crystallisation temperatures. Control of the amount of crystallinity in bioglasses is of utmost importance during stabilisation, as it affects the dissolution rates of bioglasses in body fluids; controlling and quantifying this crystallinity helps in engineering bioglasses for a specific application period. In this work, the synthesis of 45S5 bioglass through the sol-gel method is presented. Temperature- and time-dependent crystallisation kinetics were estimated using a quality parameter derived from X-ray diffraction (XRD) patterns of the bioglass during the stabilisation treatment. This quality parameter, termed IPB, is the ratio of the integral area of the peaks to the integral area of the background. It is proposed that IPB can be used as a quality parameter to assess crystallinity and to study crystallisation kinetics in bioglasses.
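For illustration, the IPB ratio can be computed once the pattern is split into peaks and background. A minimal sketch follows (the rolling-minimum baseline estimator and the synthetic pattern are placeholder assumptions, not the authors' procedure):

```python
import numpy as np

def ipb(intensity, window=51):
    """IPB = integral area of peaks / integral area of background.
    On a uniform 2-theta grid the step size cancels in the ratio."""
    pad = window // 2
    padded = np.pad(intensity, pad, mode="edge")
    background = np.array([padded[i:i + window].min() for i in range(len(intensity))])
    peaks = np.clip(intensity - background, 0.0, None)
    return peaks.sum() / background.sum()

tt = np.linspace(10, 60, 2000)
amorphous = 100 * np.exp(-0.5 * ((tt - 30) / 8) ** 2) + 20     # broad hump + baseline
crystalline = 400 * np.exp(-0.5 * ((tt - 33.8) / 0.15) ** 2) \
            + 250 * np.exp(-0.5 * ((tt - 48.5) / 0.15) ** 2)   # sharp Bragg peaks
print("IPB:", ipb(amorphous + crystalline))
```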
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-11-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
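To make the long-memory mechanism concrete, the fractional-differencing core of ARFIMA(p, d, q) is the (1 - B)^d filter, whose binomial-series weights can be computed recursively. A minimal sketch (illustrative only, not the authors' Bayesian machinery):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights w_k of (1 - B)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - B)^d to a series (expanding-window convolution)."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[: k + 1][::-1] @ x[: k + 1] for k in range(len(x))])

rng = np.random.default_rng(7)
x = 0.1 * np.cumsum(rng.normal(size=500))     # persistent toy series
y = frac_diff(x, d=0.4)                       # long-memory filter, 0 < d < 0.5
print(y[:5])
```

The memory parameter d is the quantity of interest for LM detection; short-memory ARMA terms are the nuisance parameters that the paper integrates over.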
Bounded diffusion impedance characterization of battery electrodes using fractional modeling
NASA Astrophysics Data System (ADS)
Gabano, Jean-Denis; Poinot, Thierry; Huard, Benoît
2017-06-01
This article deals with the ability of fractional modeling to describe the bounded diffusion behavior encountered in modern thin-film and nanoparticle lithium battery electrodes. Indeed, the diffusion impedance of such batteries behaves as a half-order integrator characterized by the Warburg impedance at high frequencies and becomes a classical integrator described by a capacitor at low frequencies. The transition between these two behaviors depends on the particle geometry; three geometries are considered in this paper: planar, cylindrical, and spherical. The proposed fractional representation is a gray-box model able to fit the low- and high-frequency diffusive impedance behaviors while optimizing the frequency-response transition. Identification results are provided using frequency-domain simulation data for the three electrochemical diffusion models based on particle geometry. Furthermore, knowing this geometry allows the diffusion ionic resistance and time constant to be estimated using the relationships linking these physical parameters to the structural fractional model parameters. Finally, other simulations using Randles impedance models, including the charge transfer impedance and the external resistance, demonstrate the interest of fractional modeling for properly identifying not only the charge transfer impedance but also the physical diffusion parameters whatever the particle geometry.
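For reference, the planar bounded (reflective-boundary) diffusion impedance commonly used in this context interpolates exactly between the two limits described above (a sketch, with R_d the diffusion resistance and τ_d the diffusion time constant):

```latex
Z_d(j\omega) \;=\; R_d\,\frac{\coth\!\sqrt{j\omega\tau_d}}{\sqrt{j\omega\tau_d}}
\;\longrightarrow\;
\begin{cases}
\dfrac{R_d}{\sqrt{j\omega\tau_d}}, & \omega\tau_d \gg 1 \quad\text{(Warburg, half-order integrator)}\\[6pt]
\dfrac{R_d}{3} \;+\; \dfrac{1}{j\omega\,(\tau_d/R_d)}, & \omega\tau_d \ll 1 \quad\text{(capacitor, } C_d = \tau_d/R_d\text{)}
\end{cases}
```

The cylindrical and spherical geometries replace the coth kernel with the corresponding Bessel-function and spherical forms, shifting the transition region that the gray-box model optimizes.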
NASA Astrophysics Data System (ADS)
Boden, A. F.; Lane, B. F.; Creech-Eakman, M. J.; Queloz, D.; Koresko, C. D.
2000-05-01
The Palomar Testbed Interferometer (PTI) is a long-baseline near-infrared interferometer located at Palomar Observatory. For the past several years we have had an ongoing program of resolving and reconstructing the visual and physical orbits of spectroscopic binary stars with PTI, with the goal of obtaining precise dynamical mass estimates and other physical parameters. We will present a number of new visual and physical orbit determinations derived from integrated reductions of PTI visibility and archival and new spectroscopic radial velocity data. The systems for which we will discuss our orbit models are: iota Pegasi (HD 210027), 64 Psc (HD 4676), 12 Boo (HD 123999), 75 Cnc (HD 78418), 47 And (HD 8374), HD 205539, BY Draconis (HDE 234677), and 3 Boo (HD 120064). All of these systems are double-lined binary systems (SB2), and integrated astrometric/radial velocity orbit modeling provides precise fundamental parameters (mass, luminosity) and system distance determinations comparable with Hipparcos precisions.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. OSS experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
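The core mechanism — augmenting the state vector with uncertain parameters so that observation-state covariances update both — can be sketched in a few lines. The following toy example follows the serial square-root update of Whitaker and Hamill (2002); the scalar system and all values are invented, not the cited supercell experiment:

```python
# Minimal ensemble square-root filter (EnSRF) for joint state-parameter estimation.
import numpy as np

rng = np.random.default_rng(8)
n_ens = 40
# Augmented ensemble rows: [state x, parameter theta]; theta starts biased (true ~10)
ens = np.stack([rng.normal(1.0, 0.5, n_ens),
                rng.normal(8.0, 1.0, n_ens)])

def H(e):                    # observation operator couples state and parameter
    return e[0] * e[1]

y_obs, r = 10.0, 0.5 ** 2    # observation and its error variance

for _ in range(20):          # repeated analysis cycles (illustrative persistence)
    hx = H(ens)
    hx_mean, x_mean = hx.mean(), ens.mean(axis=1)
    P_xh = ((ens - x_mean[:, None]) @ (hx - hx_mean)) / (n_ens - 1)
    s = (hx - hx_mean).var(ddof=1) + r           # innovation variance HPH^T + R
    K = P_xh / s                                 # Kalman gain for the mean
    alpha = 1.0 / (1.0 + np.sqrt(r / s))         # square-root factor for perturbations
    x_mean = x_mean + K * (y_obs - hx_mean)
    pert = (ens - ens.mean(axis=1)[:, None]) - alpha * np.outer(K, hx - hx_mean)
    ens = x_mean[:, None] + pert

print("state, parameter:", ens.mean(axis=1))     # their product is pulled toward y_obs
```

The toy example also hints at the identifiability problem the abstract reports: only the product x·θ is observed, so multiple (x, θ) pairs fit equally well.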
Population demographics, survival, and reproduction: Alaska sea otter research
Monson, Daniel H.; Bodkin, James L.; Doak, D.F.; Estes, James A.; Tinker, M.T.; Siniff, D.B.; Maldini, Daniela; Calkins, Donald; Atkinson, Shannon; Meehan, Rosa
2004-01-01
The fundamental force behind population change is the balance between age-specific survival and reproductive rates. Thus, understanding population demographics is crucial when trying to interpret trends in population change over time. For many species, demographic rates change as the population's status (i.e., relative to prey resources) varies. Indices of body condition indicative of individual energy reserves can be a useful gauge of population status. Integrated studies designed to measure (1) population trends, (2) current population status, and (3) demographic rates will provide the most complete picture of the factors driving observed population changes. In particular, estimates of age-specific survival and reproduction in conjunction with measures of population change can be integrated into population matrix models useful in explaining observed trends. We focus here on the methods used to measure demographic rates in sea otters, and note the importance of comparable methods between studies. Next, we review the current knowledge of the influence of population status on demographic parameters. We end with examples of the power of matrix modeling as a tool to integrate various types of demographic information for detecting otherwise hard-to-detect changes in demographic parameters.
Penalized differential pathway analysis of integrative oncogenomics studies.
van Wieringen, Wessel N; van de Wiel, Mark A
2014-04-01
Through integration of genomic data from multiple sources, we may obtain a more accurate and complete picture of the molecular mechanisms underlying tumorigenesis. We discuss the integration of DNA copy number and mRNA gene expression data from an observational integrative genomics study involving cancer patients. The two molecular levels involved are linked through the central dogma of molecular biology. DNA copy number aberrations abound in the cancer cell. Here we investigate how these aberrations affect gene expression levels within a pathway using observational integrative genomics data of cancer patients. In particular, we aim to identify differential edges between regulatory networks of two groups involving these molecular levels. Motivated by the rate equations, the regulatory mechanism between DNA copy number aberrations and gene expression levels within a pathway is modeled by a simultaneous-equations model, for the one- and two-group case. The latter facilitates the identification of differential interactions between the two groups. Model parameters are estimated by penalized least squares using the lasso (L1) penalty to obtain a sparse pathway topology. Simulations show that the inclusion of DNA copy number data benefits the discovery of gene-gene interactions. In addition, the simulations reveal that cis-effects tend to be over-estimated in a univariate (single gene) analysis. In the application to real data from integrative oncogenomic studies we show that inclusion of prior information on the regulatory network architecture benefits the reproducibility of all edges. Furthermore, analyses of the TP53 and TGFβ signaling pathways between ER+ and ER- samples from an integrative genomics breast cancer study identify reproducible differential regulatory patterns that corroborate existing literature.
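To illustrate the penalized-estimation idea (a sketch only, with invented data and scikit-learn's Lasso standing in for the paper's estimator): each gene's expression is regressed on its own copy number (cis-effect) and the other genes' expression (trans-effects), with an L1 penalty enforcing a sparse topology.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
n_samples, n_genes = 80, 12
cn = rng.normal(size=(n_samples, n_genes))              # DNA copy number
expr = 0.8 * cn + rng.normal(scale=0.5, size=cn.shape)  # expression with cis-effects
expr[:, 3] += 0.6 * expr[:, 7]                          # one trans edge: gene 7 -> gene 3

adjacency = np.zeros((n_genes, n_genes))
for g in range(n_genes):
    others = [j for j in range(n_genes) if j != g]
    X = np.column_stack([cn[:, [g]], expr[:, others]])  # cis CN + trans expression
    fit = Lasso(alpha=0.1).fit(X, expr[:, g])
    adjacency[others, g] = fit.coef_[1:]                # recovered gene-gene edges

print("strongest recovered edge:",
      np.unravel_index(np.abs(adjacency).argmax(), adjacency.shape))
```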
Rapid impact testing for quantitative assessment of large populations of bridges
NASA Astrophysics Data System (ADS)
Zhou, Yun; Prader, John; DeVitis, John; Deal, Adrienne; Zhang, Jian; Moon, Franklin; Aktan, A. Emin
2011-04-01
Although the widely acknowledged shortcomings of visual inspection have fueled significant advances in the areas of non-destructive evaluation and structural health monitoring (SHM) over the last several decades, the actual practice of bridge assessment has remained largely unchanged. The authors believe the lack of adoption, especially of SHM technologies, is related to the 'single structure' scenarios that drive most research. To overcome this, the authors have developed a concept for a rapid single-input, multiple-output (SIMO) impact testing device that will be capable of capturing modal parameters and estimating flexibility/deflection basins of common highway bridges during routine inspections. The device is composed of a trailer-mounted impact source (capable of delivering a 50 kip impact) and retractable sensor arms, and will be controlled by automated data acquisition, processing, and modal parameter estimation software. The research presented in this paper covers (a) the theoretical basis for SISO, SIMO and MIMO impact testing to estimate flexibility, (b) proof-of-concept numerical studies using a finite element model, and (c) a pilot implementation on an operating highway bridge. Results indicate that the proposed approach can estimate modal flexibility within a few percent of static flexibility; however, the estimated modal flexibility matrix is only reliable for the substructures associated with the various SIMO tests. To overcome this shortcoming, a modal 'stitching' approach for substructure integration to estimate the full eigenvector matrix is developed, and preliminary results of these methods are also presented.
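The modal flexibility targeted by the device can be assembled from identified modal parameters as a truncated sum F ~ sum_r phi_r phi_r^T / omega_r^2, assuming mass-normalized mode shapes. A minimal sketch with illustrative values, not the paper's test data:

    import numpy as np

    def modal_flexibility(phi, omega):
        """phi: (n_dof, n_modes) mass-normalized mode shapes;
        omega: (n_modes,) natural frequencies in rad/s.
        Returns the truncated flexibility matrix (n_dof x n_dof)."""
        return (phi / omega**2) @ phi.T

    # two hypothetical modes identified on a 4-DOF bridge deck model
    phi = np.array([[0.3, 0.6],
                    [0.6, 0.3],
                    [0.6, -0.3],
                    [0.3, -0.6]])
    omega = 2 * np.pi * np.array([12.0, 35.0])
    F = modal_flexibility(phi, omega)
    deflection = F @ np.array([0.0, 1.0, 0.0, 0.0])  # unit load at DOF 1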
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of major parameters, bringing robustness and safeguarding the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (~98% mean F-measure) irrespective of the large variations of filter parameters and noise levels. PMID:25020042
NASA Astrophysics Data System (ADS)
Nikolaeva, E. A.; Bikmaev, I. F.; Shimansky, V. V.; Sakhibullin, N. A.
2017-06-01
We investigate the parameters of two high-mass X-ray binary systems, IGR J17544-2619 and IGR J21343+4738, discovered by the INTEGRAL space observatory, using optical data from the Russian-Turkish Telescope (RTT-150). Long-term optical observations of the X-ray binary systems IGR J17544-2619 and IGR J21343+4738 were carried out in 2007-2015. Based on the RTT-150 data, we estimated the orbital periods of these systems. We have modeled the profiles of the He I 6678 Å line in the spectra of the optical stars and obtained the parameters of their atmospheres.
New selection effect in statistical investigations of supernova remnants
NASA Astrophysics Data System (ADS)
Allakhverdiev, A. O.; Guseinov, O. Kh.; Kasumov, F. K.
1986-01-01
The influence of H II regions on the parameters of supernova remnants (SNRs) is investigated. It is shown that the projection of such regions onto SNRs leads to: a) local changes in the morphological structure of young shell-type SNRs, and b) considerable distortions of the integral parameters of evolved shell-type SNRs (with D > 10 pc) and plerions, up to their complete undetectability against the background of classical and giant H II regions. These factors give rise to a new selection effect, in which the real structure of the interstellar medium places additional limitations on statistical investigations of SNRs. The influence of this effect on the statistical completeness of objects is estimated.
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-05-14
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
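The Sorkin parameter referred to here measures any genuine three-path contribution: with P_ABC the detection probability with all three slits open, P_AB etc. for slit pairs and P_A etc. for single slits, kappa = P_ABC - P_AB - P_AC - P_BC + P_A + P_B + P_C, which vanishes when path amplitudes add exactly. A minimal numeric sketch with idealized amplitudes (not the authors' analytic formula, which captures the boundary corrections that make kappa nonzero):

    import numpy as np

    def intensity(*amps):
        """Detection probability from coherently summed path amplitudes."""
        return abs(sum(amps)) ** 2

    # hypothetical complex amplitudes reaching one detector position
    a, b, c = 1.0, 0.8 * np.exp(1j * 0.5), 0.6 * np.exp(1j * 1.1)

    kappa = (intensity(a, b, c)
             - intensity(a, b) - intensity(a, c) - intensity(b, c)
             + intensity(a) + intensity(b) + intensity(c))
    print(kappa)   # ~0 when naive superposition holds exactly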
Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach
NASA Astrophysics Data System (ADS)
Reznichenko, A. V.; Terekhov, I. S.
2018-04-01
We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and to the channel capacity estimates at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the previously calculated capacity of a nondispersive nonlinear optical fiber channel in the intermediate power region. For the small-dispersion case we also find analytical expressions for simple correlators of the output signals in our noisy channel.
NASA Astrophysics Data System (ADS)
Levshenko, V. T.; Grigoryan, A. G.
2018-03-01
Using the Roslavl'skii, Grafskii, and Platava-Varvarinskii faults as examples, the possibility is demonstrated of mapping geological objects with a measurement algorithm that successively measures the spectra of microseisms at the points of a measurement network with movable instruments and statistically accumulates the ratios of the power spectra of the amplitudes. Based on this technique, the positions of these seismically active faults are determined from integrated profile observations of the parameters of the microseismic and radon fields. The refined positions of the faults can be used in estimating the seismic impacts on critical objects in the vicinity of these faults.
Optimal accelerometer placement on a robot arm for pose estimation
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.
2017-05-01
The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with angle joint encoders that are used to estimate the position and orientation (or the pose) of the end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every, or any, joint. In this paper we study the use of inertial sensors, in particular accelerometers placed on the robot, to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement, and Signal-to-Noise Ratio (SNR) are included in our study of their effects on robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with an increasing number of accelerometers, whereas for an SNR model that scales inversely with the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.
NASA Technical Reports Server (NTRS)
1979-01-01
A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
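A minimal sketch of the estimation idea: minimize squared output residuals (equivalent to the negative log-likelihood under Gaussian measurement noise) over stability and control derivatives with Levenberg-Marquardt. The one-state roll model and the use of scipy are illustrative assumptions, not the NLSCIDNT code:

    import numpy as np
    from scipy.optimize import least_squares

    dt = 0.02
    t = np.arange(0.0, 5.0, dt)
    delta_a = np.sin(1.5 * t)                   # aileron input history

    def simulate(params):
        """One-state roll dynamics: p_dot = Lp * p + Lda * delta_a."""
        Lp, Lda = params
        p = np.zeros_like(t)
        for k in range(len(t) - 1):             # forward-Euler integration
            p[k + 1] = p[k] + dt * (Lp * p[k] + Lda * delta_a[k])
        return p

    truth = (-2.0, 4.0)
    meas = simulate(truth) + np.random.normal(0.0, 0.02, len(t))

    # method="lm" selects Levenberg-Marquardt for the residual fit
    fit = least_squares(lambda th: simulate(th) - meas,
                        x0=(-1.0, 1.0), method="lm")
    print(fit.x)   # estimated (Lp, Lda)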
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrate that our method can generate visually credible AR scenes with consistent shadows rendered from the recovered illumination.
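A minimal sketch of the Sun-position step using a common low-accuracy approximation (Cooper's declination formula plus the hour angle); this is textbook astronomy offered as one way the calculation can be done, not the paper's implementation:

    import numpy as np

    def sun_position(lat_deg, day_of_year, solar_hour):
        """Approximate solar elevation and azimuth in degrees.
        solar_hour is local solar time in hours (12 = solar noon)."""
        lat = np.radians(lat_deg)
        decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day_of_year) / 365)
        h = np.radians(15.0 * (solar_hour - 12.0))     # hour angle
        elev = np.arcsin(np.sin(lat) * np.sin(decl)
                         + np.cos(lat) * np.cos(decl) * np.cos(h))
        az = np.arctan2(-np.cos(decl) * np.sin(h),
                        np.cos(lat) * np.sin(decl)
                        - np.sin(lat) * np.cos(decl) * np.cos(h))
        return np.degrees(elev), np.degrees(az) % 360.0   # azimuth from north

    print(sun_position(41.0, 172, 15.0))   # mid-June afternoon at 41 deg N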
NASA Astrophysics Data System (ADS)
Lee, Michael; Freed, Adrian; Wessel, David
1992-08-01
In this report we present our tools for prototyping adaptive user interfaces in the context of real-time musical instrument control. Characteristic of most human communication is the simultaneous use of classified events and estimated parameters. We have integrated a neural network object into the MAX language to explore adaptive user interfaces that consider these facets of human communication. By placing the neural processing in the context of a flexible real-time musical programming environment, we can rapidly prototype experiments on applications of adaptive interfaces and learning systems to musical problems. We have trained networks to recognize gestures from a Mathews radio baton, a Nintendo Power Glove, and MIDI keyboard gestural input devices. In one experiment, a network successfully extracted classification and attribute data from gestural contours transduced by a continuous space controller, suggesting their application in the interpretation of conducting gestures and musical instrument control. We discuss network architectures, low-level features extracted for the networks to operate on, training methods, and musical applications of adaptive techniques.
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to some of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yeates, E.; Dreaper, G.; Afshari, S.; Tavakoly, A. A.
2017-12-01
Over the past six fiscal years, the United States Army Corps of Engineers (USACE) has contracted an average of about a billion dollars per year for navigation channel dredging. To execute these funds effectively, USACE Districts must determine which navigation channels need to be dredged in a given year. Improving this prioritization process results in more efficient waterway maintenance. This study uses the Streamflow Prediction Tool, a runoff routing model based on global weather forecast ensembles, to estimate dredged volumes. This study establishes regional linear relationships between cumulative flow and dredged volumes over a long-term simulation covering 30 years (1985-2015), using drainage area and shoaling parameters. The study framework integrates the National Hydrography Dataset (NHDPlus Dataset) with parameters from the Corps Shoaling Analysis Tool (CSAT) and dredging record data from USACE District records. Results in the test cases of the Houston Ship Channel and the Sabine and Port Arthur Harbor waterways in Texas indicate positive correlation between the simulated streamflows and actual dredging records.
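A minimal sketch of the kind of regional relationship described: an ordinary least-squares fit of annual dredged volume against cumulative simulated streamflow. The numbers are synthetic placeholders, not USACE records:

    import numpy as np

    # synthetic annual records for one hypothetical waterway
    cumulative_flow = np.array([4.1, 5.3, 3.8, 6.0, 5.1, 4.6])   # km^3/yr
    dredged_volume = np.array([1.9, 2.6, 1.7, 3.0, 2.4, 2.1])    # Mm^3/yr

    # linear model: volume = slope * flow + intercept
    slope, intercept = np.polyfit(cumulative_flow, dredged_volume, 1)
    predicted = slope * cumulative_flow + intercept
    r = np.corrcoef(dredged_volume, predicted)[0, 1]
    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.2f}")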
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state of the science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockhold, Mark L.; Zhang, Z. F.; Meyer, Philip D.
2015-02-28
Current plans for treatment and disposal of immobilized low-activity waste (ILAW) from Hanford's underground waste storage tanks include vitrification and storage of the glass waste form in a near-surface disposal facility. This Integrated Disposal Facility (IDF) is located in the 200 East Area of the Hanford Central Plateau. Performance assessment (PA) of the IDF requires numerical modeling of subsurface flow and reactive transport processes over very long periods (thousands of years). The models used to predict facility performance require parameters describing various physical, hydraulic, and transport properties. This report provides updated estimates of physical, hydraulic, and transport properties and parameters for both near- and far-field materials, intended for use in future IDF PA modeling efforts. Previous work on physical and hydraulic property characterization for earlier IDF PA analyses is reviewed and summarized. For near-field materials, portions of this document and parameter estimates are taken from an earlier data package. For far-field materials, a critical review is provided of methodologies used in previous data packages. Alternative methods are described and associated parameters are provided.
An optimal state estimation model of sensory integration in human postural balance
NASA Astrophysics Data System (ADS)
Kuo, Arthur D.
2005-09-01
We propose a model for human postural balance, combining state feedback control with optimal state estimation. State estimation uses an internal model of body and sensor dynamics to process sensor information and determine body orientation. Three sensory modalities are modeled: joint proprioception, vestibular organs in the inner ear, and vision. These are mated with a two degree-of-freedom model of body dynamics in the sagittal plane. Linear quadratic optimal control is used to design state feedback and estimation gains. Nine free parameters define the control objective and the signal-to-noise ratios of the sensors. The model predicts statistical properties of human sway in terms of covariance of ankle and hip motion. These predictions are compared with normal human responses to alterations in sensory conditions. With a single parameter set, the model successfully reproduces the general nature of postural motion as a function of sensory environment. Parameter variations reveal that the model is highly robust under normal sensory conditions, but not when two or more sensors are inaccurate. This behavior is similar to that of normal human subjects. We propose that age-related sensory changes may be modeled with decreased signal-to-noise ratios, and compare the model's behavior with degraded sensors against experimental measurements from older adults. We also examine removal of the model's vestibular sense, which leads to instability similar to that observed in bilateral vestibular loss subjects. The model may be useful for predicting which sensors are most critical for balance, and how much they can deteriorate before posture becomes unstable.
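A minimal sketch of the estimation ingredient: a Kalman filter fusing several noisy sensor channels through an internal linear model, weighting each channel by its assumed signal-to-noise ratio. The one-segment "orientation" system and noise variances are illustrative stand-ins for the paper's two-degree-of-freedom sagittal model:

    import numpy as np

    # internal model x_{k+1} = A x_k + w; sensors y = C x + v
    A = np.array([[1.0, 0.01],
                  [0.0, 1.0]])                 # [angle, angular velocity]
    C = np.array([[1.0, 0.0],                  # 'proprioception': angle
                  [0.0, 1.0],                  # 'vestibular': velocity
                  [1.0, 0.0]])                 # 'vision': angle, noisier
    Q = 1e-5 * np.eye(2)
    R = np.diag([1e-4, 1e-3, 1e-2])            # per-sensor noise variances

    def kalman_step(x, P, y):
        x, P = A @ x, A @ P @ A.T + Q          # predict with internal model
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)         # gain: SNR-weighted fusion
        x = x + K @ (y - C @ x)                # correct with innovations
        return x, (np.eye(2) - K @ C) @ P

    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, np.array([0.02, -0.01, 0.05]))

Raising an entry of R mimics a degraded sensor; the filter then leans on the remaining channels, which is the mechanism behind the robustness pattern reported above.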
Modeling nonlinear responses of DOC transport in boreal catchments in Sweden
NASA Astrophysics Data System (ADS)
Kasurinen, Ville; Alfredsen, Knut; Ojala, Anne; Pumpanen, Jukka; Weyhenmeyer, Gesa A.; Futter, Martyn N.; Laudon, Hjalmar; Berninger, Frank
2016-07-01
Stream water dissolved organic carbon (DOC) concentrations display high spatial and temporal variation in boreal catchments. Understanding and predicting these patterns is a challenge with great implications for water quality projections and carbon balance estimates. Although several biogeochemical models have been used to estimate stream water DOC dynamics, model biases are common during both rain- and snowmelt-driven events. The parsimonious DOC model K-DOC, with 10 calibrated parameters, uses a nonlinear discharge-catchment water storage relationship including soil temperature dependencies of DOC release and consumption. K-DOC was used to estimate stream water DOC concentrations over 5 years for eighteen nested boreal catchments with a total area of 68 km2 (individual catchments varying from 0.04 to 67.9 km2). The model successfully simulated DOC concentrations during base flow conditions as well as hydrological events, in catchments dominated by organic and mineral soils, reaching NSEs from 0.46 to 0.76. Our semimechanistic model was parsimonious enough to have all parameters estimated using statistical methods. We did not find any clear differences between forest- and mire-dominated catchments that could be explained by soil type or tree species composition. However, parameters controlling slow release and consumption of DOC from soil water behaved differently for small headwater catchments (less than 2 km2) than for those that integrate larger areas of different ecosystem types (10-68 km2). Our results emphasize that it is important to account for nonlinear dependencies on both soil temperature and catchment water storage when simulating DOC dynamics of boreal catchments.
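The Nash-Sutcliffe efficiency used to score the simulations is NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); a small sketch with made-up DOC series:

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect; 0 means no better
        than predicting the observed mean."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    doc_obs = [12.1, 14.3, 25.8, 19.2, 15.0]   # mg C/L, hypothetical
    doc_sim = [11.5, 15.0, 23.9, 20.1, 14.2]
    print(f"NSE = {nse(doc_obs, doc_sim):.2f}")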
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
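A minimal sketch of the q-method step described: build Davenport's K matrix from weighted vector observations and take the eigenvector of the largest eigenvalue as the optimal attitude quaternion. The conventions (vector part first, scalar last) and test vectors are illustrative assumptions:

    import numpy as np

    def q_method(body_vecs, ref_vecs, weights):
        """Davenport q-method for Wahba's problem.
        body_vecs, ref_vecs: (n, 3) unit vectors. Returns the quaternion
        (x, y, z, w) of the optimal rotation taking ref to body frame."""
        B = sum(w * np.outer(b, r)
                for w, b, r in zip(weights, body_vecs, ref_vecs))
        S, sigma = B + B.T, np.trace(B)
        z = np.array([B[1, 2] - B[2, 1],
                      B[2, 0] - B[0, 2],
                      B[0, 1] - B[1, 0]])
        K = np.zeros((4, 4))
        K[:3, :3] = S - sigma * np.eye(3)
        K[:3, 3] = K[3, :3] = z
        K[3, 3] = sigma
        vals, vecs = np.linalg.eigh(K)
        return vecs[:, np.argmax(vals)]   # maximizes the Wahba gain q^T K q

    # two noise-free observations consistent with a 90-degree yaw
    refs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    bodys = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
    print(q_method(bodys, refs, weights=[1.0, 1.0]))

As the abstract notes, no a priori attitude is needed here; only the additional parameters (e.g., sensor biases) would require an initial guess in the full iterative scheme.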
A model for estimating the impact of changes in children's vaccines.
Simpson, K N; Biddle, A K; Rabinovich, N R
1995-12-01
To assist in strategic planning for the improvement of vaccines and vaccine programs, an economic model was developed and tested that estimates the potential impact of vaccine innovations on health outcomes and costs associated with vaccination and illness. A multistep, iterative process of data extraction/integration was used to develop the model and the scenarios. Parameter replication, sensitivity analysis, and expert review were used to validate the model. The greatest impact on the improvement of health is expected to result from the production of less reactogenic vaccines that require fewer inoculations for immunity. The greatest economic impact is predicted from improvements that decrease the number of inoculations required. Scenario analysis may be useful for integrating health outcomes and economic data into decision making. For childhood infections, this analysis indicates that large cost savings can be achieved in the future if we can improve vaccine efficacy so that the number of required inoculations is reduced. Such an improvement represents a large potential "payback" for the United States and might benefit other countries.
Chabiniok, Radomir; Wang, Vicky Y; Hadjicharalambous, Myrianthi; Asner, Liya; Lee, Jack; Sermesant, Maxime; Kuhl, Ellen; Young, Alistair A; Moireau, Philippe; Nash, Martyn P; Chapelle, Dominique; Nordsletten, David A
2016-04-06
With heart and cardiovascular diseases continually challenging healthcare systems worldwide, translating basic research on cardiac (patho)physiology into clinical care is essential. Exacerbating this already extensive challenge is the complexity of the heart, relying on its hierarchical structure and function to maintain cardiovascular flow. Computational modelling has been proposed and actively pursued as a tool for accelerating research and translation. Allowing exploration of the relationships between physics, multiscale mechanisms and function, computational modelling provides a platform for improving our understanding of the heart. Further integration of experimental and clinical data through data assimilation and parameter estimation techniques is bringing computational models closer to use in routine clinical practice. This article reviews developments in computational cardiac modelling and how their integration with medical imaging data is providing new pathways for translational cardiac modelling.
Kamoi, Shun; Pretty, Christopher; Balmer, Joel; Davidson, Shaun; Pironet, Antoine; Desaive, Thomas; Shaw, Geoffrey M; Chase, J Geoffrey
2017-04-24
Pressure contour analysis is commonly used to estimate cardiac performance for patients suffering from cardiovascular dysfunction in the intensive care unit. However, the existing techniques for continuous estimation of stroke volume (SV) from pressure measurement can be unreliable during hemodynamic instability, which is inevitable for patients requiring significant treatment. For this reason, pressure contour methods must be improved to capture changes in vascular properties and thus provide accurate conversion from pressure to flow. This paper presents a novel pressure contour method utilizing pulse wave velocity (PWV) measurement to capture vascular properties. A three-element Windkessel model combined with the reservoir-wave concept is used to decompose the pressure contour into components related to storage and flow. The model parameters are identified beat-to-beat from the water-hammer equation using measured PWV, the wave component of the pressure, and an estimate of subject-specific aortic dimension. SV is then calculated by converting pressure to flow using the identified model parameters. The accuracy of this novel method is investigated using data from porcine experiments (N = 4 Pietrain pigs, 20-24.5 kg), where hemodynamic properties were significantly altered using dobutamine, fluid administration, and mechanical ventilation. In the experiment, left ventricular volume was measured using an admittance catheter, and aortic pressure waveforms were measured at two locations, the aortic arch and abdominal aorta. Bland-Altman analysis comparing gold-standard SV measured by the admittance catheter and estimated SV from the novel method showed average limits of agreement of ±26% across significant hemodynamic alterations. This result shows the method is capable of estimating clinically acceptable absolute SV values according to the criteria of Critchley and Critchley. The novel pressure contour method presented can accurately estimate and track SV even when hemodynamic properties are significantly altered. Integrating PWV measurements into pressure contour analysis improves identification of beat-to-beat changes in Windkessel model parameters and thus provides an accurate estimate of blood flow from the measured pressure contour. The method has great potential for overcoming weaknesses associated with current pressure contour methods for estimating SV.
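The water-hammer relation at the core of the identification links wave pressure to a velocity change via dP = rho * PWV * dU. A minimal sketch of that conversion step with an assumed aortic diameter; the values are placeholders, not the authors' full beat-to-beat identification:

    import numpy as np

    RHO = 1050.0                                       # blood density, kg/m^3

    def excess_flow(p_wave_mmHg, pwv, aortic_diameter_m):
        """Water-hammer conversion: dU = dP / (rho * PWV); Q = A * U."""
        p_wave = np.asarray(p_wave_mmHg) * 133.322     # mmHg -> Pa
        u = p_wave / (RHO * pwv)                       # excess velocity, m/s
        area = np.pi * (aortic_diameter_m / 2.0) ** 2
        return area * u                                # flow, m^3/s

    p_wave = [0.0, 18.0, 32.0, 21.0, 6.0]   # hypothetical wave pressure, mmHg
    q = excess_flow(p_wave, pwv=6.5, aortic_diameter_m=0.02)
    # integrating q over the ejection period would yield a stroke-volume estimate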
NASA Astrophysics Data System (ADS)
Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Arunkumar, A.
2013-09-01
This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov-Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results.
Cam, E.; Sauer, J.R.; Nichols, J.D.; Hines, J.E.; Flather, C.H.
2000-01-01
Species richness of local communities is a state variable commonly used in community ecology and conservation biology. Investigation of spatial and temporal variations in richness and identification of factors associated with these variations form a basis for specifying management plans, evaluating these plans, and for testing hypotheses of theoretical interest. However, estimation of species richness is not trivial: species can be missed by investigators during sampling sessions. Sampling artifacts can lead to erroneous conclusions on spatial and temporal variation in species richness. Here we use data from the North American Breeding Bird Survey to estimate parameters describing the state of bird communities in the Mid-Atlantic Integrated Assessment (MAIA) region: species richness, extinction probability, turnover and relative species richness. We use a recently developed approach to estimation of species richness and related parameters that does not require the assumption that all the species are detected during sampling efforts. The information presented here is intended to visualize the state of bird communities in the MAIA region. We provide information for 1975 and 1990, and quantify the changes between these years. We summarized and mapped the community attributes at a scale of management interest (watershed units).
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal parameter estimates range from about 0.01 to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series for all of the components of a dynamical system to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
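A minimal sketch of the component-wise idea: integrate the Rössler system, then estimate one parameter from a single component's rate equation using the known series for all components. A brute-force scan stands in for the evolutionary algorithm, and all settings are illustrative:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rossler(t, s, a, b, c):
        x, y, z = s
        return [-y - z, x + a * y, b + z * (x - c)]

    true = dict(a=0.2, b=0.2, c=5.7)
    t_eval = np.linspace(0.0, 20.0, 400)
    sol = solve_ivp(rossler, (0.0, 20.0), [1.0, 1.0, 1.0],
                    args=tuple(true.values()), t_eval=t_eval)
    x_obs, y_obs, z_obs = sol.y

    # Estimate 'a' alone from the y-equation dy/dt = x + a*y, using the
    # known series for every component, as the stage-by-stage scheme does.
    dydt = np.gradient(y_obs, t_eval)
    a_grid = np.linspace(0.05, 0.5, 200)
    costs = [np.sum((dydt - (x_obs + a * y_obs)) ** 2) for a in a_grid]
    print("estimated a =", a_grid[int(np.argmin(costs))])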
Aquifer response to stream-stage and recharge variations. II. Convolution method and applications
Barlow, P.M.; DeSimone, L.A.; Moench, A.F.
2000-01-01
In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped) parameter that accounts not only for the resistance of flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.
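A minimal sketch of the convolution step: the head response is the discrete convolution of the step-response increments (the impulse response) with the stream-stage history. The exponential step response is a synthetic placeholder, not one of the programs' analytical functions:

    import numpy as np

    dt = 1.0                                      # day
    tau = np.arange(0.0, 60.0, dt)
    step_response = 1.0 - np.exp(-tau / 10.0)     # hypothetical unit step response
    impulse_response = np.diff(step_response, prepend=0.0) / dt

    stage = np.zeros(120)
    stage[10:40] = 0.5                            # 0.5 m stage rise for 30 days

    # superposition: head(t) = sum_k h(t - k) * stage(k) * dt
    head = np.convolve(stage, impulse_response)[:len(stage)] * dt
    print(head.max())   # peak aquifer head response at the observation point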
Multinuclear NMR of CaSiO3 glass: simulation from first-principles.
Pedone, Alfonso; Charpentier, Thibault; Menziani, Maria Cristina
2010-06-21
An integrated computational method which couples classical molecular dynamics simulations with density functional theory calculations is used to simulate the solid-state NMR spectra of amorphous CaSiO3. Two CaSiO3 glass models are obtained by shell-model molecular dynamics simulations, successively relaxed at the GGA-PBE level of theory. The calculation of the NMR parameters (chemical shielding and quadrupolar parameters), which are then used to simulate solid-state 1D and 2D NMR spectra of silicon-29, oxygen-17 and calcium-43, is achieved by the gauge including projector augmented-wave (GIPAW) and the projector augmented-wave (PAW) methods. It is shown that the limitations due to the finite size of the MD models can be overcome using a kernel density estimation (KDE) approach to simulate the spectra, since it better accounts for the effects of disorder on the NMR parameter distribution. KDE allows reconstructing a smoothed NMR parameter distribution from the MD/GIPAW data. Simulated NMR spectra calculated with the present approach are found to be in excellent agreement with the experimental data. This further validates the CaSiO3 structural model obtained by MD simulations, allowing the inference of relationships between structural data and NMR response. The methods used to simulate 1D and 2D NMR spectra from MD/GIPAW data have been integrated in a package (called fpNMR) freely available on request.
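A minimal sketch of the KDE smoothing idea: reconstruct a continuous distribution of an NMR parameter from the finite set of MD/GIPAW values before simulating the lineshape. The shift values are invented, and scipy's gaussian_kde stands in for the estimator used in fpNMR:

    import numpy as np
    from scipy.stats import gaussian_kde

    # hypothetical 29Si isotropic chemical shifts from a small MD model (ppm)
    shifts = np.array([-78.2, -80.1, -81.5, -79.4, -84.0, -82.7,
                       -80.9, -77.8, -83.1, -81.0])

    kde = gaussian_kde(shifts)            # smoothed parameter distribution
    delta = np.linspace(-90.0, -70.0, 400)
    spectrum = kde(delta)                 # stands in for the 1D lineshape

    # Sampling the smoothed density mitigates the finite-size artifacts of
    # a small structural model when simulating the spectrum.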
Müller, Erich A; Jackson, George
2014-01-01
A description of fluid systems with molecular-based algebraic equations of state (EoSs) and by direct molecular simulation is common practice in chemical engineering and the physical sciences, but the two approaches are rarely closely coupled. The key for an integrated representation is through a well-defined force field and Hamiltonian at the molecular level. In developing coarse-grained intermolecular potential functions for the fluid state, one typically starts with a detailed, bottom-up quantum-mechanical or atomic-level description and then integrates out the unwanted degrees of freedom using a variety of techniques; an iterative heuristic simulation procedure is then used to refine the parameters of the model. By contrast, with a top-down technique, one can use an accurate EoS to link the macroscopic properties of the fluid and the force-field parameters. We discuss the latest developments in a top-down representation of fluids, with a particular focus on a group-contribution formulation of the statistical associating fluid theory (SAFT-γ). The accurate SAFT-γ EoS is used to estimate the parameters of the Mie force field, which can then be used with confidence in direct molecular simulations to obtain thermodynamic, structural, interfacial, and dynamical properties that are otherwise inaccessible from the EoS. This is exemplified for several prototypical fluids and mixtures, including carbon dioxide, hydrocarbons, perfluorohydrocarbons, and aqueous surfactants.
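The Mie force field mentioned here is the generalized Lennard-Jones pair potential U(r) = C eps [(sigma/r)^n - (sigma/r)^m] with prefactor C = (n/(n-m)) (n/m)^(m/(n-m)); a minimal sketch with placeholder parameters, not SAFT-γ estimates:

    import numpy as np

    def mie(r, epsilon, sigma, n=12.0, m=6.0):
        """Mie (n, m) pair potential; n=12, m=6 recovers Lennard-Jones."""
        C = (n / (n - m)) * (n / m) ** (m / (n - m))
        return C * epsilon * ((sigma / r) ** n - (sigma / r) ** m)

    r = np.linspace(0.9, 3.0, 200)           # separations in units of sigma
    u = mie(r, epsilon=1.0, sigma=1.0, n=15.0, m=6.0)
    print(r[np.argmin(u)], u.min())          # well position; depth = -epsilon

Adjusting the repulsive exponent n tunes the softness of the coarse-grained interaction, which is the kind of parameter the top-down EoS fit determines.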
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
de Villiers, J. S.; Pirjola, R. J.; Cilliers, P. J.
2016-09-01
This research focuses on the inversion of geomagnetic variation field measurements to obtain the source currents in the ionosphere and magnetosphere, and to determine the geoelectric fields at the Earth's surface. During geomagnetic storms, the geoelectric fields create geomagnetically induced currents (GIC) in power networks. These GIC may disturb the operation of power systems, cause damage to power transformers, and even result in power blackouts. In this model, line currents running east-west along given latitudes are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground being composed of a zero magnetic east component and a nonzero electric east component. The line current parameters are estimated by inverting Fourier integrals (over wavenumber) of elementary geomagnetic fields using the Levenberg-Marquardt technique. The output parameters of the model are the ionospheric current strength and the geoelectric east component at the Earth's surface. A conductivity profile of the Earth is adapted from a shallow layered-Earth model for one observatory, together with a deep-layer model derived from satellite observations. This profile is used to obtain the ground surface impedance and therefore the reflection coefficient in the integrals. The input to the model is a spectrum of the geomagnetic data for 31 May 2013, and the outputs are spectra of the ionospheric current strength and of the surface geoelectric field. The inverse Fourier transforms of these spectra provide the time variations on the same day. The geoelectric field data can be used as a proxy for GIC in the prediction of GIC for power utilities. The current strength data can assist in the interpretation of upstream solar wind behaviour.
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings modeled with a normal distribution and with the simulated probability density curve based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
Hardiansyah, Deni; Attarwala, Ali Asgar; Kletting, Peter; Mottaghy, Felix M; Glatting, Gerhard
2017-10-01
To investigate the accuracy of predicted time-integrated activity coefficients (TIACs) in peptide-receptor radionuclide therapy (PRRT) using simulated dynamic PET data and a physiologically based pharmacokinetic (PBPK) model. PBPK parameters were estimated using biokinetic data of 15 patients after injection of (152±15) MBq of 111In-DTPAOC (total peptide amount (5.78±0.25) nmol). True mathematical phantoms of patients (MPPs) were the PBPK model with the estimated parameters. Dynamic PET measurements were simulated as being done after bolus injection of 150 MBq of 68Ga-DOTATATE using the true MPPs. Dynamic PET scans around 35 min p.i. (P1), 4 h p.i. (P2) and the combination of P1 and P2 (P3) were simulated. Each measurement was simulated with four frames of 5 min each and two bed positions. PBPK parameters were fitted to the PET data to derive the PET-predicted MPPs. Therapy was simulated assuming an infusion of 5.1 GBq of 90Y-DOTATATE over 30 min in both true and PET-predicted MPPs. TIACs of the simulated therapy were calculated for the true MPPs (true TIACs) and the predicted MPPs (predicted TIACs), followed by the calculation of variabilities v. For P1 and P2 the population variabilities of kidneys, liver and spleen were acceptable (v < 10%). For the tumours and the remainders, the values were large (up to 25%). For P3, population variabilities for all organs including the remainder further improved, except that of the tumour (v > 10%). Treatment planning of PRRT based on dynamic PET data seems possible for the kidneys, liver and spleen using a PBPK model and patient-specific information. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
He, M.; Hogue, T. S.; Franz, K.; Margulis, S. A.; Vrugt, J. A.
2009-12-01
The National Weather Service (NWS), the agency responsible for short- and long-term streamflow predictions across the nation, primarily applies the SNOW17 model for operational forecasting of snow accumulation and melt. The SNOW17-forecasted snowmelt serves as an input to a rainfall-runoff model for streamflow forecasts in snow-dominated areas. The accuracy of streamflow predictions in these areas largely relies on the accuracy of snowmelt. However, no direct snowmelt measurements are available to validate the SNOW17 predictions. Instead, indirect measurements such as snow water equivalent (SWE) measurements or discharge are typically used to calibrate SNOW17 parameters. In addition, the forecast practice is inherently deterministic, lacking tools to systematically address forecasting uncertainties (e.g., uncertainties in parameters, forcing, SWE and discharge observations, etc.). The current research presents an Integrated Uncertainty analysis and Ensemble-based data Assimilation (IUEA) framework to improve predictions of snowmelt and discharge while simultaneously providing meaningful estimates of the associated uncertainty. The IUEA approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) to simultaneously estimate uncertainties in model parameters, forcing, and observations. The robustness and usefulness of the IUEA-SNOW17 framework is evaluated for snow-dominated watersheds in the northern Sierra Mountains, using the coupled IUEA-SNOW17 and an operational soil moisture accounting model (SAC-SMA). Preliminary results are promising and indicate successful performance of the coupled IUEA-SNOW17 framework. Implementation of the SNOW17 with the IUEA is straightforward and requires no major modification to the SNOW17 model structure. The IUEA-SNOW17 framework is intended to be modular and transferable and should assist the NWS in advancing the current forecasting system and reinforcing current operational forecasting skill.
NASA Astrophysics Data System (ADS)
Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.
2012-12-01
Snow water equivalent (SWE) estimation is a key factor in producing reliable streamflow simulations and forecasts in snow dominated areas. However, measuring or predicting SWE has significant uncertainty. Sequential data assimilation, which updates states using both observed and modeled data based on error estimation, has been shown to reduce streamflow simulation errors but has had limited testing for forecasting applications. In the current study, a snow data assimilation framework integrated with the National Weather System River Forecasting System (NWSRFS) is evaluated for use in ensemble streamflow prediction (ESP). Seasonal water supply ESP hindcasts are generated for the North Fork of the American River Basin (NFARB) in northern California. Parameter sets from the California Nevada River Forecast Center (CNRFC), the Differential Evolution Adaptive Metropolis (DREAM) algorithm and the Multistep Automated Calibration Scheme (MACS) are tested both with and without sequential data assimilation. The traditional ESP method considers uncertainty in future climate conditions using historical temperature and precipitation time series to generate future streamflow scenarios conditioned on the current basin state. We include data uncertainty analysis in the forecasting framework through the DREAM-based parameter set which is part of a recently developed Integrated Uncertainty and Ensemble-based data Assimilation framework (ICEA). Extensive verification of all tested approaches is undertaken using traditional forecast verification measures, including root mean square error (RMSE), Nash-Sutcliffe efficiency coefficient (NSE), volumetric bias, joint distribution, rank probability score (RPS), and discrimination and reliability plots. In comparison to the RFC parameters, the DREAM and MACS sets show significant improvement in volumetric bias in flow. Use of assimilation improves hindcasts of higher flows but does not significantly improve performance in the mid flow and low flow categories.
NASA Astrophysics Data System (ADS)
Van der Auweraer, H.; Steinbichler, H.; Vanlanduit, S.; Haberstok, C.; Freymann, R.; Storer, D.; Linet, V.
2002-04-01
Accurate structural models are key to the optimization of the vibro-acoustic behaviour of panel-like structures. However, at the frequencies of relevance to the acoustic problem, the structural modes are very complex, requiring high-spatial-resolution measurements. The present paper discusses a vibration testing system based on pulsed-laser holographic electronic speckle pattern interferometry (ESPI) measurements. It is a characteristic of the method that time-triggered (and not time-averaged) vibration images are obtained. Its integration into a practicable modal testing and analysis procedure is reviewed. The accumulation of results at multiple excitation frequencies allows one to build up frequency response functions. A novel parameter extraction approach using spline-based data reduction and maximum-likelihood parameter estimation was developed. Specific extensions have been added in view of the industrial application of the approach. These include the integration of geometry and response information, the integration of multiple views into one single model, the integration with finite-element model data and the prior identification of the critical panels and critical modes. A global procedure was hence established. The approach has been applied to several industrial case studies, including car panels, the firewall of a monovolume car, a full vehicle, panels of a light truck and a household product. The research was conducted in the context of the EUREKA project HOLOMODAL and the Brite-Euram project SALOME.
NASA Astrophysics Data System (ADS)
Alfaro-Cuello, M.; Torres-Flores, S.; Carrasco, E. R.; Mendes de Oliveira, C.; de Mello, D. F.; Amram, P.
2015-10-01
We present a study of the kinematics and the physical properties of the central region of the Hickson Compact Group 31 (HCG 31), focusing on the HCG 31A+C system, using integral field spectroscopy data taken with the Gemini South Telescope. The main players in the merging event (galaxies A and C) are two dwarf galaxies, which have had one close encounter, given the observed tidal tails, and may now be in their second approach, and are possibly about to merge. We present new velocity fields and Hα emission, stellar continuum, velocity dispersion, electron density, Hα equivalent-width and age maps. Considering the high spatial resolution of the integral field unit data, we were able to measure various components and estimate their physical parameters, spatially resolving the different structures in this region. Our main findings are the following: (1) We report for the first time the presence of a super stellar cluster next to the burst associated with the HCG 31C central blob, related to the high values of velocity dispersion observed in this region as well as to the highest value of stellar continuum emission. This may suggest that this system is cleaning its environment through strong stellar winds that may then trigger a strong star formation event in its neighbourhood. (2) Among other physical parameters, we estimate L(Hα) ~ 14 × 10^41 erg s^-1 and the star formation rate, SFR ~ 11 M⊙ yr^-1, for the central merging region of HCG 31A+C. These values indicate a high star formation density, suggesting that the system is part of a merging object, supporting previous scenarios proposed for this system.
Sleep Disruption Medical Intervention Forecasting (SDMIF) Module for the Integrated Medical Model
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Brooker, John; Mallis, Melissa; Hursh, Steve; Caldwell, Lynn; Myers, Jerry
2011-01-01
The NASA Integrated Medical Model (IMM) assesses the risk, including the likelihood and impact of occurrence, of all credible in-flight medical conditions. Fatigue due to sleep disruption is a condition that could lead to operational errors, potentially resulting in loss of mission or crew. Pharmacological consumables are mitigation strategies used to manage the risks associated with sleep deficits. The likelihood of medical intervention due to sleep disruption was estimated with a well-validated sleep model and a Monte Carlo computer simulation in an effort to optimize the quantity of consumables. METHODS: The key components of the model are the mission parameter program, the calculation of sleep intensity and the diagnosis and decision module. The mission parameter program was used to create simulated daily sleep/wake schedules for an ISS increment. The hypothetical schedules included critical events such as dockings and extravehicular activities, as well as actual sleep time and sleep quality. The schedules were used as inputs to the Sleep, Activity, Fatigue and Task Effectiveness (SAFTE) Model (IBR Inc., Baltimore MD), which calculated sleep intensity. Sleep data from an ISS study were used to relate calculated sleep intensity to the probability of sleep medication use, using a generalized linear model for binomial regression. A human yes/no decision process using a binomial random number was also factored into the sleep medication use probability. RESULTS: These probability calculations were repeated 5000 times, resulting in an estimate of the most likely quantity of sleep aids used during an ISS mission and a 95% confidence interval. CONCLUSIONS: These results were transferred to the parent IMM for further weighting and integration with other medical conditions, to help inform operational decisions. This model is a potential planning tool for ensuring adequate sleep during sleep-disrupted periods of a mission.
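To make the Monte Carlo procedure concrete, here is a minimal sketch (Python) of the simulation loop described above: a binomial-regression link maps nightly sleep intensity to medication probability, a binomial random number supplies the yes/no decision, and 5000 repetitions yield a most-likely consumable count with a 95% interval. The logistic coefficients, schedule length and intensity statistics are illustrative assumptions, not the fitted ISS values.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical logistic (binomial-regression) link from nightly sleep
    # intensity to the probability of taking a sleep aid; the coefficients
    # are illustrative placeholders.
    def p_medication(sleep_intensity, b0=-1.5, b1=-0.03):
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * sleep_intensity)))

    n_trials = 5000          # Monte Carlo repetitions, as in the abstract
    n_nights = 180           # nights in a hypothetical ISS increment
    # Hypothetical SAFTE-style sleep-intensity scores, one per night.
    intensity = rng.normal(loc=60.0, scale=15.0, size=n_nights)

    p = p_medication(intensity)
    # Each trial draws a yes/no decision per night (binomial random number).
    doses_per_trial = rng.binomial(1, p, size=(n_trials, n_nights)).sum(axis=1)

    print("most likely consumable count:", np.median(doses_per_trial))
    print("95% interval:", np.percentile(doses_per_trial, [2.5, 97.5]))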
On the choice of the functional form of the aftershocks decay equation
NASA Astrophysics Data System (ADS)
Gasperini, P.; Lolli, B.
2003-04-01
To infer the optimal form of the rate equation describing the decay of aftershock sequences, we analyzed the correlation among parameter estimates made for New Zealand (Eberhart-Phillips, 1998) and Italy (Lolli and Gasperini, 2003) for the simple model proposed by Reasenberg and Jones (1989): λ(t) = 10^{a + b(M_m − M_min)} / (t + c)^p. We found significant correlations between the sequence productivity parameter a and all of the other parameters (p, c and b), and between p and c. At odds with previous findings (Guo and Ogata, 1995; 1997), we did not find a correlation between b and p. We verified that the explicit inclusion in the formula of the time-decay normalization integral removes the correlation of a with both p and c. We also found, separately for both regions, that setting the linear coefficient of the main shock magnitude M_m to about (2/3)b makes parameter a independent of b as well. The a parameter of the resulting rate equation, λ(t) = 10^{a + (2/3)b·M_m − b·M_min} / [(t + c)^p ∫_S^T (t + c)^{−p} dt], being almost independent of the other parameters, can reliably be considered the expression of a peculiar property of the seismogenic process. Thus we can infer that the new equation could be more appropriate than the previous one for predicting sequence behavior in different areas. This formulation has also been applied to more sophisticated models of the epidemic type (ETAS), letting the coefficient of the main shock magnitude vary freely. Some preliminary experiments give estimates close to (1/2)b for this parameter.
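As an illustration of the two functional forms, the following sketch (Python) evaluates both the original Reasenberg-Jones rate and the normalized variant; all parameter values are illustrative placeholders, not estimates from the New Zealand or Italian catalogs.

    import numpy as np
    from scipy.integrate import quad

    # Illustrative parameter values only; see Reasenberg and Jones (1989)
    # for typical generic values.
    a, b, p, c = -1.67, 0.91, 1.08, 0.05
    Mm, Mmin = 6.0, 3.0
    S, T = 0.0, 365.0   # time window (days) for the normalization integral

    def rate_rj(t):
        """Original Reasenberg-Jones rate."""
        return 10 ** (a + b * (Mm - Mmin)) / (t + c) ** p

    def rate_normalized(t):
        """Rescaled form including the time-decay normalization integral."""
        norm, _ = quad(lambda s: (s + c) ** (-p), S, T)
        return 10 ** (a + (2.0 / 3.0) * b * Mm - b * Mmin) / ((t + c) ** p * norm)

    print(rate_rj(1.0), rate_normalized(1.0))   # rates one day after the main shock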
TREXMO: A Translation Tool to Support the Use of Regulatory Occupational Exposure Models.
Savic, Nenad; Racordon, Dimitri; Buchs, Didier; Gasic, Bojan; Vernez, David
2016-10-01
Occupational exposure models vary significantly in their complexity, purpose, and the level of expertise required from the user. Different parameters in the same model may lead to different exposure estimates for the same exposure situation. This paper presents a tool developed to deal with this concern: TREXMO, or TRanslation of EXposure MOdels. TREXMO integrates six commonly used occupational exposure models, namely, ART v.1.5, STOFFENMANAGER(®) v.5.1, ECETOC TRA v.3, MEASE v.1.02.01, EMKG-EXPO-TOOL, and EASE v.2.0. By enabling a semi-automatic translation between the parameters of these six models, TREXMO facilitates their simultaneous use. For a given exposure situation, defined by a set of parameters in one of the models, TREXMO provides the user with the most appropriate parameters to use in the other exposure models. Results showed that, once an exposure situation and parameters were set in ART, TREXMO reduced the number of possible outcomes in the other models by 1-4 orders of magnitude. The tool should reduce uncertainty in the entry or selection of parameters in the six models, improve between-user reliability, and reduce the time required for running several models for a given exposure situation. In addition to these advantages, registrants of chemicals and authorities should benefit from more reliable exposure estimates for the risk characterization of dangerous chemicals under the Registration, Evaluation, Authorisation and restriction of CHemicals (REACH) regulation. © The Author 2016. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
NASA Astrophysics Data System (ADS)
Sirirojvisuth, Apinut
In complex aerospace system design, making an effective design decision requires multidisciplinary knowledge from both product and process perspectives. Integrating manufacturing considerations into the design process is most valuable during the early design stages, since designers have more freedom to integrate new ideas when changes are relatively inexpensive in terms of time and effort. Several metrics related to manufacturability are cost, time, and manufacturing readiness level (MRL). Yet there is a lack of structured methodology that quantifies how changes in design decisions impact these metrics. As a result, a new set of integrated cost analysis tools is proposed in this study to quantify the impacts. Equally important is the capability to integrate this new cost tool into existing design methodologies without sacrificing the agility and flexibility required during the early design phases. To demonstrate the applicability of this concept, a ModelCenter environment is used to develop a software architecture that represents the Integrated Product and Process Development (IPPD) methodology used in several aerospace system designs. The environment seamlessly integrates product and process analysis tools and enables an effective transition from one design phase to the next while retaining knowledge gained a priori. Then, an advanced cost estimating tool called the Hybrid Lifecycle Cost Estimating Tool (HLCET), a hybrid combination of weight-, process-, and activity-based estimating techniques, is integrated with the design framework. A new weight-based lifecycle cost model is created based on Tailored Cost Model (TCM) equations [3]. This lifecycle cost tool estimates the program cost based on vehicle component weights and programmatic assumptions. Additional high-fidelity cost tools, such as process-based and activity-based cost analysis methods, can be used to modify the baseline TCM result as more knowledge is accumulated over design iterations. Therefore, with this concept, the additional manufacturing knowledge can be used to identify a more accurate lifecycle cost and facilitate higher-fidelity tradeoffs during conceptual and preliminary design. The Advanced Composite Cost Estimating Model (ACCEM) is employed as a process-based cost component to replace the original TCM result for the composite part production cost. The reason for the replacement is that TCM estimates production costs from part weights on the assumption of subtractive manufacturing of metallic parts, using processes such as casting, forging, and machining. A complexity factor can sometimes be adjusted to reflect different types of metal and machine settings. The TCM assumption, however, gives erroneous results when applied to additive processes like those of composite manufacturing. Another innovative aspect of this research is the introduction of a work measurement technique called the Maynard Operation Sequence Technique (MOST) to be used, similarly to the Activity-Based Costing (ABC) approach, to estimate the manufacturing time of a part by breaking down the operations that occur during its production. ABC allows a realistic determination of the cost incurred in each activity, as opposed to a traditional method of time estimation by analogy or by response surface equations from historical process data. The MOST concept provides a tailored study of an individual process, as is typically required for a new, innovative design. Nevertheless, the MOST idea has some challenges, one of which is its requirement to build a new process from the ground up.
The process development requires a Subject Matter Expert (SME) in the manufacturing method of the particular design. The SME must also have a comprehensive understanding of the MOST system so that the correct parameters are chosen. In practice, these knowledge requirements may demand people from outside of the design discipline and a priori training in MOST. To relieve this constraint, the study includes an entirely new sub-system architecture that comprises 1) a knowledge-based system to provide the required knowledge during process selection; and 2) a new user interface to guide the parameter selection when building the process using MOST. Also included in this study is a demonstration of how HLCET and its constituents can be integrated with the Georgia Tech Integrated Product and Process Development (IPPD) methodology. The applicability of this work is shown through a complex aerospace design example to gain insights into how manufacturing knowledge helps make better design decisions during the early stages. The setup process is explained, with an example of its utility demonstrated in a hypothetical fighter aircraft wing redesign. An evaluation of the system's effectiveness against existing methodologies concludes the thesis.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
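The correction described above can be sketched numerically. The snippet below (Python) assumes a sandwich-type form for the corrected parameter covariance, built from the output sensitivities and the estimated residual autocorrelation; it illustrates the idea rather than reproducing the paper's exact expression.

    import numpy as np

    def corrected_covariance(S, residuals):
        """Sandwich-type parameter covariance for colored residuals.

        S         : (N, p) output sensitivity matrix at the ML estimate
        residuals : (N,) measured-minus-model residual sequence

        Assumes the corrected covariance takes the form
        (S'S)^-1 S' R S (S'S)^-1, with R built from the estimated
        residual autocovariance; an illustration, not the paper's result.
        """
        N = len(residuals)
        # Estimate the residual autocovariance at each lag.
        acov = np.array([residuals[:N - k] @ residuals[k:] / N for k in range(N)])
        R = acov[np.abs(np.subtract.outer(np.arange(N), np.arange(N)))]
        A = np.linalg.inv(S.T @ S)
        return A @ S.T @ R @ S @ A

    # Tiny synthetic usage with deliberately colored (smoothed) residuals:
    rng = np.random.default_rng(0)
    S = rng.standard_normal((200, 3))
    res = np.convolve(rng.standard_normal(220), np.ones(20) / 20, mode="valid")[:200]
    print(np.sqrt(np.diag(corrected_covariance(S, res))))  # parameter standard errors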
Hu, L.; Zhang, Z.G.; Mouraux, A.; Iannetti, G.D.
2015-01-01
Transient sensory, motor or cognitive events elicit not only phase-locked event-related potentials (ERPs) in the ongoing electroencephalogram (EEG), but also induce non-phase-locked modulations of ongoing EEG oscillations. These modulations can be detected when single-trial waveforms are analysed in the time-frequency domain, and consist of stimulus-induced decreases (event-related desynchronization, ERD) or increases (event-related synchronization, ERS) of synchrony in the activity of the underlying neuronal populations. ERD and ERS reflect changes in the parameters that control oscillations in neuronal networks and, depending on the frequency at which they occur, represent neuronal mechanisms involved in cortical activation, inhibition and binding. ERD and ERS are commonly estimated by averaging the time-frequency decomposition of single trials. However, their trial-to-trial variability, which can reflect physiologically important information, is lost by across-trial averaging. Here, we aim to (1) develop novel approaches to explore single-trial parameters (including latency, frequency and magnitude) of ERP/ERD/ERS; (2) disclose the relationship between estimated single-trial parameters and other experimental factors (e.g., perceived intensity). We found that (1) stimulus-elicited ERP/ERD/ERS can be correctly separated using principal component analysis (PCA) decomposition with Varimax rotation on the single-trial time-frequency distributions; (2) time-frequency multiple linear regression with dispersion term (TF-MLRd) enhances the signal-to-noise ratio of ERP/ERD/ERS in single trials, and provides an unbiased estimation of their latency, frequency, and magnitude at the single-trial level; (3) these estimates can be meaningfully correlated with each other and with other experimental factors at the single-trial level (e.g., perceived stimulus intensity and ERP magnitude). The methods described in this article allow exploring fully non-phase-locked stimulus-induced cortical oscillations, obtaining single-trial estimates of response latency, frequency, and magnitude. This permits within-subject statistical comparisons, correlation with pre-stimulus features, and integration of simultaneously-recorded EEG and fMRI. PMID:25665966
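As a rough illustration of the first step (separating ERP/ERD/ERS via PCA with Varimax rotation on single-trial time-frequency distributions), the sketch below (Python) uses scikit-learn's factor analysis with varimax rotation as a stand-in for the paper's decomposition; the trial counts, grid size and data are hypothetical.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical data: one vectorized time-frequency distribution per trial.
    n_trials, n_times, n_freqs = 200, 100, 40
    rng = np.random.default_rng(0)
    tf_single_trial = rng.standard_normal((n_trials, n_times * n_freqs))

    # Stand-in for PCA + Varimax: factor analysis with varimax rotation
    # (scikit-learn >= 0.24). Each rotated component is a candidate
    # ERP/ERD/ERS pattern; the per-trial scores give single-trial magnitudes.
    fa = FactorAnalysis(n_components=3, rotation="varimax")
    scores = fa.fit_transform(tf_single_trial)        # (n_trials, 3)
    patterns = fa.components_.reshape(3, n_times, n_freqs)

The per-trial scores can then be correlated with experimental factors such as perceived stimulus intensity, as described in the abstract.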
Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints
NASA Astrophysics Data System (ADS)
Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.
2018-05-01
Urban environments with extended areas of poor GNSS coverage, as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation, require sophisticated georeferencing in order to fulfill demanding requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant increase in robustness and accuracy, especially when incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area and 3 cm in an urban environment, where the measurement distances are several times those indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among the respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
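A minimal sketch of this sampler (Python) is given below: a simple road load equation (inertia plus rolling resistance plus aerodynamic drag), a Gaussian log-likelihood comparing modeled to measured load, and Metropolis acceptance by the probability ratio to the current state. The vehicle values, noise level and proposal scales are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    RHO, G = 1.2, 9.81  # air density (kg/m^3), gravity (m/s^2)

    def road_load(theta, v, a):
        """Simple road load: inertia + rolling resistance + aerodynamic drag."""
        mass, cda, crr = theta
        return mass * a + mass * G * crr + 0.5 * RHO * cda * v ** 2

    def log_prob(theta, v, a, f_meas, sigma=500.0):
        if np.any(np.asarray(theta) <= 0):
            return -np.inf
        r = f_meas - road_load(theta, v, a)
        return -0.5 * np.sum((r / sigma) ** 2)

    # Hypothetical logged drive-cycle data (speed, acceleration, measured load).
    v = rng.uniform(0, 30, 1000)
    a = rng.normal(0, 0.5, 1000)
    f_meas = road_load((15000.0, 6.0, 0.007), v, a) + rng.normal(0, 500, 1000)

    theta = np.array([10000.0, 5.0, 0.01])     # initial mass, CdA, Crr guess
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0, [100.0, 0.05, 1e-4])
        # Metropolis: accept with the probability ratio to the current state.
        if np.log(rng.uniform()) < log_prob(prop, v, a, f_meas) - log_prob(theta, v, a, f_meas):
            theta = prop
        chain.append(theta)
    print(np.mean(chain[5000:], axis=0))       # posterior means after burn-in

The spread of the retained chain is what provides the information on estimate quality and plausible parameter ranges mentioned in the abstract.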
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R^2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m^2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
NASA Astrophysics Data System (ADS)
Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim
2018-06-01
The recent emergence of advanced manufacturing techniques such as additive manufacturing and increased demands on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy of CT-based measurements is vulnerable to the instrument's geometrical configuration during data acquisition, namely the relative position and orientation of the x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of the geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
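The following sketch (Python) illustrates the idea under a deliberately simplified cone-beam geometry: CAD-like surface points are projected once with nominal parameters and once with a perturbed detector pose, and the discrepancy in image coordinates approximates the sensitivity of the measurement to the geometrical errors. The geometry model and parameter values are illustrative, not the paper's.

    import numpy as np

    def project(points, sod=100.0, sdd=300.0, det_shift=(0.0, 0.0), det_tilt=0.0):
        """Project 3D surface points into detector coordinates for an
        idealized cone-beam geometry (source on the -y axis, detector plane
        nominally perpendicular to the beam axis). det_shift and det_tilt
        introduce a detector misalignment; all values are illustrative."""
        x, y, z = points.T
        mag = sdd / (sod + y)                 # per-point magnification
        u, v = mag * x, mag * z
        c, s = np.cos(det_tilt), np.sin(det_tilt)
        u, v = c * u - s * v + det_shift[0], s * u + c * v + det_shift[1]
        return np.column_stack([u, v])

    # Hypothetical CAD surface points (e.g. sampled from an STL model).
    rng = np.random.default_rng(3)
    pts = rng.uniform(-5, 5, size=(500, 3))

    aligned = project(pts)
    misaligned = project(pts, det_shift=(0.05, 0.0), det_tilt=1e-3)
    # Discrepancy in radiographic image coordinates caused by the misalignment:
    print(np.abs(misaligned - aligned).max())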
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. The van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to the hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
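A minimal sketch of the sampling step (Python), using SciPy's quasi-Monte Carlo Latin hypercube sampler; the parameter ranges below are illustrative placeholders, since the actual bounds are site-specific.

    import numpy as np
    from scipy.stats import qmc

    # Illustrative ranges for the van Genuchten-Mualem parameters
    # (K_s in m/s, n, theta_r, alpha in 1/m); actual bounds depend on the site.
    lower = [1e-6, 1.1, 0.01, 0.5]
    upper = [1e-4, 2.5, 0.10, 5.0]

    sampler = qmc.LatinHypercube(d=4, seed=0)
    unit_samples = sampler.random(n=1000)            # full coverage of [0, 1)^4
    samples = qmc.scale(unit_samples, lower, upper)  # K_s, n, theta_r, alpha

    # Each row parameterizes one candidate hydrological model whose predicted
    # resistivities can be compared against the time-lapse VES data.
    K_s, n, theta_r, alpha = samples.T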
NASA Astrophysics Data System (ADS)
Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, in which only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
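The reconstruction strategy can be sketched as follows (Python): a toy two-level response model (a surviving fraction regrowing exponentially plus an inactivated fraction being cleared) is fitted by simulated annealing, with a Tikhonov-style penalty standing in for the variational regularization; the data, model and weights are illustrative assumptions, not the paper's.

    import numpy as np
    from scipy.optimize import dual_annealing

    # Hypothetical serial tumor-volume measurements (fraction of initial volume).
    t_obs = np.array([0.0, 7.0, 14.0, 21.0, 28.0, 35.0])
    v_obs = np.array([1.00, 0.85, 0.62, 0.47, 0.36, 0.30])

    def model(theta, t):
        """Toy two-level response: a surviving fraction s regrowing with
        doubling time Td, plus an inactivated fraction clearing at rate k."""
        s, Td, k = theta
        return s * 2.0 ** (t / Td) + (1.0 - s) * np.exp(-k * t)

    def objective(theta, lam=0.1, prior=(0.3, 30.0, 0.1)):
        resid = v_obs - model(theta, t_obs)
        # Least squares plus a Tikhonov-style stabilizer standing in for the
        # variational regularization described in the abstract.
        prior = np.array(prior)
        return np.sum(resid ** 2) + lam * np.sum((np.array(theta) - prior) ** 2 / prior ** 2)

    result = dual_annealing(objective, bounds=[(0.01, 1.0), (5.0, 100.0), (0.01, 1.0)])
    print(result.x)   # surviving fraction, doubling time (days), clearance rate

Without the penalty term (lam = 0), repeated fits on noisy data tend to scatter widely across the bounds, which is the instability the stabilization algorithm is designed to suppress.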
Overview of SDCM - The Spacecraft Design and Cost Model
NASA Technical Reports Server (NTRS)
Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.
1988-01-01
The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.
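A minimal sketch of the bottom-up roll-up (Python); the component names, masses and costs are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        mass_kg: float
        unit_cost: float      # illustrative cost-model output for the component

    def size_and_cost(components):
        """Bottom-up roll-up in the spirit of SDCM: estimate each subsystem
        component first, then sum the contributions for the configuration."""
        total_mass = sum(c.mass_kg for c in components)
        total_cost = sum(c.unit_cost for c in components)
        return total_mass, total_cost

    bus = [Component("EPS", 120.0, 4.2e6), Component("ADCS", 45.0, 3.1e6),
           Component("TT&C", 30.0, 2.5e6)]
    print(size_and_cost(bus))   # candidate-configuration mass and cost estimate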
1985-10-01
can monitor a larger region and provide a larger database with fewer moorings, and its averaging (integrating) process can filter out undesirable small… as the eikonal equation, relating φ to the perturbed sound-speed field c̄ + δc and the flow field v during a transmission… should consult Spiesberger et al. (1980) for ray identifications. Ugincius (1970) solved the eikonal equation using the method of characteristics
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well-separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
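Of the techniques surveyed, parallel tempering is perhaps the easiest to sketch. The toy example below (Python) runs one Metropolis walker per temperature on a bimodal one-dimensional pdf and occasionally swaps neighbouring walkers using the Metropolis criterion; the chain at unit temperature then samples the target. Temperatures, step sizes and the target are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)

    def log_pdf(x):
        """Bimodal target with well-separated modes."""
        return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

    temps = np.array([1.0, 2.0, 4.0, 8.0])      # one walker per temperature
    x = np.zeros(len(temps))
    samples = []
    for step in range(20000):
        # Local Metropolis move for each walker at its own temperature.
        prop = x + rng.normal(0, 1.0, size=len(temps))
        accept = np.log(rng.uniform(size=len(temps))) < (log_pdf(prop) - log_pdf(x)) / temps
        x = np.where(accept, prop, x)
        # Occasionally attempt to exchange neighbouring walkers (Metropolis criterion).
        if step % 10 == 0:
            i = rng.integers(len(temps) - 1)
            d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_pdf(x[i + 1]) - log_pdf(x[i]))
            if np.log(rng.uniform()) < d:
                x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])                     # chain at T=1 targets the true pdf

The hot chains cross the barrier between modes freely, and the exchange moves let the cold chain inherit those crossings, which is exactly the non-ergodicity cure described above.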
A new approach for the estimation of phytoplankton cell counts associated with algal blooms.
Nazeer, Majid; Wong, Man Sing; Nichol, Janet Elizabeth
2017-07-15
This study proposes a method for estimating phytoplankton cell counts associated with an algal bloom, using satellite images coincident with in situ and meteorological parameters. Satellite images from the Landsat Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), Operational Land Imager (OLI) and HJ-1 A/B Charge Couple Device (CCD) sensors were integrated with meteorological observations to provide an estimate of phytoplankton cell counts. All images were atmospherically corrected using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) atmospheric correction method, with a possible error of 1.2%, 2.6%, 1.4% and 2.3% for the blue (450-520 nm), green (520-600 nm), red (630-690 nm) and near infrared (NIR, 760-900 nm) wavelengths, respectively. Results showed that the developed Artificial Neural Network (ANN) model yields a correlation coefficient (R) of 0.95 with the in situ validation data, with a Sum of Squared Errors (SSE) of 0.34 cells/ml, a Mean Relative Error (MRE) of 0.154 cells/ml and a bias of -504.87. The integration of the meteorological parameters with remote sensing observations provided a promising estimation of the algal scum compared to previous studies. The applicability of the ANN model was tested over Hong Kong as well as over Lake Kasumigaura, Japan, and Lake Okeechobee, Florida, USA, where algal blooms were also reported. Further, a 40-year (1975-2014) red tide occurrence map was developed, revealing that the eastern and southern waters of Hong Kong are more vulnerable to red tides. Over the 40 years, 66% of red tide incidents were associated with the Dinoflagellates group, while the remainder were associated with the Diatom group (14%) and several other minor groups (20%). The developed technology can be applied to other similar environments in an efficient and cost-saving manner. Copyright © 2017 Elsevier B.V. All rights reserved.
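A hedged sketch of such an ANN regressor (Python, scikit-learn): band reflectances and meteorological covariates are mapped to cell counts with a small multilayer perceptron. The feature set, network size and synthetic training data are illustrative assumptions, not the study's configuration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical training table: atmospherically corrected reflectances in
    # the blue/green/red/NIR bands plus meteorological covariates, against
    # in situ phytoplankton cell counts.
    rng = np.random.default_rng(11)
    X = rng.uniform(0, 0.2, size=(300, 6))   # B, G, R, NIR, sea temp, wind speed
    y = 1e4 * X[:, 3] / (X[:, 0] + 0.01) + rng.normal(0, 50, 300)  # synthetic counts

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                       random_state=0))
    model.fit(X, y)
    print(model.predict(X[:3]))              # predicted cell counts (cells/ml)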
Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations
NASA Astrophysics Data System (ADS)
Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.
2016-12-01
Multi-constellation GNSS (e.g., GPS, GLONASS, Galileo, and BeiDou) brings great opportunities and challenges for the real-time retrieval of atmospheric parameters to support numerical weather prediction (NWP) nowcasting or severe weather event monitoring. In this study, the observations from different GNSS constellations are combined for atmospheric parameter retrieval based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradients (especially high-resolution gradient estimates) and slant total delays (STD), are carefully analyzed and evaluated against VLBI, radiosonde, water vapor radiometer and numerical weather model data to independently validate the performance of individual GNSS and also demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. The results show that multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability and better distribution, which would be beneficial for atmospheric sounding systems, especially for the nowcasting of extreme weather.
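One step in this chain, converting the estimated zenith delay to IWV, is standard enough to sketch. The snippet below (Python) uses the conversion factor of Bevis et al. (1992), which is not spelled out in the abstract and is included here as a plausible assumption; delays are in meters and the example values are illustrative.

    RHO_W = 1000.0      # water density, kg/m^3
    RV = 461.5          # specific gas constant for water vapour, J/(kg K)
    K2P = 0.221         # k2' refractivity constant, K/Pa
    K3 = 3.739e3        # k3 refractivity constant, K^2/Pa

    def ztd_to_iwv(ztd_m, zhd_m, tm_kelvin):
        """Convert zenith total delay to integrated water vapour (kg/m^2)
        via the dimensionless conversion factor of Bevis et al. (1992).
        zhd_m is the hydrostatic delay (e.g. from surface pressure) and
        tm_kelvin the weighted mean temperature of the atmosphere."""
        zwd = ztd_m - zhd_m                          # zenith wet delay, m
        pi = 1e6 / (RHO_W * RV * (K3 / tm_kelvin + K2P))
        return pi * zwd * 1000.0                     # mm of wet delay -> kg/m^2

    print(ztd_to_iwv(2.40, 2.28, 273.0))             # roughly 18-19 kg/m^2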