NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-06-01
One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua
2018-05-01
This paper deals with channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise, which are affected by atmospheric turbulence, surface roughness, zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on proposed algorithms and is applied to low-Earth orbits using the Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed by simulation, comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to achieve global networks for underwater communications.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on states (i.e., moisture and temperature) with respect to observations and on parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function is an approximation of the posterior uncertainty of the parameter estimates. The uncertainty of estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the method of First Order Second Moment (FOSM). Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions at different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
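The core numerical idea here (minimize a state-misfit-plus-prior cost over the unknown parameters, approximate the posterior covariance by the inverse Hessian at the minimum, and propagate that covariance to a derived flux with FOSM) can be sketched compactly. The toy model, parameter names, and noise levels below are illustrative only, and a quasi-Newton optimizer stands in for the adjoint-based gradients used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "observations" of a state driven by two unknown parameters.
# This toy model stands in for the coupled moisture/heat equations; the
# parameter meanings and forcing are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

def model_state(params, t):
    a, b = params                      # e.g. a ~ heat-transfer, b ~ drainage coefficient (hypothetical)
    return a * np.exp(-b * t)          # stand-in for the modeled state trajectory

true_params = np.array([2.0, 0.3])
obs = model_state(true_params, t) + 0.05 * rng.standard_normal(t.size)
prior = np.array([1.5, 0.5])
prior_var = np.array([1.0, 1.0])

def cost(params):
    # Aggregated state misfit plus a prior penalty, as in the adjoint-based scheme.
    misfit = np.sum((model_state(params, t) - obs) ** 2) / (2 * 0.05 ** 2)
    penalty = np.sum((params - prior) ** 2 / (2 * prior_var))
    return misfit + penalty

res = minimize(cost, prior, method="BFGS")
post_cov = res.hess_inv                # inverse Hessian ~ posterior covariance of the parameters

# FOSM: propagate parameter uncertainty to a derived flux F(params).
def flux(params):
    a, b = params
    return a * b                       # hypothetical flux as a function of the parameters

eps = 1e-5
grad = np.array([(flux(res.x + eps * np.eye(2)[i]) - flux(res.x - eps * np.eye(2)[i])) / (2 * eps)
                 for i in range(2)])
flux_var = grad @ post_cov @ grad      # first-order second-moment variance of the flux
print(res.x, np.sqrt(np.diag(post_cov)), np.sqrt(flux_var))
```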
NASA Astrophysics Data System (ADS)
Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan
2014-12-01
Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis for various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in the estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
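A minimal sketch of the underlying (unweighted) Wiener estimation step may help: the estimator is the minimum-mean-square-error linear map from color measurements to tissue parameters, built from training statistics. The forward model, training set, and parameter ranges below are synthetic stand-ins, and the sequential per-parameter weighting of the paper is only noted in the comments.

```python
import numpy as np

# Training pairs would in practice come from phantom measurements or light-transport
# simulations; here the linear forward mapping A is a random stand-in.
rng = np.random.default_rng(1)
n_train, n_params, n_channels = 500, 4, 6
P_train = rng.uniform(0.0, 1.0, (n_train, n_params))        # CtHb, StO2, alpha, beta (normalized, hypothetical)
A = rng.standard_normal((n_channels, n_params))              # hypothetical forward model
C_train = P_train @ A.T + 0.01 * rng.standard_normal((n_train, n_channels))

# Wiener estimator: W = Cov(p, c) @ Cov(c, c)^-1 (minimum-MSE linear estimator).
Pc = P_train - P_train.mean(0)
Cc = C_train - C_train.mean(0)
W = (Pc.T @ Cc / n_train) @ np.linalg.inv(Cc.T @ Cc / n_train)

def wiener_estimate(c):
    """Estimate the parameter vector from one color measurement c.

    A sequential weighted scheme would re-weight the color channels separately
    for each parameter before forming W, boosting the most informative channels.
    """
    return P_train.mean(0) + W @ (c - C_train.mean(0))

print(wiener_estimate(C_train[0]), P_train[0])
```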
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It can combine the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells as well as using the input and output of ABM to build up the Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters as the DE model does. Therefore, this study innovatively developed a complex-system modeling mechanism that can simulate the complicated immune system in detail, as ABM does, and validate the reliability and efficiency of the model by fitting experimental data, as DE does. PMID:26535589
Estimation of end point foot clearance points from inertial sensor data.
Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu
2011-01-01
Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for the estimation of key foot clearance parameters using inertial sensor (accelerometers and gyroscopes) data. Fifteen features were extracted from raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: First maximum vertical clearance (mx1) after toe-off and the Minimum Toe Clearance (MTC) of the swing foot. Comparisons are made against measurements obtained using an optoelectronic motion capture system (Optotrak), at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined and a Leave-one-subject-out (LOSO) method was used to select the best model. The best average Root Mean Square Errors (RMSE) across all subjects obtained using all sensor features at the maximum speed for mx1 was 5.32 mm and for MTC was 4.04 mm. Further application of a hill-climbing feature selection technique resulted in 0.54-21.93% improvement in RMSE and required fewer input features. The results demonstrated that using raw inertial sensor data with regression models and feature selection could accurately estimate key foot clearance parameters.
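A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression, so the estimation and LOSO evaluation loop can be sketched in a few lines. The feature matrix, subject count, and target values below are synthetic; sigma, the kernel bandwidth, is a tuning parameter.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """General Regression Neural Network = Nadaraya-Watson kernel regression:
    each prediction is a Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(1), 1e-12, None)

# Hypothetical data: 8 subjects, 15 inertial-sensor features per gait cycle,
# target = minimum toe clearance (MTC) in mm. Shapes and values are illustrative.
rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(8), 40)
X = rng.standard_normal((subjects.size, 15))
y = 20.0 + 2.0 * X[:, 0] + rng.standard_normal(subjects.size)   # synthetic MTC

# Leave-one-subject-out (LOSO) evaluation of the regression model.
rmses = []
for s in np.unique(subjects):
    train, test = subjects != s, subjects == s
    pred = grnn_predict(X[train], y[train], X[test], sigma=1.0)
    rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
print("mean LOSO RMSE (mm):", np.mean(rmses))
```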
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Yuan, Chunhua; Wang, Jiang; Yi, Guosheng
2017-03-01
Estimation of ion channel parameters is crucial to spike initiation in neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters governing adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm still tends to fall into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic map to adjust the inertia weights, effectively improving the global convergence ability of the algorithm. The accurate prediction of firing trajectories by the rebuilt model using the estimated parameters shows that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimates from two classic PSO algorithms are also compared with the improved PSO to verify that the algorithm proposed in this paper can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
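The following sketch shows a PSO variant in which a logistic chaotic map perturbs a concavely decaying inertia weight, in the spirit of the improvement described above; the exact weighting scheme and the fitness function (here a simple quadratic stand-in for the spike-trajectory misfit) are illustrative rather than the authors'.

```python
import numpy as np

def improved_pso(cost, bounds, n_particles=30, n_iter=200, seed=0):
    """PSO sketch with a logistic chaotic map modulating the inertia weight."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    z = 0.7                                            # chaotic map state
    for k in range(n_iter):
        z = 4.0 * z * (1.0 - z)                        # logistic chaotic map
        w = 0.9 - 0.5 * (k / n_iter) ** 2 + 0.1 * z    # concave decay + chaotic perturbation (illustrative mix)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: recover three "ion channel" parameters by minimizing a quadratic
# surrogate of the trajectory misfit (stands in for comparing spike trains).
target = np.array([0.5, 1.2, 0.05])
best, best_f = improved_pso(lambda p: np.sum((p - target) ** 2),
                            bounds=[(0, 2), (0, 2), (0, 1)])
print(best, best_f)
```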
NASA Astrophysics Data System (ADS)
Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato
2017-07-01
An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca
2011-11-28
This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, the (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.
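For the first key parameter mentioned, the number of samples needed for characterization or clearance decisions at a specified statistical confidence, a common back-of-the-envelope calculation assumes perfect sample recovery and detection and asks how many samples are needed to see at least one positive if a given fraction of the surface is contaminated. A hedged sketch:

```python
import math

def clearance_sample_size(prevalence, confidence=0.95):
    """Samples needed so that, if at least a fraction `prevalence` of surface
    units were truly contaminated, at least one positive would be detected with
    the stated confidence (assumes no false negatives, a simplifying assumption)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

# e.g. detect 1% contamination with 95% confidence
print(clearance_sample_size(0.01, 0.95))   # -> 299 samples
```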
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presented a fault diagnosis method for key components of satellite, called Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters of state estimation. Then, through the actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of LIBs based on the state estimation of MSET, and then, through the residual values (RX and RL) of LIBs, we detected the anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example of AMM for LIBs, and, according to the results of AMM, we validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
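The anomaly-detection half of AMM can be illustrated with a generic SPRT on residuals between telemetry and a state estimate (the MSET state-estimation step is not reproduced here); the thresholds, noise levels, and drift magnitude below are illustrative.

```python
import numpy as np

def sprt(residuals, sigma, m0=0.0, m1=1.0, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test on state-estimation residuals.

    H0: residual mean = m0 (healthy), H1: residual mean = m1 (degraded).
    Returns (decision index, decision): +1 anomaly, -1 healthy, 0 undecided.
    """
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for i, r in enumerate(residuals):
        # Gaussian log-likelihood-ratio increment for a shift in the mean.
        llr += (m1 - m0) * (r - 0.5 * (m0 + m1)) / sigma ** 2
        if llr >= A:
            return i, +1
        if llr <= B:
            return i, -1
    return len(residuals) - 1, 0

# Toy usage: residuals between telemetry and an MSET-style state estimate.
rng = np.random.default_rng(3)
print("healthy segment:", sprt(rng.normal(0.0, 0.2, 200), sigma=0.2))
print("faulty segment :", sprt(rng.normal(0.8, 0.2, 50), sigma=0.2))
```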
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
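As a much-simplified illustration of the fitting problem (not the MSSB/SNLS machinery of the paper), the sketch below fits a few kinetic parameters of a basic target-cell-limited HIV model to synthetic log viral-load data by nonlinear least squares; all parameter values and the fixed/free split are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def hiv_model(t, y, lam, d, k, delta, N, c):
    """Basic target-cell-limited HIV model (a stand-in for the time-varying
    infection-rate model discussed in the paper)."""
    T, I, V = y
    return [lam - d * T - k * V * T,
            k * V * T - delta * I,
            N * delta * I - c * V]

def simulate(params, t_obs, y0):
    lam, d, k, delta, N, c = params
    sol = solve_ivp(hiv_model, (t_obs[0], t_obs[-1]), y0, t_eval=t_obs,
                    args=(lam, d, k, delta, N, c), rtol=1e-8)
    # Guard against tiny numerical negatives before taking the log.
    return np.log10(np.clip(sol.y[2], 1e-6, None))

# Synthetic "clinical" viral-load data (log10 copies/mL) from known parameters.
true = [1e4, 0.01, 2.4e-8, 1.0, 2000.0, 23.0]
t_obs = np.linspace(0, 30, 15)
y0 = [1e6, 0.0, 1e3]
v_obs = simulate(true, t_obs, y0) + 0.1 * np.random.default_rng(4).standard_normal(t_obs.size)

# Estimate a subset of kinetic parameters (k, delta, c), fixing the others.
def resid(theta):
    k, delta, c = theta
    return simulate([1e4, 0.01, k, delta, 2000.0, c], t_obs, y0) - v_obs

fit = least_squares(resid, x0=[1e-8, 0.5, 10.0],
                    bounds=([1e-10, 0.1, 1.0], [1e-6, 3.0, 50.0]))
print(fit.x)
```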
Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.
2016-01-01
There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
NASA Astrophysics Data System (ADS)
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use Markov Chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
Finite-size analysis of continuous-variable measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Xueying; Zhang, Yichen; Zhao, Yijia; Wang, Xiangyu; Yu, Song; Guo, Hong
2017-10-01
We study the impact of the finite-size effect on the continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocol, mainly considering the finite-size effect on the parameter estimation procedure. The central-limit theorem and maximum likelihood estimation theorem are used to estimate the parameters. We also analyze the relationship between the number of exchanged signals and the optimal modulation variance in the protocol. It is proved that when Charlie's position is close to Bob, the CV-MDI QKD protocol has the farthest transmission distance in the finite-size scenario. Finally, we discuss the impact of finite-size effects related to the practical detection in the CV-MDI QKD protocol. The overall results indicate that the finite-size effect has a great influence on the secret-key rate of the CV-MDI QKD protocol and should not be ignored.
1980-06-01
A two-sample version of the Cramér-von Mises statistic for right-censored data ... estimator for exponential distributions. KEY WORDS: Cramér-von Mises distance; Kaplan-Meier estimators; Right censorship; Scale parameter; ... Suppose that two positive random variables differ in distribution only by their scale parameters. That is, there exists a positive ...
USDA-ARS?s Scientific Manuscript database
Photosynthetic potential in C3 plants is largely limited by CO2 diffusion through stomata (Ls) and mesophyll (Lm) and photo-biochemical (Lb) processes. Accurate estimation of mesophyll conductance (gm) using gas exchange (GE) and chlorophyll fluorescence (CF) parameters of the photosynthetic proces...
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
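For reference, the classical point-estimate versions of the quantities treated Bayesianly here are straightforward to compute: D from a log-linear survivor-curve fit, and F0 as accumulated lethality over a temperature profile for a given z. A sketch with illustrative data:

```python
import numpy as np

def d_value(survivor_counts, times):
    """Estimate D (time for a 1-log reduction) from survivor-curve data via the
    log-linear fit log10(N) = log10(N0) - t / D."""
    slope, _ = np.polyfit(times, np.log10(survivor_counts), 1)
    return -1.0 / slope

def f0(temps_c, dt_minutes, z=10.0, t_ref=121.1):
    """Accumulated lethality F0 = sum(10**((T - Tref)/z) * dt) over a temperature profile."""
    temps_c = np.asarray(temps_c, dtype=float)
    return float(np.sum(10.0 ** ((temps_c - t_ref) / z)) * dt_minutes)

# Example: survivor curve at the reference temperature, and a short plateau cycle.
print(d_value([1e6, 1e5, 1e4, 1e3], times=[0, 1.5, 3.0, 4.5]))        # D ~ 1.5 min
profile = [100, 110, 118, 121, 121, 121, 121, 121, 118, 110]           # degrees C, 1-min steps
print(f0(profile, dt_minutes=1.0))
```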
[Atmospheric parameter estimation for LAMOST/GUOSHOUJING spectra].
Lu, Yu; Li, Xiang-Ru; Yang, Tan
2014-11-01
It is a key task to estimate the atmospheric parameters from observed stellar spectra in exploring the nature of stars and the universe. With the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST), which began its formal sky survey in September 2012, we are obtaining a large number of stellar spectra at an unprecedented rate. This brings both new opportunities and challenges for the study of galaxies. Due to the complexity of the observing system, the noise in the spectra is relatively large. At the same time, the preprocessing procedures, such as wavelength calibration and flux calibration, are also not ideal, so the spectra can be slightly distorted. These effects make it difficult to estimate the atmospheric parameters from the measured stellar spectra, and doing so for the massive spectral data set of LAMOST is one of the important issues. The key of this study is how to suppress noise and improve the accuracy and robustness of estimating the atmospheric parameters from the measured stellar spectra. We propose a regression model, SVM(lasso), for estimating the atmospheric parameters of LAMOST stellar spectra. The basic idea of this model is: first, we use the Haar wavelet to filter each spectrum, suppressing the adverse effects of spectral noise while retaining the most discriminative information; second, we use the lasso algorithm for feature selection, extracting the features most strongly correlated with the atmospheric parameters; finally, the selected features are input to a support vector regression model for estimating the parameters. Because the model has better tolerance to slight distortion and noise in the spectra, the accuracy of the measurement is improved. To evaluate the feasibility of the above scheme, we conduct extensive experiments on 33,963 pilot-survey spectra from LAMOST. The accuracies of the three atmospheric parameters are log Teff: 0.0068 dex, log g: 0.1551 dex, and [Fe/H]: 0.1040 dex.
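The three-stage pipeline (Haar wavelet filtering, lasso feature selection, support vector regression) can be sketched with standard libraries; the synthetic spectra, target values, and hyperparameters below are illustrative and not tuned to LAMOST data.

```python
import numpy as np
import pywt
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

def haar_features(flux, level=4):
    """Compress/denoise a spectrum with a Haar wavelet decomposition and keep
    the low-frequency approximation coefficients as features."""
    return pywt.wavedec(flux, "haar", level=level)[0]

# Synthetic stand-in for LAMOST spectra: each row is a flux vector, target is Teff (K).
rng = np.random.default_rng(5)
n_spec, n_pix = 400, 1024
spectra = rng.standard_normal((n_spec, n_pix)).cumsum(axis=1)     # smooth-ish curves
teff = 4500.0 + 50.0 * spectra[:, 100] + 10.0 * rng.standard_normal(n_spec)

X = np.array([haar_features(s) for s in spectra])
Xs = (X - X.mean(0)) / X.std(0)                                   # standardize features
ys = (teff - teff.mean()) / teff.std()

# Step 2: lasso keeps only the wavelet features strongly correlated with Teff.
lasso = Lasso(alpha=0.05).fit(Xs[:300], ys[:300])
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:                                            # safety net for this sketch
    selected = np.argsort(np.abs(np.corrcoef(Xs[:300].T, ys[:300])[-1, :-1]))[-5:]

# Step 3: support vector regression on the selected features.
svr = SVR(kernel="rbf", C=10.0).fit(Xs[:300][:, selected], ys[:300])
pred = svr.predict(Xs[300:][:, selected]) * teff.std() + teff.mean()
print("hold-out RMSE (K):", np.sqrt(np.mean((pred - teff[300:]) ** 2)))
```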
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles
Nam, Kanghyun
2015-01-01
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle’s cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data. PMID:26569246
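The recursive-least-squares step is generic and compact; the sketch below tracks a single cornering-stiffness-like parameter from a linear force-slip relation, with all vehicle numbers illustrative and the load-transfer-corrected tire model of the paper omitted.

```python
import numpy as np

class RLS:
    """Recursive least squares with a forgetting factor, for tracking a slowly
    varying parameter vector theta in y = phi . theta + noise."""
    def __init__(self, n, lam=0.98):
        self.theta = np.zeros(n)
        self.P = 1e3 * np.eye(n)
        self.lam = lam

    def update(self, phi, y):
        phi = np.atleast_1d(phi).astype(float)
        K = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + K * (y - phi @ self.theta)
        self.P = (self.P - np.outer(K, phi) @ self.P) / self.lam
        return self.theta

# Toy usage in the spirit of the paper: in the linear tire range the measured
# axle lateral force is roughly Fy = -C * alpha (C = cornering stiffness,
# alpha = slip angle), so with Fy measured by load-sensing hub bearings and
# alpha known, RLS can track C online. Values below are illustrative only.
rng = np.random.default_rng(6)
true_C = 8.0e4                                   # N/rad, hypothetical cornering stiffness
rls = RLS(n=1)
for _ in range(500):
    alpha = rng.uniform(-0.05, 0.05)             # rad
    Fy = -true_C * alpha + 50.0 * rng.standard_normal()
    theta = rls.update(alpha, Fy)
print("estimated cornering stiffness [N/rad]:", -theta[0])
```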
Key Parameters for Operator Diagnosis of BWR Plant Condition during a Severe Accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Dwight A.; Poore, III, Willis P.
2015-01-01
The objective of this research is to examine the key information needed from nuclear power plant instrumentation to guide severe accident management and mitigation for boiling water reactor (BWR) designs (specifically, a BWR/4-Mark I), estimate environmental conditions that the instrumentation will experience during a severe accident, and identify potential gaps in existing instrumentation that may require further research and development. This report notes the key parameters that instrumentation needs to measure to help operators respond to severe accidents. A follow-up report will assess severe accident environmental conditions as estimated by severe accident simulation model analysis for a specific US BWR/4-Mark I plant for those instrumentation systems considered most important for accident management purposes.
Research on Radar Micro-Doppler Feature Parameter Estimation of Propeller Aircraft
NASA Astrophysics Data System (ADS)
He, Zhihua; Tao, Feixiang; Duan, Jia; Luo, Jingsheng
2018-01-01
The micro-motion modulation effect of rotating propellers on the radar echo can be a steady feature for aircraft target recognition. Thus, micro-Doppler feature parameter estimation is key to accurate target recognition. In this paper, the radar echo of rotating propellers is modelled and simulated. Based on this model, the distribution characteristics of the micro-motion modulation energy in the time, frequency, and time-frequency domains are analyzed. The micro-motion modulation energy produced by the scattering points of rotating propellers is accumulated using the Inverse-Radon (I-Radon) transform, which accomplishes the estimation of the micro-motion modulation parameters. Finally, the proposed parameter estimation method is shown to be effective on measured data. The micro-motion parameters of aircraft can be used as features for radar target recognition.
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is the most widely used artificial intelligence algorithm at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP), designed a vehicle routing optimization model based on ACO, and developed a vehicle routing optimization simulation system in the C++ programming language; sensitivity analyses, estimations, and improvements of the three key parameters of ACO were then carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve the planning and optimization of VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
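A compact ACO sketch for a single-vehicle (TSP-like) routing instance makes the usual key parameters explicit: pheromone weight alpha, heuristic weight beta, and evaporation rate rho (plus the deposit constant Q). The instance and parameter values are illustrative; the paper's full multi-vehicle VRP model is not reproduced.

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iter=200, alpha=1.0, beta=3.0, rho=0.5, Q=100.0, seed=0):
    """Ant Colony Optimization sketch for a single-vehicle routing (TSP-like) tour."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    tau = np.ones((n, n))                          # pheromone levels
    eta = 1.0 / (dist + np.eye(n))                 # heuristic visibility (1/distance)
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                p = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                tour.append(rng.choice(cand, p=p / p.sum()))
                unvisited.remove(tour[-1])
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                         # evaporation
        for tour, length in tours:                 # deposit proportional to tour quality
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += Q / length
    return best_tour, best_len

# Toy instance: 12 random customer locations.
rng = np.random.default_rng(1)
pts = rng.random((12, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(aco_tsp(dist))
```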
State and Parameter Estimation for a Coupled Ocean--Atmosphere Model
NASA Astrophysics Data System (ADS)
Ghil, M.; Kondrashov, D.; Sun, C.
2006-12-01
The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
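The standard way to do joint state-parameter estimation with an EKF is to append the uncertain parameter to the state vector with a random-walk model; the sketch below does this for a scalar toy system standing in for the coupled ENSO model, with all values illustrative.

```python
import numpy as np

# Joint state-parameter estimation with an Extended Kalman Filter: the unknown
# coupling parameter mu is appended to the state and given a random-walk model.
rng = np.random.default_rng(7)
dt, n_steps, true_mu = 0.1, 400, 0.8

def f(x, mu):                                     # one step of the (toy) nonlinear dynamics
    return x + dt * (mu * x - x ** 3)

# Generate synthetic "SST" observations.
x, obs = 0.5, []
for _ in range(n_steps):
    x = f(x, true_mu) + 0.01 * rng.standard_normal()
    obs.append(x + 0.05 * rng.standard_normal())

# EKF over the augmented state z = [x, mu].
z = np.array([0.0, 0.3])                          # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-5])                         # small process noise on mu -> slow adaptation
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])
for y in obs:
    # Predict: Jacobian of [f(x, mu), mu] with respect to [x, mu].
    F = np.array([[1.0 + dt * (z[1] - 3 * z[0] ** 2), dt * z[0]],
                  [0.0, 1.0]])
    z = np.array([f(z[0], z[1]), z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar observation.
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
print("estimated mu:", z[1], "true mu:", true_mu)
```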
NASA Astrophysics Data System (ADS)
Duan, Chaowei; Zhan, Yafeng
2016-03-01
The output characteristics of a linear monostable system driven with a periodic signal and an additive white Gaussian noise are studied in this paper. Theoretical analysis shows that the output signal-to-noise ratio (SNR) decreases monotonically with increasing noise intensity but the output SNR gain is stable. Inspired by this high SNR-gain phenomenon, this paper applies the linear monostable system in a parameter estimation algorithm for phase shift keying (PSK) signals and improves the estimation performance.
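A quick simulation makes the claim concrete: for a first-order linear system the output SNR at the drive frequency falls as the noise intensity grows, while the SNR gain (output over input) stays roughly constant. All parameter values below are illustrative.

```python
import numpy as np

# First-order linear (monostable) system dx/dt = -a*x + A*sin(2*pi*f0*t) + xi(t).
rng = np.random.default_rng(8)
fs, T, a, f0, A = 1000.0, 50.0, 20.0, 5.0, 0.3
dt = 1.0 / fs
t = np.arange(0.0, T, dt)
drive = A * np.sin(2 * np.pi * f0 * t)

def snr_at(sig):
    """Spectral SNR: power in the f0 bin over the mean power of neighbouring bins."""
    X = np.abs(np.fft.rfft(sig)) ** 2
    k0 = int(round(f0 * T))
    background = np.mean(np.r_[X[k0 - 50:k0 - 2], X[k0 + 3:k0 + 51]])
    return X[k0] / background

for D in (0.1, 0.5, 1.0, 2.0):                       # noise intensity
    noise = np.sqrt(2 * D / dt) * rng.standard_normal(t.size)
    x = np.zeros(t.size)
    for k in range(t.size - 1):                      # Euler integration
        x[k + 1] = x[k] + dt * (-a * x[k] + drive[k] + noise[k])
    snr_in, snr_out = snr_at(drive + noise), snr_at(x)
    print(f"D={D:.1f}  SNR_out={snr_out:8.1f}  gain={snr_out / snr_in:.2f}")
```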
NASA Astrophysics Data System (ADS)
Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido
2016-04-01
In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, respectively, a single objective function is posed that measures moisture and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to parameters to identify evapotranspiration and drainage models and estimate water and energy balance flux components. The uncertainty of the estimated parameters (and associated statistical confidence limits) is obtained through the inverse of Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale region of Gourma in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and physiographic characteristics of the region. The focus is to find the functional form of the evaporative fraction dependence on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
In flight estimations of Cassini spacecraft inertia tensor and thruster magnitude
NASA Technical Reports Server (NTRS)
Feldman, Antonette; Lee, Allan Y.
2006-01-01
This paper describes two methods used by the Cassini Attitude Control team to determine these key parameters and how flight telemetry was used to estimate them. The method for estimating the spacecraft inertia tensor exploits the conservation of angular momentum during spacecraft slews under reaction wheel control.
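The momentum-conservation idea can be written as a linear least-squares problem: with no external torque, R_k (I w_k + h_k) equals a constant inertial angular momentum H0 at every telemetry sample k, which is linear in the six unique inertia elements and in H0. The sketch below solves this on synthetic, self-consistent data; the actual Cassini procedure and numerical values differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(9)
I_true = np.array([[4000.0, -50.0, 30.0],
                   [-50.0, 5500.0, -20.0],
                   [30.0, -20.0, 6500.0]])           # kg m^2, illustrative values
H0 = np.array([100.0, -50.0, 200.0])                 # N m s, inertial angular momentum (illustrative)

def M(w):
    """Matrix such that I @ w == M(w) @ [Ixx, Iyy, Izz, Ixy, Ixz, Iyz]."""
    return np.array([[w[0], 0, 0, w[1], w[2], 0],
                     [0, w[1], 0, w[0], 0, w[2]],
                     [0, 0, w[2], 0, w[0], w[1]]])

n = 200
mats = Rotation.random(n, random_state=1).as_matrix()   # body-to-inertial attitude samples
A_rows, b_rows = [], []
for Rm in mats:
    w = rng.uniform(-0.01, 0.01, 3)                     # body rates during the slew (rad/s)
    h = Rm.T @ H0 - I_true @ w                          # wheel momentum consistent with conservation
    w_meas = w + 1e-5 * rng.standard_normal(3)          # add measurement noise
    h_meas = h + 0.05 * rng.standard_normal(3)
    # R (M(w) theta_I) - H0 = -R h  ->  rows [R M(w), -I3], rhs -R h
    A_rows.append(np.hstack([Rm @ M(w_meas), -np.eye(3)]))
    b_rows.append(-Rm @ h_meas)
theta, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows), rcond=None)
print("estimated [Ixx Iyy Izz Ixy Ixz Iyz]:", theta[:6].round(1))
```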
Post-processing procedure for industrial quantum key distribution systems
NASA Astrophysics Data System (ADS)
Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey
2016-08-01
We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
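Two of the named steps can be illustrated generically: parameter estimation as a QBER estimate from a disclosed random subset, and privacy amplification as compression with a random Toeplitz hash. Error correction and authentication are omitted, and the key lengths and compression rule below are illustrative, not those of the industrial system.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(10)
n = 2048
alice = rng.integers(0, 2, n)
bob = alice ^ (rng.random(n) < 0.03)              # ~3% channel errors

# --- Parameter estimation: disclose a random subset of positions to estimate the QBER.
sample = rng.choice(n, size=256, replace=False)
qber = np.mean(alice[sample] != bob[sample])
keep = np.setdiff1d(np.arange(n), sample)         # disclosed bits are discarded
key = alice[keep]                                 # after error correction, Alice's and Bob's strings agree

# --- Privacy amplification: compress with a random binary Toeplitz hash; the
# output length below is a crude illustrative rule, not a security bound.
m = int(len(key) * (1.0 - 5.0 * qber))
seed = rng.integers(0, 2, m + len(key) - 1)
T = toeplitz(seed[:m], seed[m - 1:])              # m x len(key) random Toeplitz matrix
final_key = T.dot(key) % 2
print(f"QBER ~ {qber:.3f}: {len(key)} sifted bits -> {len(final_key)} final bits")
```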
High-Speed Quantum Key Distribution Using Photonic Integrated Circuits
2013-01-01
protocol [14] that uses energy-time entanglement of pairs of photons. We are employing the QPIC architecture to implement a novel high-dimensional disper... continuous Hilbert spaces using measures of the covariance matrix. Although we focus the discussion on a scheme employing entangled photon pairs... is the probability that parameter estimation fails [20]. The parameter ε̄ accounts for the accuracy of estimating the smooth min-entropy, which
On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.
1992-01-01
We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.
Estimation of Key Parameters of the Coupled Energy and Water Model by Assimilating Land Surface Data
NASA Astrophysics Data System (ADS)
Abdolghafoorian, A.; Farhadi, L.
2017-12-01
Accurate estimation of land surface heat and moisture fluxes, as well as root zone soil moisture, is crucial in various hydrological, meteorological, and agricultural applications. Field measurements of these fluxes are costly and cannot be readily scaled to the large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state observations that are widely available from remote sensing across a range of scales. In this work, we apply a variational data assimilation approach (hereafter the VDA model) to estimate land surface fluxes and the soil moisture profile from the implicit information contained in Land Surface Temperature (LST) and Soil Moisture (SM) observations. The VDA model is focused on the estimation of three key parameters: (1) the neutral bulk heat transfer coefficient (CHN), (2) the evaporative fraction from soil and canopy (EF), and (3) the saturated hydraulic conductivity (Ksat). CHN and EF regulate the partitioning of available energy between sensible and latent heat fluxes. Ksat is one of the main parameters used in determining infiltration, runoff, and groundwater recharge, and in simulating hydrological processes. In this study, a system of coupled parsimonious energy and water models constrains the estimation of the three unknown parameters in the VDA model. The profile of SM (LST) at multiple depths is estimated using the moisture (heat) diffusion equation. The uncertainties of the retrieved unknown parameters and fluxes are estimated from the inverse of the Hessian matrix of the cost function, which is computed using the Lagrangian methodology. This uncertainty analysis provides valuable information about the accuracy of the estimated parameters and their correlation, and guides the formulation of a well-posed estimation problem. The results of the proposed algorithm are validated with a series of experiments using a synthetic data set generated by the Simultaneous Heat and Water (SHAW) model. In addition, the feasibility of extending this algorithm to remote sensing observations with low temporal resolution is examined by assimilating a limited number of land surface moisture and temperature observations.
Quantum hacking: Saturation attack on practical continuous-variable quantum key distribution
NASA Astrophysics Data System (ADS)
Qin, Hao; Kumar, Rupesh; Alléaume, Romain
2016-07-01
We identify and study a security loophole in continuous-variable quantum key distribution (CVQKD) implementations, related to the imperfect linearity of the homodyne detector. By exploiting this loophole, we propose an active side-channel attack on the Gaussian-modulated coherent-state CVQKD protocol combining an intercept-resend attack with an induced saturation of the homodyne detection on the receiver side (Bob). We show that an attacker can bias the excess noise estimation by displacing the quadratures of the coherent states received by Bob. We propose a saturation model that matches experimental measurements on the homodyne detection and use this model to study the impact of the saturation attack on parameter estimation in CVQKD. We demonstrate that this attack can bias the excess noise estimation beyond the null key threshold for any system parameter, thus leading to a full security break. If we consider an additional criterion imposing that the channel transmission estimation should not be affected by the attack, then the saturation attack can only be launched if the attenuation on the quantum channel is sufficient, corresponding to attenuations larger than approximately 6 dB. We moreover discuss the possible countermeasures against the saturation attack and propose a countermeasure based on Gaussian postselection that can be implemented by classical postprocessing and may allow one to distill the secret key when the raw measurement data are partly saturated.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus in this investigation is put on the de-correlation of different geodetic parameter groups due to the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy how to obtain reliable estimates of station coordinates, EOP, orbit parameter and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
NASA Astrophysics Data System (ADS)
Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona
2015-04-01
Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance to extend our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. Earth Observation (EO) technology provides an economically feasible solution to derive continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET as well as SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards the development of a better understanding of EO technology's present ability to derive operational estimates of surface fluxes and SM. Moreover, the project aims at addressing knowledge gaps related to the operational estimation of such parameters, and thus contributes to ongoing global efforts towards enhancing the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the 1 year duration of the project's implementation. Subsequently, we present the initial results of the work carried out herein, in particular an all-inclusive and robust evaluation of the accuracy of existing operational products of ET and SM from different ecosystems globally. The research outcomes of this project, once completed, will provide an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project's results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments which were launched recently or are planned for launch in the next 1-2 years. Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of several adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup
2014-01-01
National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...
Parametric study of helicopter aircraft systems costs and weights
NASA Technical Reports Server (NTRS)
Beltramo, M. N.
1980-01-01
Weight estimating relationships (WERs) and recurring production cost estimating relationships (CERs) were developed for helicopters at the system level. The WERs estimate system-level weight based on performance or design characteristics which are available during concept formulation or the preliminary design phase. The CER (or CERs in some cases) for each system utilizes weight (either actual or estimated using the appropriate WER) and production quantity as the key parameters.
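The report's actual WERs and CERs are not reproduced here; as a rough illustration of how such a system-level relationship can be fitted, the sketch below estimates a hypothetical power-law cost model C = a·W^b·Q^c (weight W, production quantity Q) from invented data by least squares in log space. All numbers are placeholders, not values from the report.

```python
import numpy as np

# Hypothetical system-level data: weight (lb), production quantity, recurring cost (k$).
W = np.array([250., 400., 610., 900., 1300.])
Q = np.array([50., 120., 200., 350., 500.])
C = np.array([310., 520., 700., 980., 1400.])

# Fit C = a * W**b * Q**c via linear least squares on log C = log a + b log W + c log Q.
X = np.column_stack([np.ones_like(W), np.log(W), np.log(Q)])
coef, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"CER: cost ~ {a:.2f} * W^{b:.2f} * Q^{c:.2f}")

# Use the fitted CER to estimate the cost of a new design from its (estimated) weight.
print("predicted cost:", a * 800.0**b * 300.0**c)
```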
From LCAs to simplified models: a generic methodology applied to wind power electricity.
Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle
2013-02-05
This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse Gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
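As a hedged illustration of the kind of simplified model described (GHG performance as a function of load factor and lifetime), the sketch below spreads an assumed fixed life-cycle emission total over the electricity produced during the turbine's life. The rated power and emission figures are placeholders, not values from the study.

```python
def ghg_per_kwh(load_factor, lifetime_yr,
                rated_power_kw=2000.0, lifecycle_ghg_kg=3.0e6):
    """Illustrative simplified model: fixed life-cycle emissions divided by
    lifetime electricity production, in g CO2-eq per kWh (assumed inputs)."""
    produced_kwh = rated_power_kw * 8760.0 * load_factor * lifetime_yr
    return lifecycle_ghg_kg / produced_kwh * 1000.0

for lf in (0.20, 0.25, 0.30):
    for life in (20, 25, 30):
        print(f"load factor {lf:.2f}, lifetime {life} yr:"
              f" {ghg_per_kwh(lf, life):.1f} g CO2-eq/kWh")
```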
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
Regularized Semiparametric Estimation for Ordinary Differential Equations
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-01-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. A key interest is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that, even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constant within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639
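The regularized time-varying estimator itself is not reproduced here; the sketch below only illustrates the underlying constant-parameter ODE estimation problem on a synthetic Lotka-Volterra (hare-lynx style) system, fitting the parameters by nonlinear least squares with SciPy. All data, parameter values and starting points are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def lotka_volterra(t, y, a, b, c, d):
    prey, pred = y
    return [a * prey - b * prey * pred, c * prey * pred - d * pred]

# Synthetic "hare-lynx"-style observations from known parameters plus noise.
true = (0.6, 0.025, 0.005, 0.4)
t_obs = np.linspace(0, 30, 60)
sol = solve_ivp(lotka_volterra, (0, 30), [30., 4.], t_eval=t_obs, args=true)
rng = np.random.default_rng(0)
y_obs = sol.y + rng.normal(scale=0.5, size=sol.y.shape)

def residuals(theta):
    # Residuals between the model trajectory for trial parameters and the data.
    s = solve_ivp(lotka_volterra, (0, 30), [30., 4.], t_eval=t_obs, args=tuple(theta))
    return (s.y - y_obs).ravel()

fit = least_squares(residuals, x0=[0.5, 0.02, 0.004, 0.3], bounds=(0, np.inf))
print("estimated parameters:", fit.x)
```

The paper's method goes further by letting the parameters vary with time under a regularization penalty, which this constant-parameter sketch does not attempt.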
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of number, type and location of data.
Finite-size analysis of a continuous-variable quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grosshans, Frederic; Grangier, Philippe
2010-06-15
The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples of the WVD in the time-frequency domain, without resorting to a bank of filters, each one matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line where the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
Parameter Estimation for Viscoplastic Material Modeling
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.
1997-01-01
A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. Recent viscoplastic models of this type are not only increasingly complex, but often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.
System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.
2011-01-01
Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component of these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
NASA Astrophysics Data System (ADS)
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as the biomarkers of transitions from the normal (interictal) activity to the seizure (ictal) activity. In addition, the PSO algorithm shows capabilities of accurate estimation for the time evolution of key model parameters and practical detection for all the epileptic spikes. The estimation effects of unmeasurable parameters are improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by adopting the proportional-integral controller. Besides, numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as the potential value for the model-based early seizure detection and closed-loop control treatment design.
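The following is a minimal, generic particle swarm optimization sketch for parameter estimation. It fits two parameters of a toy damped-sine signal rather than the neural mass model used in the paper, and the inertia and acceleration coefficients are common textbook values, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(t, freq, decay):
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

t = np.linspace(0, 5, 200)
observed = model(t, 1.3, 0.4) + rng.normal(scale=0.05, size=t.size)

def cost(theta):
    return np.sum((model(t, *theta) - observed) ** 2)

# Minimal particle swarm optimization over (freq, decay).
n_particles, n_iter = 30, 100
lo, hi = np.array([0.1, 0.0]), np.array([3.0, 2.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("estimated (freq, decay):", gbest)
```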
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. A key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
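The DREAM(zs) sampler used for the parameter estimation above is not reproduced here; as a simplified stand-in, the sketch below calibrates two parameters of a Q10-type respiration model against synthetic observations with a plain random-walk Metropolis sampler. The model form, priors and noise level are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic respiration observations from a Q10-type model (illustrative only).
T = rng.uniform(0, 30, 100)                      # air temperature (deg C)
def respiration(T, R10, Q10):
    return R10 * Q10 ** ((T - 10.0) / 10.0)
obs = respiration(T, 2.0, 2.3) + rng.normal(scale=0.3, size=T.size)

def log_posterior(theta):
    R10, Q10 = theta
    if not (0 < R10 < 10 and 1 < Q10 < 5):       # flat prior with bounds
        return -np.inf
    resid = obs - respiration(T, R10, Q10)
    return -0.5 * np.sum(resid ** 2) / 0.3 ** 2  # Gaussian likelihood, known sigma

# Random-walk Metropolis sampling of (R10, Q10).
n_steps, theta = 20000, np.array([1.0, 2.0])
lp = log_posterior(theta)
chain = np.empty((n_steps, 2))
for i in range(n_steps):
    prop = theta + rng.normal(scale=[0.05, 0.05])
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

burn = chain[5000:]
print("posterior mean:", burn.mean(axis=0))
print("posterior std: ", burn.std(axis=0))
```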
3D Reconstruction and Approximation of Vegetation Geometry for Modeling of Within-canopy Flows
NASA Astrophysics Data System (ADS)
Henderson, S. M.; Lynn, K.; Lienard, J.; Strigul, N.; Mullarney, J. C.; Norris, B. K.; Bryan, K. R.
2016-02-01
Aquatic vegetation can shelter coastlines from waves and currents, sometimes resulting in accretion of fine sediments. We developed a photogrammetric technique for estimating the key geometric vegetation parameters that are required for modeling of within-canopy flows. Accurate estimates of vegetation geometry and density are essential to refine hydrodynamic models, but accurate, convenient, and time-efficient methodologies for measuring complex canopy geometries have been lacking. The novel approach presented here builds on recent progress in photogrammetry and computer vision. We analyzed the geometry of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Although comparatively thin, pneumatophores are more numerous than mangrove trunks, and thus influence near-bed flow and sediment transport. Quadrats (1 m2) were placed at low tide among pneumatophores. Roots were counted and measured for height and diameter. Photos were taken from multiple angles around each quadrat. Relative camera locations and orientations were estimated from key features identified in multiple images using open-source software (VisualSfM). Next, a dense 3D point cloud was produced. Finally, algorithms were developed for automated estimation of pneumatophore geometry from the 3D point cloud. We found good agreement between hand-measured and photogrammetric estimates of key geometric parameters, including mean stem diameter, total number of stems, and frontal area density. These methods can reduce time spent measuring in the field, thereby enabling future studies to refine models of water flows and sediment transport within heterogeneous vegetation canopies.
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
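The optimal tuner Kalman filter itself is engine-model specific and not reproduced here; the sketch below only illustrates the general idea of estimating an unmeasured quantity by state augmentation, using a linear Kalman filter that tracks a toy first-order plant together with an unknown constant sensor offset. All dynamics and noise values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Truth: first-order plant with an unmeasured constant offset in the sensor.
a, b, true_bias = 0.95, 0.1, 1.5
n = 300
u = np.ones(n)                                # constant actuator command
x_true = np.zeros(n)
for k in range(1, n):
    x_true[k] = a * x_true[k - 1] + b * u[k - 1] + rng.normal(scale=0.01)
y = x_true + true_bias + rng.normal(scale=0.05, size=n)

# Augmented state z = [x, bias]; the bias is modelled as a (nearly) constant random walk.
F = np.array([[a, 0.0], [0.0, 1.0]])
B = np.array([[b], [0.0]])
H = np.array([[1.0, 1.0]])
Q = np.diag([1e-4, 1e-8])
R = np.array([[0.05 ** 2]])

z = np.zeros((2, 1))
P = np.eye(2)
for k in range(n):
    # Predict
    z = F @ z + B * u[k]
    P = F @ P @ F.T + Q
    # Update with the biased measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (y[k] - H @ z)
    P = (np.eye(2) - K @ H) @ P

print("estimated [state, bias]:", z.ravel(), "(true bias:", true_bias, ")")
```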
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2015-01-01
This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (CMAPSS40,000) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measurable. Estimating the unknown parameters allows for tighter control over these parameters and over the level of risk at which the engine operates. This allows the engine to achieve better performance than is possible when operating to more conservative limits on a related, measurable parameter.
Estimating unknown parameters in haemophilia using expert judgement elicitation.
Fischer, K; Lewandowski, D; Janssen, M P
2013-09-01
The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life-expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning the estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.
El Allaki, Farouk; Harrington, Noel; Howden, Krista
2016-11-01
The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters with key influence on the SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates; the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066) respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory. Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
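A hedged, structurally simplified Monte Carlo sketch of a slaughter-surveillance-style component sensitivity is given below: detection of one infected animal is modelled as inspection detection times laboratory submission times laboratory confirmation, with Beta-distributed node probabilities. The distributions, slaughter numbers and most other inputs are placeholders (only the design prevalence echoes the 0.00028% quoted above), so this is not the published model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_iter = 10000

# Illustrative inputs (not the published values).
n_cattle_slaughtered = 3_000_000
design_prevalence = 0.0000028          # 0.00028%, as quoted in the abstract

# Uncertain node probabilities drawn from Beta distributions (illustrative shapes).
p_granuloma_detected = rng.beta(20, 10, n_iter)    # detected at inspection
p_sample_submitted   = rng.beta(15, 5, n_iter)     # sample sent to the lab
p_lab_confirms       = rng.beta(50, 2, n_iter)     # lab confirms M. bovis

# Probability that one infected, slaughtered animal is detected.
p_detect_one = p_granuloma_detected * p_sample_submitted * p_lab_confirms

# Component sensitivity: at least one detection among the expected infected animals.
n_infected = n_cattle_slaughtered * design_prevalence
comp_se = 1.0 - (1.0 - p_detect_one) ** n_infected

print("median component sensitivity:", np.median(comp_se))
print("95% probability interval:", np.percentile(comp_se, [2.5, 97.5]))
```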
Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa
NASA Astrophysics Data System (ADS)
Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.
2014-12-01
Evaluating performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations of photosynthesis are amongst the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis, namely stomatal response to vapor pressure deficit, the maximum carboxylation rate by Rubisco (Ac), the rate of photosynthetic electron transport (Aj) and triose phosphate use (At), vary between four Brassica rapa genotypes. Leaf gas exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate limiting factors for plant growth. A Bayesian framework integrated in TREES here used net A as the target to estimate the four limiting factors for each genotype. As a first step the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of model estimation eliminated. Next, parameter estimation facilitated the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included the maximum carboxylation rate (Vcmax), quantum yield (ϕJ), the ratio between Vcmax and electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown not to play a major role in limiting A in many plants, the inclusion of At in models was evaluated using the deviance information criterion (DIC). The outlier detection resulted in a narrowing of the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show genotypes vary in how limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2-223.9 μmol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463-0.497 μmol m-2 s-1. The added complexity of the TPU limitation did not improve model performance in the genotypes assessed based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis, genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.
Ba, Kamarel; Thiaw, Modou; Lazar, Najih; Sarr, Alassane; Brochier, Timothée; Ndiaye, Ismaïla; Faye, Alioune; Sadio, Oumar; Panfili, Jacques; Thiaw, Omar Thiom; Brehmer, Patrice
2016-01-01
The stock of the Senegalese flat sardinella, Sardinella maderensis, is highly exploited in Senegal, West Africa. Its growth and reproduction parameters are key biological indicators for improving fisheries management. This study reviewed these parameters using landing data from small-scale fisheries in Senegal and literature information dating back more than 25 years. Age was estimated using length-frequency data to calculate growth parameters and assess the growth performance index. With global climate change there has been an increase in the average sea surface temperature along the Senegalese coast, but the length-weight parameters, sex ratio, size at first sexual maturity, period of reproduction and condition factor of S. maderensis have not changed significantly. The above parameters of S. maderensis have hardly changed, despite high exploitation and fluctuations in environmental conditions that affect the early development phases of small pelagic fish in West Africa. This lack of plasticity of the species regarding the biological parameters studied should be considered when developing relevant fishery management plans.
CosmoSIS: Modular cosmological parameter estimation
Zuntz, J.; Paterno, M.; Jennings, E.; ...
2015-06-09
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which forms the basis for quantifying the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling for the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique with an expansion to compute the Hessian acting on a model perturbation to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian, and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. The investigation is done on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
NASA Astrophysics Data System (ADS)
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
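astroABC's SMC machinery (adaptive tolerances, MPI groups, perturbation kernels) is not shown here; the sketch below illustrates only the basic likelihood-free idea with a plain ABC rejection sampler on a toy Gaussian problem, where parameter draws are kept when a forward simulation's summary statistics fall within a tolerance of the observed summaries. All data and prior ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# "Observed" data from a hidden Gaussian (mu=1.2, sigma=0.8).
observed = rng.normal(1.2, 0.8, size=500)
obs_summary = np.array([observed.mean(), observed.std()])

def simulate(mu, sigma):
    # Forward model: generate a dataset and reduce it to summary statistics.
    sim = rng.normal(mu, sigma, size=500)
    return np.array([sim.mean(), sim.std()])

def distance(s1, s2):
    return np.sqrt(np.sum((s1 - s2) ** 2))

# ABC rejection: draw from the prior, keep draws whose simulations are close to the data.
n_draws, tolerance = 50000, 0.1
mu_prior = rng.uniform(-5, 5, n_draws)
sigma_prior = rng.uniform(0.1, 3, n_draws)
accepted = [(m, s) for m, s in zip(mu_prior, sigma_prior)
            if distance(simulate(m, s), obs_summary) < tolerance]

post = np.array(accepted)
print(f"accepted {len(post)} of {n_draws}")
print("approximate posterior means (mu, sigma):", post.mean(axis=0))
```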
NASA Astrophysics Data System (ADS)
Bauerle, William L.; Daniels, Alex B.; Barnard, David M.
2014-05-01
Sensitivity of carbon uptake and water use estimates to changes in physiology was determined with a coupled photosynthesis and stomatal conductance (gs) model, linked to canopy microclimate with a spatially explicit scheme (MAESTRA). The sensitivity analyses were conducted over the range of intraspecific physiology parameter variation observed for Acer rubrum L. and temperate hardwood C3 vegetation across the following climate conditions: carbon dioxide concentration 200-700 ppm, photosynthetically active radiation 50-2,000 μmol m-2 s-1, air temperature 5-40 °C, relative humidity 5-95 %, and wind speed at the top of the canopy 1-10 m s-1. Five key physiological inputs [quantum yield of electron transport (α), minimum stomatal conductance (g0), stomatal sensitivity to the marginal water cost of carbon gain (g1), maximum rate of electron transport (Jmax), and maximum carboxylation rate of Rubisco (Vcmax)] changed carbon and water flux estimates ≥15 % in response to climate gradients; variation in α, Jmax, and Vcmax input resulted in up to ~50 and 82 % intraspecific and C3 photosynthesis estimate output differences respectively. Transpiration estimates were affected up to ~46 and 147 % by differences in intraspecific and C3 g1 and g0 values, two parameters previously overlooked in modeling land-atmosphere carbon and water exchange. We show that a variable environment, within a canopy or along a climate gradient, changes the spatial parameter effects of g0, g1, α, Jmax, and Vcmax in photosynthesis-gs models. Since variation in physiology parameter input effects are dependent on climate, this approach can be used to assess the geographical importance of key physiology model inputs when estimating large scale carbon and water exchange.
Cove, Michael V.; Gardner, Beth; Simons, Theodore R.; Kays, Roland; O'Connell, Allan F.
2017-01-01
Feral and free-ranging domestic cats (Felis catus) can have strong negative effects on small mammals and birds, particularly in island ecosystems. We deployed camera traps to study free-ranging cats in national wildlife refuges and state parks on Big Pine Key and Key Largo in the Florida Keys, USA, and used spatial capture–recapture models to estimate cat abundance, movement, and activities. We also used stable isotope analyses to examine the diet of cats captured on public lands. Top population models separated cats based on differences in movement and detection with three and two latent groups on Big Pine Key and Key Largo, respectively. We hypothesize that these latent groups represent feral, semi-feral, and indoor/outdoor house cats based on the estimated movement parameters of each group. Estimated cat densities and activity varied between the two islands, with relatively high densities (~4 cats/km2) exhibiting crepuscular diel patterns on Big Pine Key and lower densities (~1 cat/km2) exhibiting nocturnal diel patterns on Key Largo. These differences are most likely related to the higher proportion of house cats on Big Pine relative to Key Largo. Carbon and nitrogen isotope ratios from hair samples of free-ranging cats (n = 43) provided estimates of the proportion of wild and anthropogenic foods in cat diets. At the population level, cats on both islands consumed mostly anthropogenic foods (>80% of the diet), but eight individuals were effective predators of wildlife (>50% of the diet). We provide evidence that cat groups within a population move different distances, exhibit different activity patterns, and that individuals consume wildlife at different rates, which all have implications for managing this invasive predator.
Geothermal Life Cycle Calculator
Sullivan, John
2014-03-11
This calculator is a handy tool for interested parties to estimate two key life cycle metrics, fossil energy consumption (Etot) and greenhouse gas emission (ghgtot) ratios, for geothermal electric power production. It is based solely on data developed by Argonne National Laboratory for DOE’s Geothermal Technologies office. The calculator permits the user to explore the impact of a range of key geothermal power production parameters, including plant capacity, lifetime, capacity factor, geothermal technology, well numbers and depths, field exploration, and others on the two metrics just mentioned. Estimates of variations in the results are also available to the user.
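The calculator's underlying Argonne data are not reproduced here; the sketch below only shows, with placeholder numbers, how the two ratio metrics can be formed by dividing assumed life-cycle fossil energy and GHG totals by the electricity delivered over the plant's life.

```python
def geothermal_lifecycle_ratios(plant_mw, capacity_factor, lifetime_yr,
                                plant_fossil_energy_mj, plant_ghg_kg):
    """Illustrative life-cycle ratios: fossil energy in, and GHG out,
    per unit of electricity delivered over the plant lifetime (assumed inputs)."""
    kwh_delivered = plant_mw * 1000.0 * 8760.0 * capacity_factor * lifetime_yr
    etot = plant_fossil_energy_mj / (kwh_delivered * 3.6)   # MJ fossil per MJ electric
    ghgtot = plant_ghg_kg * 1000.0 / kwh_delivered          # g CO2-eq per kWh
    return etot, ghgtot

# Hypothetical 50 MW plant, 30-year life, 90% capacity factor, invented totals.
print(geothermal_lifecycle_ratios(50, 0.9, 30, 6.0e8, 5.0e7))
```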
NASA Astrophysics Data System (ADS)
Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.
2005-05-01
The El Niño-Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki
2006-01-01
We propose an automatic construction method of the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how to choose the values of parameters and how to set the network structure. Usually, we tune these unknown factors empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of the data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in the simulation model are treated as parameters of the state space model, and their estimates are obtained as maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides us a way of controlling overfitting during the parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
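The full Markov-modulated Poisson likelihood is beyond a short sketch; the example below illustrates the same likelihood-plus-BIC logic on a deliberately simplified problem, comparing a single-rate Poisson description of synthetic photon arrivals against a two-rate (one changepoint) alternative. The rates, durations and changepoint grid are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate photon arrival times: rate 20/s for 5 s, then 60/s for 5 s (illustrative).
t1 = np.cumsum(rng.exponential(1 / 20, 200)); t1 = t1[t1 < 5]
t2 = 5 + np.cumsum(rng.exponential(1 / 60, 600)); t2 = t2[t2 < 10]
arrivals, T = np.concatenate([t1, t2]), 10.0

def poisson_loglik(n, duration):
    lam = n / duration                    # MLE of the rate in this segment
    return n * np.log(lam) - lam * duration

# Model 1: single rate over [0, T].
ll1 = poisson_loglik(arrivals.size, T)
bic1 = -2 * ll1 + 1 * np.log(arrivals.size)

# Model 2: one changepoint, two rates; profile the likelihood over candidate changepoints.
candidates = np.linspace(0.5, 9.5, 91)
ll2 = max(poisson_loglik((arrivals < c).sum(), c)
          + poisson_loglik((arrivals >= c).sum(), T - c) for c in candidates)
bic2 = -2 * ll2 + 3 * np.log(arrivals.size)

print(f"BIC one-rate: {bic1:.1f}, BIC two-rate: {bic2:.1f}")
print("BIC selects:", "two-rate" if bic2 < bic1 else "one-rate")
```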
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values, and predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) present an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable to estimate hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and are an essential step for the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
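The published estimation models and their pump-intrinsic predictors are not reproduced here; the sketch below shows the generic approach with synthetic data, fitting a linear regression from assumed motor-current and piston-amplitude features to a mean-afterload target by least squares. Coefficients and ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic training data: pump-intrinsic features vs. mean aortic pressure (all assumed).
n = 200
motor_current = rng.uniform(0.5, 2.0, n)       # A
piston_amplitude = rng.uniform(6.0, 12.0, n)   # mm
aop_mean = 40 + 25 * motor_current + 2.5 * piston_amplitude + rng.normal(0, 2, n)

# Fit a linear estimation model AoP_mean ~ b0 + b1*I + b2*x by least squares.
X = np.column_stack([np.ones(n), motor_current, piston_amplitude])
beta, *_ = np.linalg.lstsq(X, aop_mean, rcond=None)
print("coefficients:", beta)

# Predict afterload for a new operating point.
new = np.array([1.0, 1.4, 9.0])
print("predicted mean aortic pressure (mmHg):", new @ beta)
```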
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.
2014-12-01
In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of the BEPS with the default parameter values. These results have implications for using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2018-06-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2017-11-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
Evaluation of groundwater resources requires the knowledge of the capacity of aquifers to store and transmit ground water. This requires estimates of key hydraulic parameters, such as the transmissivity, among others. The transmissivity T (m2/sec) is a hydrauli...
Ming, Y; Peiwen, Q
2001-03-01
Understanding ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency and preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With the contact model of a distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. Then the performance measures of steady rotation speed and stall torque are deduced. Using MATLAB and an iterative algorithm, we estimate rotation speed and stall torque versus the respective input parameters. The same experiments are completed with an optoelectronic tachometer and stand weight. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources and are limited in transmission distance under realistic experimental conditions, e.g., considering the finite-size-key effects. Hence in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources for the three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both the collective constraints and joint parameter estimation techniques. Compared with former schemes with WCS or HSPS, after implementing full parameter optimizations, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle matching is followed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
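The paper's estimator is a stereo, optical-flow-plus-depth least-squares formulation; the sketch below shows only a related monocular pipeline built from standard OpenCV calls (KLT feature tracking, RANSAC essential-matrix estimation, pose recovery). The intrinsic matrix and frame file names are placeholders, and without stereo depth the translation is recovered only up to scale.

```python
import cv2
import numpy as np

# Placeholder intrinsics and image pair (replace with calibrated values / real frames).
K = np.array([[718.856, 0.0, 607.19], [0.0, 718.856, 185.2], [0.0, 0.0, 1.0]])
img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# KLT: detect corners in the first frame and track them into the second.
pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=2000, qualityLevel=0.01, minDistance=7)
pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
good0 = pts0[status.ravel() == 1]
good1 = pts1[status.ravel() == 1]

# RANSAC essential-matrix estimation rejects outliers (mismatches, moving objects).
E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)

print("rotation:\n", R)
print("unit translation (scale unobservable without stereo/depth):\n", t.ravel())
```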
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera
Ci, Wenyan; Huang, Yingping
2016-01-01
Visual odometry estimates the ego-motion of an agent (e.g., vehicle and robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle matching is followed to remove the outliers caused by the mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508
Hone, J.; Pech, R.; Yip, P.
1992-01-01
Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. Estimating whether establishment will occur requires either extensive experience or a mathematical model of disease dynamics together with estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed, and sources of bias and alternative methods are discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476
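As a rough illustration of the establishment criterion above (not the authors' estimation procedure), a minimal SIR-style calculation of the basic reproductive rate; beta, N and gamma are assumed illustrative values.

```python
# Minimal sketch: SIR-style check of disease establishment. beta is the
# transmission coefficient, N the host population size, and 1/gamma the mean
# infectious period; R0 >= 1 implies the disease can establish.
def basic_reproductive_rate(beta, N, gamma):
    return beta * N / gamma

R0 = basic_reproductive_rate(beta=0.0005, N=500, gamma=0.1)  # illustrative values
print(f"R0 = {R0:.2f} -> establishment expected: {R0 >= 1}")
```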
NASA Astrophysics Data System (ADS)
Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano
2018-05-01
We present a rigorous security analysis of continuous-variable measurement-device-independent quantum key distribution (CV MDI QKD) in a finite-size scenario. The security proof is obtained in two steps: by first assessing the security against collective Gaussian attacks, and then extending to the most general class of coherent attacks via the Gaussian de Finetti reduction. Our result combines recent state-of-the-art security proofs for CV QKD with findings about min-entropy calculus and parameter estimation. In doing so, we improve the finite-size estimate of the secret key rate. Our conclusions confirm that CV MDI protocols allow for high rates on the metropolitan scale, and may achieve a nonzero secret key rate against the most general class of coherent attacks after 10^7-10^9 quantum signal transmissions, depending on loss and noise, and on the required level of security.
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. Here, in this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound can provide good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to the choice of prior on p and the substantially different estimates of abundance that result.
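A hedged sketch of the N-mixture likelihood described above, with abundance mixed over a Poisson prior and a binomial detection model; the truncation bound K and the toy counts are illustrative assumptions.

```python
# Hedged sketch of the N-mixture likelihood: site abundances N_i ~ Poisson(lam),
# counts y_it ~ Binomial(N_i, p); N_i is summed out up to a truncation bound K.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(theta, y, K=100):
    lam, p = np.exp(theta[0]), expit(theta[1])     # enforce lam > 0, 0 < p < 1
    N = np.arange(K + 1)
    prior = poisson.pmf(N, lam)                    # mixing distribution over N
    nll = 0.0
    for counts in y:                               # y: list of per-site count vectors
        lik_N = prior.copy()
        for c in counts:                           # detection model per visit
            lik_N *= binom.pmf(c, N, p)
        nll -= np.log(lik_N.sum() + 1e-300)
    return nll

y = [[2, 3, 1], [0, 1, 0], [5, 4, 6]]              # toy spatially replicated counts
fit = minimize(neg_log_lik, x0=[np.log(3.0), 0.0], args=(y,), method='Nelder-Mead')
lam_hat, p_hat = np.exp(fit.x[0]), expit(fit.x[1])
```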
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription for regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
A cooperative strategy for parameter estimation in large scale systems biology models.
Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R
2012-06-22
Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
NASA Astrophysics Data System (ADS)
Qing, Chun; Wu, Xiaoqing; Li, Xuebin; Tian, Qiguo; Liu, Dong; Rao, Ruizhong; Zhu, Wenyue
2018-01-01
In this paper, we introduce an approach wherein the Weather Research and Forecasting (WRF) model is coupled with the bulk aerodynamic method to estimate the surface layer refractive index structure constant (C_n^2) above Taishan Station in Antarctica. First, we use the measured meteorological parameters to estimate C_n^2 using the bulk aerodynamic method, and second, we use the WRF model output parameters to estimate C_n^2 using the bulk aerodynamic method. Finally, the corresponding C_n^2 values from the micro-thermometer are compared with the C_n^2 values estimated using the WRF model coupled with the bulk aerodynamic method. We analyzed four statistical operators: the bias, the root mean square error (RMSE), the bias-corrected RMSE (σ), and the correlation coefficient (R_xy), over a 20-day data set to assess how this approach performs. In addition, we employ contingency tables to investigate the estimation quality of this approach, which provide complementary key information with respect to the bias, RMSE, σ, and R_xy. The quantitative results are encouraging and confirm the good performance of this approach. The main conclusion of this study is that this approach can help optimize observing time in astronomical applications and provides complementary key information for potential astronomical sites.
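For reference, a short sketch of the four statistical operators named above, under their usual definitions (σ taken as the bias-corrected RMSE); array names are illustrative.

```python
# Sketch of the statistical operators used to compare estimated and measured
# C_n^2: bias, RMSE, bias-corrected RMSE (sigma), and correlation (R_xy).
import numpy as np

def scores(estimated, measured):
    e, m = np.asarray(estimated), np.asarray(measured)
    bias = np.mean(e - m)
    rmse = np.sqrt(np.mean((e - m) ** 2))
    sigma = np.sqrt(max(rmse ** 2 - bias ** 2, 0.0))   # bias-corrected RMSE
    rxy = np.corrcoef(e, m)[0, 1]
    return bias, rmse, sigma, rxy
```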
NASA Astrophysics Data System (ADS)
Timmermans, J.; Gomez-Dans, J. L.; Verhoef, W.; Tol, C. V. D.; Lewis, P.
2017-12-01
Evapotranspiration (ET) cannot be directly measured from space. Instead, its estimation relies on modelling approaches that use several land surface parameters (LSPs), such as LAI and LST, in conjunction with meteorological parameters. Such a modelling approach presents two caveats: the validity of the model, and the consistency between the different input parameters. Often this second step is not considered, ignoring the fact that without good inputs no decent output can be provided. When LSP dynamics contradict each other, the output of the model cannot be representative of reality. At present, however, the LSPs used in large-scale ET estimations originate from different single-sensor retrieval approaches and even from different satellite sensors. In response, the Earth Observation Land Data Assimilation System (EOLDAS) was developed. EOLDAS uses a multi-sensor approach to couple different satellite observations/types to radiative transfer models (RTMs) consistently. It is therefore capable of synergistically estimating a variety of LSPs. Considering that ET is most sensitive to the temperatures of the land surface (components), the goal of this research is to expand EOLDAS to the thermal domain. This research not only focuses on estimating LST, but also on retrieving (soil/vegetation, sunlit/shaded) component temperatures, to facilitate dual/quad-source ET models. To achieve this, the Soil Canopy Observations of Photosynthesis and Energy (SCOPE) model was integrated into EOLDAS. SCOPE couples key parameters to key processes, such as photosynthesis, ET and optical/thermal RT. In this research SCOPE was also coupled to the MODTRAN RTM, in order to estimate BOA component temperatures directly from TOA observations. This paper presents the main modelling steps of integrating these complex models into an operational platform. In addition, it highlights the actual retrieval using different satellite observations, such as MODIS and Sentinel-3, and meteorological variables from ERA-Interim.
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along-track drag and the J2, J3, J4, and J5 geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time, which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately, since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
NASA Astrophysics Data System (ADS)
Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel
2016-04-01
Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling soil water retention curves (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period at pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when the predicted tension time series fell within the 95% CI derived from the calibration site using the DREAM scheme.
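A brief sketch of the van Genuchten retention function underlying the VG parameters discussed above; the parameter values below are illustrative, not those estimated in the study.

```python
# Sketch of the van Genuchten retention function:
# theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)^n)^m, m = 1 - 1/n,
# with h the suction head.
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)      # effective saturation
    return theta_r + (theta_s - theta_r) * se

h = np.logspace(0, 4, 50)                              # suction heads (cm)
theta = van_genuchten(h, theta_r=0.10, theta_s=0.45, alpha=0.02, n=1.6)
```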
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
A Stokes drift approximation based on the Phillips spectrum
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.
2016-04-01
A new approximation to the Stokes drift velocity profile based on the exact solution for the Phillips spectrum is explored. The profile is compared with the monochromatic profile and the recently proposed exponential integral profile. ERA-Interim spectra and spectra from a wave buoy in the central North Sea are used to investigate the behavior of the profile. It is found that the new profile has a much stronger gradient near the surface and lower normalized deviation from the profile computed from the spectra. Based on estimates from two open-ocean locations, an average value has been estimated for a key parameter of the profile. Given this parameter, the profile can be computed from the same two parameters as the monochromatic profile, namely the transport and the surface Stokes drift velocity.
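A hedged sketch comparing the two profiles. The monochromatic profile follows directly from the transport and the surface Stokes drift; the Phillips-based expression written here, and its link between the inverse depth scale and the transport, are assumptions about the paper's formulas, with beta standing in for the key parameter estimated from the open-ocean spectra.

```python
# Hedged sketch of Stokes drift profiles. Monochromatic: u(z) = u0*exp(2*k*z),
# with k = u0/(2*T) from the transport T. The Phillips-based form below,
# u0*[exp(2*k*z) - beta*sqrt(-2*pi*k*z)*erfc(sqrt(-2*k*z))], is an assumed
# expression, not quoted from the paper.
import numpy as np
from scipy.special import erfc

def stokes_monochromatic(z, u0, transport):
    k = u0 / (2.0 * transport)
    return u0 * np.exp(2.0 * k * z)

def stokes_phillips(z, u0, transport, beta=1.0):
    k = u0 / (2.0 * transport)          # inverse depth scale (assumed analogous)
    zz = -2.0 * k * z
    return u0 * (np.exp(-zz) - beta * np.sqrt(np.pi * zz) * erfc(np.sqrt(zz)))

z = np.linspace(-30.0, 0.0, 61)          # depth (m), negative downward
u_mono = stokes_monochromatic(z, u0=0.1, transport=0.5)
u_phil = stokes_phillips(z, u0=0.1, transport=0.5, beta=1.0)
```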
Removing flicker based on sparse color correspondences in old film restoration
NASA Astrophysics Data System (ADS)
Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran
2018-04-01
Archived film is an indispensable part of the long history of human civilization, and using digital methods to repair damaged film is now a mainstream approach. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our model uses multiple frames to establish a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that the method can remove fading flicker efficiently.
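A minimal sketch of low-rank factorization with missing entries, in the spirit of the second step above; the masked alternating-least-squares scheme, rank and regularization are illustrative choices rather than the authors' algorithm.

```python
# Hedged sketch: approximate an observation matrix M (e.g., frames x sparse
# color correspondences) by U @ V.T using masked alternating least squares.
import numpy as np

def masked_als(M, mask, rank=2, lam=1e-3, iters=50):
    """M: data matrix with arbitrary values where mask is False; mask: boolean
    matrix marking observed entries."""
    n, m = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    I = np.eye(rank)
    for _ in range(iters):
        for i in range(n):                                  # update row factors
            idx = mask[i]
            A = V[idx].T @ V[idx] + lam * I
            U[i] = np.linalg.solve(A, V[idx].T @ M[i, idx])
        for j in range(m):                                  # update column factors
            idx = mask[:, j]
            A = U[idx].T @ U[idx] + lam * I
            V[j] = np.linalg.solve(A, U[idx].T @ M[idx, j])
    return U, V
```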
Phase History Decomposition for efficient Scatterer Classification in SAR Imagery
2011-09-15
Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.
2011-01-01
Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
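A hedged sketch of BIC-based model selection on inter-photon times, using a single-rate model and a simplified two-component mixture as stand-ins for the full Markov modulated Poisson process treatment; the toy data and starting values are illustrative.

```python
# Hedged sketch: compare a one-state (single-rate) model against a simplified
# two-intensity mixture model for photon inter-arrival times via
# BIC = k*ln(n) - 2*ln(L).
import numpy as np
from scipy.optimize import minimize

def nll_one_state(theta, dt):
    rate = np.exp(theta[0])
    return -np.sum(np.log(rate) - rate * dt)

def nll_two_state_mixture(theta, dt):
    r1, r2 = np.exp(theta[0]), np.exp(theta[1])
    w = 1.0 / (1.0 + np.exp(-theta[2]))                 # mixture weight
    lik = w * r1 * np.exp(-r1 * dt) + (1 - w) * r2 * np.exp(-r2 * dt)
    return -np.sum(np.log(lik + 1e-300))

def bic(nll, k, n):
    return k * np.log(n) + 2.0 * nll

dt = np.diff(np.sort(np.random.default_rng(1).uniform(0, 1, 2000)))  # toy data
f1 = minimize(nll_one_state, [0.0], args=(dt,), method='Nelder-Mead')
f2 = minimize(nll_two_state_mixture, [0.0, 2.0, 0.0], args=(dt,), method='Nelder-Mead')
print(bic(f1.fun, 1, dt.size), bic(f2.fun, 3, dt.size))  # smaller BIC preferred
```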
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY
Jeong, Hyeok; Townsend, Robert
2010-01-01
This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.
2016-12-01
Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an increase of about 21% in the atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities, including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify global and regional N2O emissions from agricultural soils over the last several decades using a wide variety of approaches, such as ground-based observation, atmospheric inversion, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect the calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils and identifies key gaps and limitations in the existing model and data that need to be investigated in the future.
Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over the world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy. © 2005 Elsevier Ltd. All rights reserved.
Nuclear thermal propulsion engine system design analysis code development
NASA Astrophysics Data System (ADS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.
1992-01-01
A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and the inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output, and was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification, including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results, and model adequacy is inferred from corroborating results. An extension is offered to this conventional approach in which more general model parameter estimates and their standard errors are compared.
NASA Astrophysics Data System (ADS)
Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.
Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at the Bure underground research laboratory in Meuse/Haute-Marne (France) to characterize diffusion and retention of radionuclides in the Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and the borehole wall, and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having similar parameters and geometric characteristics as real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer dependent and vary with time. These sensitivities are useful to identify the parameters that can be estimated with the least uncertainty and to find the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and to ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of synthetic experiments reveals that data noise makes the estimation of clay parameters difficult. Parameters of the clay and the EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for the filter, the gap and the EdZ. Estimates of the effective diffusion coefficient and the porosity of the clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and the porosities of the clay and the EdZ is only possible when the standard deviation of the random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. The normalized sensitivities as well as the identifiability analysis of synthetic experiments provide additional insight into inverse estimation of in situ diffusion experiments and will be of great benefit for the interpretation of real DIR in situ diffusion experiments.
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, and the latter is the more popular. However, limitations such as an imprecise solution process and imprecise estimation of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the existing solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated by updating and iterating the estimated values; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
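A minimal sketch of the Wiener-process step, assuming increments that are normal with mean mu*dt and variance sigma^2*dt; the estimators and the mean first-passage lifetime below are standard textbook forms, not the paper's full Bayesian procedure.

```python
# Hedged sketch: fit a Wiener degradation process X(t) with drift mu and
# diffusion sigma from measured increments, then give a rough lifetime estimate
# as the mean first-passage time to a failure threshold D (inverse-Gaussian mean D/mu).
import numpy as np

def fit_wiener(times, x):
    dt = np.diff(times)
    dx = np.diff(x)
    mu = dx.sum() / dt.sum()                          # MLE of drift
    sigma2 = np.mean((dx - mu * dt) ** 2 / dt)        # MLE of diffusion variance
    return mu, np.sqrt(sigma2)

def mean_lifetime(mu, threshold):
    return threshold / mu                             # mean first-passage time

times = np.linspace(0.0, 100.0, 51)                   # toy measurement schedule
noise = np.random.default_rng(0).standard_normal(51)
x = 0.05 * times + 0.2 * np.cumsum(np.sqrt(np.diff(times, prepend=0.0)) * noise)
mu_hat, sigma_hat = fit_wiener(times, x)
print(mu_hat, sigma_hat, mean_lifetime(mu_hat, threshold=5.0))
```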
Using microwave observations to estimate land surface temperature during cloudy conditions
USDA-ARS's Scientific Manuscript database
Land surface temperature (LST), a key ingredient for physically-based retrieval algorithms of hydrological states and fluxes, remains a poorly constrained parameter for global scale studies. The two main observational methods to remotely measure LST are based on thermal infrared (TIR) observations and...
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
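A short sketch of maximum likelihood estimation for the Bradley-Terry special case mentioned above, using the standard minorization-maximization update; the win matrix is illustrative.

```python
# Hedged sketch: Bradley-Terry MLE via the standard MM update
# p_i <- W_i / sum_j n_ij / (p_i + p_j), followed by renormalization.
import numpy as np

def bradley_terry(wins, iters=200):
    """wins[i, j] = number of times item i beat item j."""
    n_items = wins.shape[0]
    n = wins + wins.T                                   # games played per pair
    W = wins.sum(axis=1)                                # total wins per item
    p = np.ones(n_items)
    for _ in range(iters):
        denom = np.array([np.sum(n[i] / (p[i] + p)) for i in range(n_items)])
        p = W / denom
        p /= p.sum()                                    # fix the arbitrary scale
    return p

wins = np.array([[0, 7, 9], [3, 0, 6], [1, 4, 0]])       # toy win counts
print(bradley_terry(wins))                               # estimated strengths
```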
NASA Astrophysics Data System (ADS)
Brannan, K. M.; Somor, A.
2016-12-01
A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody when the waterbody meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically-based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for the predictive uncertainty analysis: we set aside flow data that occurred on days when bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on the 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
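A hedged sketch of the covariance-aware Monte Carlo step described above; run_hspf is a stand-in for the actual HSPF/PEST workflow, and the calibrated values and covariance are illustrative.

```python
# Hedged sketch: draw correlated parameter sets from a multivariate normal
# built from calibrated values and an estimated parameter covariance, run a
# placeholder model, and summarize flow uncertainty with percentile bounds.
import numpy as np

def run_hspf(params):                       # placeholder for the watershed model
    lzsn, infilt, uzsn = params
    t = np.arange(365)
    return np.maximum(0.0, 10.0 + 2.0 * infilt * np.cos(2 * np.pi * t / 365)
                      - 0.005 * lzsn * t / 365 + uzsn)

best = np.array([6.0, 0.16, 1.1])           # calibrated parameter values (illustrative)
cov = np.diag([0.5, 0.002, 0.05])           # estimated parameter covariance (illustrative)
rng = np.random.default_rng(42)
draws = rng.multivariate_normal(best, cov, size=1000)
flows = np.array([run_hspf(p) for p in draws])             # 1000 simulated flow series
lower, upper = np.percentile(flows, [2.5, 97.5], axis=0)   # 95% predictive band
pct_uncertainty = 100.0 * (upper - lower) / np.median(flows, axis=0)
```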
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper demonstrates the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
Propagation regimes and populations of internal waves in the Mediterranean Sea basin
NASA Astrophysics Data System (ADS)
Kurkina, Oxana; Rouvinskaya, Ekaterina; Talipova, Tatiana; Soomere, Tarmo
2017-02-01
The geographical and seasonal distributions of kinematic and nonlinear parameters of long internal waves are derived from the Generalized Digital Environmental Model (GDEM) climatology for the Mediterranean Sea region, including the Black Sea. The parameters considered are the phase speed of long internal waves and the coefficients of the dispersive, quadratic and cubic terms of weakly-nonlinear Korteweg-de Vries-type models (in particular, the Gardner model). These parameters govern the possible polarities and shapes of solitary internal waves, their limiting amplitudes and their propagation speeds. The key outcome is an express estimate of the expected parameters of internal waves for different regions of the Mediterranean basin.
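For reference, the Gardner (extended Korteweg-de Vries) equation is assumed here in its standard form, with the coefficients named in the abstract.

```latex
% Standard form of the Gardner (extended Korteweg-de Vries) equation assumed
% here; c is the long-wave phase speed, alpha and alpha_1 the quadratic and
% cubic nonlinearity coefficients, and beta the dispersion coefficient.
\frac{\partial \eta}{\partial t}
  + \left( c + \alpha\,\eta + \alpha_1\,\eta^{2} \right)
    \frac{\partial \eta}{\partial x}
  + \beta\,\frac{\partial^{3} \eta}{\partial x^{3}} = 0
```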
Estimation of Pre-industrial Nitrous Oxide Emission from the Terrestrial Biosphere
NASA Astrophysics Data System (ADS)
Xu, R.; Tian, H.; Lu, C.; Zhang, B.; Pan, S.; Yang, J.
2015-12-01
Nitrous oxide (N2O) is currently the third most important greenhouse gas (GHG) after carbon dioxide (CO2) and methane (CH4). Global N2O emissions have increased substantially, primarily due to reactive nitrogen (N) enrichment through fossil fuel combustion, fertilizer production, and legume crop cultivation. In order to understand how the climate system is perturbed by anthropogenic N2O emissions from the terrestrial biosphere, it is necessary to better estimate pre-industrial N2O emissions. Previous estimates of natural N2O emissions from the terrestrial biosphere range from 3.3 to 9.0 Tg N2O-N yr-1. This large uncertainty in the estimation of pre-industrial N2O emissions from the terrestrial biosphere may be caused by uncertainty associated with key parameters such as maximum nitrification and denitrification rates, half-saturation coefficients of soil ammonium and nitrate, N fixation rate, and maximum N uptake rate. In addition, previous studies did not provide estimates of pre-industrial N2O emissions at regional and biome levels. In this study, we applied a process-based coupled biogeochemical model to estimate the magnitude and spatial patterns of pre-industrial N2O fluxes at biome and continental scales as driven by multiple input data, including pre-industrial climate data, atmospheric CO2 concentration, N deposition, N fixation, and land cover types and distributions. Uncertainty associated with key parameters is also evaluated. Finally, we generate sector-based estimates of pre-industrial N2O emissions, which provide a reference for assessing the climate forcing of anthropogenic N2O emissions from the land biosphere.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. The spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
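A short sketch of the log-normal measurement likelihood referred to above, written as a negative log-likelihood; the array names are illustrative.

```python
# Hedged sketch: negative log-likelihood for observations y assumed to be
# log-normally distributed around model predictions m with log-scale sigma.
import numpy as np

def neg_log_lik_lognormal(y, m, sigma):
    """y, m: arrays of observed and predicted intensities (positive);
    sigma: standard deviation of the log-intensities."""
    r = np.log(y) - np.log(m)
    return np.sum(np.log(y) + np.log(sigma) + 0.5 * np.log(2 * np.pi)
                  + r ** 2 / (2 * sigma ** 2))
```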
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
Two-Stage Modeling of Formaldehyde-Induced Tumor Incidence in the Rat—analysis of Uncertainties
This works extends the 2-stage cancer modeling of tumor incidence in formaldehyde-exposed rats carried out at the CIIT Centers for Health Research. We modify key assumptions, evaluate the effect of selected uncertainties, and develop confidence bounds on parameter estimates. Th...
Test Operations Procedure (TOP) 10-2-400 Open End Compressed Gas Driven Shock Tube
gas-driven shock tube. Procedures are provided for instrumentation, test item positioning, estimation of key test parameters, operation of the shock...tube, data collection, and reporting. The procedures in this document are based on the use of helium gas and Mylar film diaphragms.
The solid-phase diffusion coefficient (Dm) and material-air partition coefficient (Kma) are key parameters for characterizing the sources and transport of semivolatile organic compounds (SVOCs) in the indoor environment. In this work, a new experimental method was developed to es...
Since its amalgamation as a Federal Agency over 30 years ago, the U.S. Environmental Protection Agency (EPA) has undertaken many activities contributing to the international community's collective foundation for modern, multimedia environmental modeling. A key component of its c...
Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter
2015-06-01
Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit model (IPCC and Afvalzorg) required categories. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation, because it applied too high default values for key parameters to handle low-organic waste scenarios. The key parameters were biochemical CH₄ potential (BMP) and CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills, because it defined more proper waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence of not only the total disposed waste amount, but also various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation results at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model could provide practical and accurate estimation for Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use the first-order decay (FOD) models to estimate methane (CH₄) generation. A single-phase model (LandGEM) and a traditional model (IPCC) could result in overestimation when handling a low-organic waste scenario. Site-specific data were important and capable of calibrating key parameter values in FOD models. The comparison study of the revised Afvalzorg model outcomes and field measurements at four Danish landfills provided a guideline for revising the Pollutants Release and Transfer Registers (PRTR) model, as well as indicating noteworthy waste fractions that could emit CH₄ at modern landfills.
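A minimal sketch of a first-order decay methane generation model of the kind compared above; L0 and k are stand-ins for the BMP and the k-value, and the disposal history is illustrative, not from the study.

```python
# Hedged sketch of a first-order decay (FOD) CH4 generation model: each year's
# disposed mass M_i contributes k * L0 * M_i * exp(-k * (t - t_i)) to the
# generation rate at time t.
import numpy as np

def fod_ch4_rate(t, disposal_years, masses, L0=0.05, k=0.05):
    """CH4 generation rate at year t (same mass units as `masses` per year)."""
    rate = 0.0
    for t_i, M_i in zip(disposal_years, masses):
        if t >= t_i:
            rate += k * L0 * M_i * np.exp(-k * (t - t_i))
    return rate

years = np.arange(1990, 2011)                     # disposal history (years)
masses = np.full(years.size, 50_000.0)            # tonnes of waste per year
rates = [fod_ch4_rate(t, years, masses) for t in range(1990, 2101)]
cumulative = np.trapz(rates, dx=1.0)              # cumulative CH4 generation
```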
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
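A minimal sketch of the Monte-Carlo parameter-sampling idea such a tool implements: draw TSM parameter sets from prior ranges, score each against observed tracer data, and summarize the spread of the best-fitting sets. The forward model here is a deliberately trivial stand-in, not OTIS, and the parameter names and ranges are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(params, t):
    """Placeholder stand-in for an OTIS run: returns a simulated breakthrough
    curve for parameters (A, alpha). Not the real transport solver."""
    A, alpha = params
    return np.exp(-((t - 5.0 - 10.0 * alpha) ** 2) / (2 * A))

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

t = np.linspace(0, 20, 200)
obs = forward_model((2.0, 0.3), t) + rng.normal(0, 0.02, t.size)

# Monte-Carlo sampling over uniform prior ranges for the two parameters
samples = np.column_stack([rng.uniform(0.5, 5.0, 5000),   # storage-zone area A
                           rng.uniform(0.0, 1.0, 5000)])  # exchange coefficient alpha
scores = np.array([rmse(forward_model(p, t), obs) for p in samples])

# Keep the best-fitting 5% and report parameter spread as an uncertainty estimate
behavioral = samples[scores < np.quantile(scores, 0.05)]
print("A:     %.2f-%.2f" % (behavioral[:, 0].min(), behavioral[:, 0].max()))
print("alpha: %.2f-%.2f" % (behavioral[:, 1].min(), behavioral[:, 1].max()))
```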
NASA Astrophysics Data System (ADS)
Fuchs, Christian; Poulenard, Sylvain; Perlot, Nicolas; Riedi, Jerome; Perdigues, Josep
2017-02-01
Optical satellite communications play an increasingly important role in a number of space applications. However, if the system concept includes optical links to the surface of the Earth, the limited availability due to clouds and other atmospheric effects needs to be considered to give a reliable estimate of the system performance. An optical ground station (OGS) network is required to increase the availability to acceptable levels. In order to realistically estimate the performance and achievable throughput in various scenarios, a simulation tool has been developed under ESA contract. The tool is based on a database of 5 years of cloud data with global coverage and can thus easily simulate different optical ground station network topologies for LEO- and GEO-to-ground links. Further effects, such as limited availability due to Sun blinding and atmospheric turbulence, are considered as well. This paper gives an overview of the simulation tool, the cloud database, and the modelling behind the simulation scheme. Several scenarios have been investigated: LEO-to-ground links, GEO feeder links, and GEO relay links. The key results of the optical ground station network optimization and throughput estimations will be presented. The implications of key technical parameters, such as the memory size aboard the satellite, will be discussed. Finally, potential system designs for LEO and GEO systems will be presented.
Practical quantum key distribution protocol without monitoring signal disturbance.
Sasaki, Toshihiko; Yamamoto, Yoshihisa; Koashi, Masato
2014-05-22
Quantum cryptography exploits the fundamental laws of quantum mechanics to provide a secure way to exchange private information. Such an exchange requires a common random bit sequence, called a key, to be shared secretly between the sender and the receiver. The basic idea behind quantum key distribution (QKD) has widely been understood as the property that any attempt to distinguish encoded quantum states causes a disturbance in the signal. As a result, implementation of a QKD protocol involves an estimation of the experimental parameters influenced by the eavesdropper's intervention, which is achieved by randomly sampling the signal. If the estimation of many parameters with high precision is required, the portion of the signal that is sacrificed increases, thus decreasing the efficiency of the protocol. Here we propose a QKD protocol based on an entirely different principle. The sender encodes a bit sequence onto non-orthogonal quantum states and the receiver randomly dictates how a single bit should be calculated from the sequence. The eavesdropper, who is unable to learn the whole of the sequence, cannot guess the bit value correctly. An achievable rate of secure key distribution is calculated by considering complementary choices between quantum measurements of two conjugate observables. We found that a practical implementation using a laser pulse train achieves a key rate comparable to a decoy-state QKD protocol, an often-used technique for lasers. It also has a better tolerance of bit errors and of finite-sized-key effects. We anticipate that this finding will give new insight into how the probabilistic nature of quantum mechanics can be related to secure communication, and will facilitate the simple and efficient use of conventional lasers for QKD.
NASA Astrophysics Data System (ADS)
Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin
2016-12-01
The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture that can realize wideband reception with a small amount of equipment. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low-probability-of-intercept signal with wide bandwidth. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and the intercepted signal acquires the local oscillator modulation of the receiver. A parameter estimation algorithm for the NYFR output signal is proposed. Using the NYFR prior information, a chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the self-characteristics of the output, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, a matching code and a subspace method are employed to estimate the phase change points and the code length. Compared with existing methods, the proposed algorithm has better performance. It also does not need to construct a multi-channel structure, which means that the computational complexity of the NZ index estimation is small. The simulation results demonstrate the efficacy of the proposed algorithm.
Quantum-enhanced multiparameter estimation in multiarm interferometers
Ciampini, Mario A.; Spagnolo, Nicolò; Vitelli, Chiara; Pezzè, Luca; Smerzi, Augusto; Sciarrino, Fabio
2016-01-01
Quantum metrology is the state-of-the-art measurement technology. It uses quantum resources to enhance the sensitivity of phase estimation over that achievable by classical physics. While single-parameter estimation theory has been widely investigated, much less is known about the simultaneous estimation of multiple phases, which finds key applications in imaging and sensing. In this manuscript we provide conditions of useful particle (qudit) entanglement for multiphase estimation and adapt them to multiarm Mach-Zehnder interferometry. We theoretically discuss benchmark multimode Fock states containing useful qudit entanglement and overcoming the sensitivity of separable qudit states in three- and four-arm Mach-Zehnder-like interferometers, currently within the reach of integrated photonics technology. PMID:27381743
Analysis of Partitioned Methods for the Biot System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bukac, Martina; Layton, William; Moraiti, Marina
2015-02-18
In this work, we present a comprehensive study of several partitioned methods for the coupling of flow and mechanics. We derive energy estimates for each method for the fully discrete problem. We write the obtained stability conditions in terms of a key control parameter defined as the ratio of the coupling strength and the speed of propagation. Depending on the parameters in the problem, we give the choice of the partitioned method that allows the largest time step. (C) 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix, following the order of left to right and top to bottom, which is used as a control matrix to switch between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixed with the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed from the low 8 bits of multiple hyperchaotic system iteration outputs after rounding down, and its security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is that it has a large number of key parameters with an avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include slow algorithm convergence, the instability caused by pre-specifying a number of intrinsic parameters, and a lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters because the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter can then be calculated from equations with statistical-theoretical justification. We also develop an automatic cluster centroid selection method by maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method runs in a single step without iteration and thus is fast and has great potential for big data analysis. A user-friendly R package, ADPclust, has been developed for public use.
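A compact sketch of the density-peak idea with a nonparametric kernel density in place of the truncated counting measure, assuming two clusters and a simplified nearest-center assignment; it is not the ADPclust package itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])

# Local density via a nonparametric multivariate kernel estimate (default bandwidth)
rho = gaussian_kde(X.T)(X.T)

# delta_i: distance from point i to the nearest point of higher density
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
delta = np.empty(len(X))
for i in range(len(X)):
    higher = np.where(rho > rho[i])[0]
    delta[i] = d[i, higher].min() if higher.size else d[i].max()

# Cluster centers are the points with the largest rho * delta; here we assume
# two clusters and use a simplified nearest-center assignment rather than the
# full higher-density-neighbour assignment of the original algorithm.
centers = np.argsort(rho * delta)[-2:]
labels = np.argmin(d[:, centers], axis=1)
print("center indices:", centers, "cluster sizes:", np.bincount(labels))
```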
Automated inference procedure for the determination of cell growth parameters
NASA Astrophysics Data System (ADS)
Harris, Edouard A.; Koh, Eun Jee; Moffat, Jason; McMillen, David R.
2016-01-01
The growth rate and carrying capacity of a cell population are key to the characterization of the population's viability and to the quantification of its responses to perturbations such as drug treatments. Accurate estimation of these parameters necessitates careful analysis. Here, we present a rigorous mathematical approach for the robust analysis of cell count data, in which all the experimental stages of the cell counting process are investigated in detail with the machinery of Bayesian probability theory. We advance a flexible theoretical framework that permits accurate estimates of the growth parameters of cell populations and of the logical correlations between them. Moreover, our approach naturally produces an objective metric of avoidable experimental error, which may be tracked over time in a laboratory to detect instrumentation failures or lapses in protocol. We apply our method to the analysis of cell count data in the context of a logistic growth model by means of a user-friendly computer program that automates this analysis, and present some samples of its output. Finally, we note that a traditional least squares fit can provide misleading estimates of parameter values, because it ignores available information with regard to the way in which the data have actually been collected.
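For contrast with the Bayesian treatment advocated above, a minimal least-squares logistic fit (the baseline the authors caution can mislead) might look like the following; the cell counts, noise model, and initial guesses are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, n0):
    """Logistic growth: n(t) = K / (1 + (K/n0 - 1) * exp(-r t))."""
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

t = np.linspace(0, 72, 13)                        # hours
rng = np.random.default_rng(2)
counts = logistic(t, K=1e6, r=0.15, n0=5e3) * rng.lognormal(0, 0.1, t.size)

popt, pcov = curve_fit(logistic, t, counts, p0=[1e6, 0.1, 1e4], maxfev=10000)
perr = np.sqrt(np.diag(pcov))
print("K  = %.3g +/- %.2g" % (popt[0], perr[0]))   # carrying capacity
print("r  = %.3f +/- %.3f" % (popt[1], perr[1]))   # growth rate
```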
Revised age estimates of the Euphrosyne family
NASA Astrophysics Data System (ADS)
Carruba, Valerio; Masiero, Joseph R.; Cibulková, Helena; Aljbaae, Safwan; Espinoza Huaman, Mariela
2015-08-01
The Euphrosyne family, a high-inclination asteroid family in the outer main belt, is considered one of the most peculiar groups of asteroids. It is characterized by the steepest size frequency distribution (SFD) among families in the main belt, and it is the only family crossed near its center by the ν6 secular resonance. Previous studies have shown that the steep size frequency distribution may be the result of the dynamical evolution of the family. In this work we further explore the unique dynamical configuration of the Euphrosyne family by refining the previous age values, considering the effects of changes in the shapes of the asteroids during the YORP cycle ("stochastic YORP"), the long-term effect of close encounters of family members with (31) Euphrosyne itself, and the effect that changing key parameters of the Yarkovsky force (such as density and thermal conductivity) has on the estimate of the family age obtained using Monte Carlo methods. Numerical simulations accounting for the interaction with the local web of secular and mean-motion resonances allow us to refine previous estimates of the family age. The cratering event that formed the Euphrosyne family most likely occurred between 560 and 1160 Myr ago, and no earlier than 1400 Myr ago when we allow for larger uncertainties in the key parameters of the Yarkovsky force.
Generating multi-scale albedo look-up maps using MODIS BRDF/Albedo products and landsat imagery
USDA-ARS?s Scientific Manuscript database
Surface albedo determines radiative forcing and is a key parameter for driving Earth’s climate. Better characterization of surface albedo for individual land cover types can reduce the uncertainty in estimating changes to Earth’s radiation balance due to land cover change. This paper presents a mult...
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
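A toy Metropolis-Hastings sampler for a single transmission rate under an assumed Poisson likelihood gives the flavor of the MCMC estimation; the actual analysis used spatial kernels and species-specific susceptibilities not shown here, and all data below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: daily case counts assumed Poisson with mean beta * exposure
exposure = rng.uniform(5, 50, 60)
cases = rng.poisson(0.8 * exposure)          # "observed" counts, true beta = 0.8

def log_lik(beta):
    if beta <= 0:
        return -np.inf
    lam = beta * exposure
    return np.sum(cases * np.log(lam) - lam)  # constant term omitted

beta, chain = 1.0, []
for _ in range(20000):
    prop = beta + rng.normal(0, 0.05)         # random-walk proposal
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(beta):
        beta = prop
    chain.append(beta)

post = np.array(chain[5000:])                 # discard burn-in
print("posterior mean %.3f, 95%% CI (%.3f, %.3f)"
      % (post.mean(), *np.percentile(post, [2.5, 97.5])))
```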
Percha, Bethany; Newman, M. E. J.; Foxman, Betsy
2012-01-01
Group B Streptococcus (GBS) remains a major cause of neonatal sepsis and is an emerging cause of invasive bacterial infections. The 9 known serotypes vary in virulence, and there is little cross-immunity. Key parameters for planning an effective vaccination strategy, such as average length of immunity and transmission probabilities by serotype, are unknown. We simulated GBS spread in a population using a computational model with parameters derived from studies of GBS sexual transmission in a college dormitory. Here we provide estimates of the duration of immunity relative to the transmission probabilities for the 3 GBS serotypes most associated with invasive disease: Ia, III, and V. We also place upper limits on the durations of immunity for serotype Ia (570 days), III (1125 days) and V (260 days). Better transmission estimates are required to establish the epidemiological parameters of GBS infection and determine the best vaccination strategies to prevent GBS disease. PMID:21605704
Models based on value and probability in health improve shared decision making.
Ortendahl, Monica
2008-10-01
Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated quantities in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as is usual in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially at every step of the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions carried out by the doctor and the patient. A model revised for the values and probabilities perceived by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
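A tiny numeric illustration of the subjective-expected-utility calculation the model rests on, EU = sum of probability times value over outcomes; the options, probabilities, and values are invented.

```python
# Subjective expected utility: EU(option) = sum_i p_i * v_i, with p and v both
# estimated (by doctor and/or patient). Numbers below are purely illustrative.
options = {
    "treat":      [(0.70, 0.9), (0.30, 0.2)],   # (probability, value) pairs
    "watch_wait": [(0.40, 1.0), (0.60, 0.5)],
}

def expected_utility(outcomes):
    return sum(p * v for p, v in outcomes)

for name, outcomes in options.items():
    print(name, round(expected_utility(outcomes), 3))
```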
Finite-key analysis for measurement-device-independent quantum key distribution.
Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong
2014-04-29
Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
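As a generic illustration of finite-sample parameter estimation with a concentration bound, the snippet below upper-bounds a true error rate from observed counts using the Hoeffding form of the tail bound (the proof itself applies the Chernoff bound, which is tighter); the failure probability epsilon is an assumed value.

```python
import numpy as np

def error_rate_upper_bound(k, n, eps=1e-10):
    """Upper-bound the true error probability from k errors in n samples,
    valid except with probability eps (Hoeffding form of the tail bound)."""
    return k / n + np.sqrt(np.log(1.0 / eps) / (2.0 * n))

for n in (10**4, 10**6, 10**8):
    k = int(0.02 * n)                     # 2% observed error rate
    print(n, "->", round(error_rate_upper_bound(k, n), 5))
```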
Numerical approach for unstructured quantum key distribution
Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert
2016-01-01
Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study 'unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimate of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data for the ABM simulator during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model on the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input parameter space by introducing an implausibility measure to discard implausible input values. Finally, the estimates of the model parameters are obtained using the particle swarm optimization (PSO) algorithm by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the proposed method not only has good fitting and predictive accuracy but also has favorable computational efficiency. PMID:29194393
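A minimal sketch of the implausibility-screening step in history matching: inputs whose standardized distance between the emulator prediction and the observation exceeds a cutoff (conventionally 3) are discarded. The emulator below is a placeholder function, not a fitted GAM, and all numbers are invented.

```python
import numpy as np

z, var_obs = 4.2, 0.1 ** 2           # "experimental" observation and its variance

def emulator(x):
    """Placeholder emulator standing in for the fitted GAM:
    returns (mean, variance) of the predicted ABM output at input x."""
    return 3.0 + 0.8 * x, 0.05 + 0.01 * x ** 2

def implausibility(x):
    m, v = emulator(x)
    return abs(z - m) / np.sqrt(v + var_obs)

candidates = np.linspace(0, 4, 41)
keep = candidates[np.array([implausibility(x) for x in candidates]) < 3.0]
print("non-implausible range: %.2f to %.2f" % (keep.min(), keep.max()))
```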
Analysis options for estimating status and trends in long-term monitoring
Bart, Jonathan; Beyer, Hawthorne L.
2012-01-01
This chapter describes methods for estimating long-term trends in ecological parameters. Other chapters in this volume discuss more advanced methods for analyzing monitoring data, but those methods may be relatively inaccessible to some readers. Therefore, this chapter provides an introduction to trend analysis for managers and biologists while also discussing general issues relevant to trend assessment in any long-term monitoring program. For simplicity, we focus on temporal trends in population size across years. We refer to the survey results for each year as the "annual means" (e.g., the mean per transect, per plot, or per time period). The methods apply with little or no modification, however, to formal estimates of population size, to other temporal units (e.g., a month), to spatial or other dimensions such as elevation or a north–south gradient, and to other quantities such as chemical or geological parameters. The chapter primarily discusses methods for estimating population-wide parameters rather than studying variation in trend within the population, which can be examined using methods presented in other chapters (e.g. Chapters 7, 12, 20). We begin by reviewing key concepts related to trend analysis. We then describe how to evaluate potential bias in trend estimates. An overview of the statistical models used to quantify trends is then presented. We conclude by showing ways to estimate trends using simple methods that can be implemented with spreadsheets.
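For the simple spreadsheet-level case described here, a linear trend in annual means with an approximate confidence interval can be computed as follows; the annual means are simulated and the normal-approximation interval is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
years = np.arange(2005, 2021)
annual_means = 120 - 1.8 * (years - years[0]) + rng.normal(0, 6, years.size)

res = stats.linregress(years, annual_means)
ci = 1.96 * res.stderr                       # approximate 95% normal interval
print("trend = %.2f per year (95%% CI %.2f to %.2f), p = %.3g"
      % (res.slope, res.slope - ci, res.slope + ci, res.pvalue))
```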
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan; Liao, Zhongfa; Xu, Feihu; Qi, Bing; Qian, Li; Lo, Hoi-Kwong
2014-05-01
We demonstrate the first implementation of polarization encoding measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks. Active phase randomization of each individual pulse is implemented to protect against attacks on imperfect sources. By optimizing the parameters in the decoy state protocol, we show that it is feasible to implement polarization encoding MDI-QKD with commercial off-the-shelf devices. A rigorous finite key analysis is applied to estimate the secure key rate. Our work paves the way for the realization of a MDI-QKD network, in which the users only need compact and low-cost state-preparation devices and can share complicated and expensive detectors provided by an untrusted network server.
[Exploring novel hyperspectral band and key index for leaf nitrogen accumulation in wheat].
Yao, Xia; Zhu, Yan; Feng, Wei; Tian, Yong-Chao; Cao, Wei-Xing
2009-08-01
The objectives of the present study were to explore new sensitive spectral bands and ratio spectral indices based on precise analysis of ground-based hyperspectral information, and then to develop a regression model for estimating leaf N accumulation per unit soil area (LNA) in winter wheat (Triticum aestivum L.). Three field experiments were conducted with different N rates and cultivar types in three consecutive growing seasons, and time-course measurements were taken of canopy hyperspectral reflectance and LNA under the various treatments. By adopting the method of reduced precise sampling, detailed ratio spectral indices (RSI) within the range of 350-2500 nm were constructed, and the quantitative relationships between LNA (g N m^-2) and RSI (i, j) were analyzed. It was found that several key spectral bands and spectral indices were suitable for estimating LNA in wheat, and the spectral parameter RSI (990, 720) was the most reliable indicator of LNA in wheat. The regression model based on the best RSI was formulated as y = 5.095x - 6.040, with an R2 of 0.814. When the derived equations were tested with independent experimental data, the model based on RSI (990, 720) had an R2 of 0.847 and an RRMSE of 24.7%. Thus, it is concluded that the hyperspectral parameter RSI (990, 720) and the derived regression model can be reliably used for estimating LNA in winter wheat. These results provide the key bands and a technical basis for developing a portable instrument for monitoring wheat nitrogen status and for extracting useful spectral information from remote sensing images.
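A small numeric sketch of applying the reported regression, LNA = 5.095 · RSI(990, 720) − 6.040; the reflectance values are invented.

```python
def rsi(reflectance, b1=990, b2=720):
    """Ratio spectral index from a {wavelength_nm: reflectance} mapping."""
    return reflectance[b1] / reflectance[b2]

def leaf_n_accumulation(reflectance):
    """LNA (g N m^-2) from the regression reported in the abstract."""
    return 5.095 * rsi(reflectance) - 6.040

canopy = {990: 0.46, 720: 0.26}          # illustrative reflectances
print(round(leaf_n_accumulation(canopy), 2), "g N m^-2")
```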
Choi, Chang-Yong; Lee, Ki-Sup; Poyarkov, Nikolay D.; Park, Jin-Young; Lee, Hansoo; Takekawa, John Y.; Smith, Lacy M.; Ely, Craig R.; Wang, Xin; Cao, Lei; Fox, Anthony D.; Goroshko, Oleg; Batbayar, Nyambayar; Prosser, Diann J.; Xiao, Xiangming
2016-01-01
Waterbird survival rates are a key component of demographic modeling used for effective conservation of long-lived threatened species. The Swan Goose (Anser cygnoides) is globally threatened and the most vulnerable goose species endemic to East Asia due to its small and rapidly declining population. To address a current knowledge gap in demographic parameters of the Swan Goose, available datasets were compiled from neck-collar resighting and telemetry studies, and two different models were used to estimate their survival rates. Results of a mark-resighting model using 15 years of neck-collar data (2001–2015) provided age-dependent survival rates and season-dependent encounter rates with a constant neck-collar retention rate. Annual survival rate was 0.638 (95% CI: 0.378–0.803) for adults and 0.122 (95% CI: 0.028–0.286) for first-year juveniles. Known-fate models were applied to the single season of telemetry data (autumn 2014) and estimated a mean annual survival rate of 0.408 (95% CI: 0.152–0.670) with higher but non-significant differences for adults (0.477) vs. juveniles (0.306). Our findings indicate that Swan Goose survival rates are comparable to the lowest rates reported for European or North American goose species. Poor survival may be a key demographic parameter contributing to their declining trend. Quantitative threat assessments and associated conservation measures, such as restricting hunting, may be a key step to mitigate for their low survival rates and maintain or enhance their population.
Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.
Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto
2010-05-01
A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar-separable LG-CH functions span the same space as the 2-D Cartesian-separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two sets of expansion coefficients, maximum likelihood (ML) estimates of the orientation and cross-section parameters of 1-D patterns are obtained by projecting them onto a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross-section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross-section parameters. The accuracy of the conditional ML estimator is compared with that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and the 2-D HG expansions is also provided.
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure, for example in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technique for both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of fundamental matrix estimation. Experiments on simulated data show that, in contrast to traditional algorithms, the method improves the robustness and accuracy of fundamental matrix estimation. Finally, an experiment computing the positional relationship of a pair of stereo cameras demonstrates the accuracy of the algorithm.
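A hedged OpenCV sketch of the three steps (fundamental matrix from correspondences, essential matrix from the known intrinsics, decomposition to rotation and translation). Plain RANSAC stands in for the paper's regional weighted normalization, and the intrinsics and scene geometry are synthetic placeholders.

```python
import numpy as np
import cv2

# Known intrinsics (placeholder values) and a synthetic 3-D point cloud, used
# only so the three steps below can run end to end.
K = np.array([[1200.0, 0, 640], [0, 1200.0, 512], [0, 0, 1]])
rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(-1, 1, 80), rng.uniform(-1, 1, 80), rng.uniform(4, 8, 80)])
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))
t_true = np.array([0.2, 0.0, 0.02])

def project(P, K):
    x = P @ K.T
    return (x[:, :2] / x[:, 2:]).astype(np.float32)

pts1 = project(X, K)                         # left camera at the origin
pts2 = project(X @ R_true.T + t_true, K)     # right camera at (R_true, t_true)

# (i) fundamental matrix from correspondences (plain RANSAC here, not the
#     paper's regional weighted normalization)
F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

# (ii) essential matrix from the fundamental matrix and the intrinsics
E = K.T @ F @ K

# (iii) external parameters (R, t up to scale) by decomposing the essential matrix
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("recovered R:\n", R.round(3), "\nrecovered t (unit norm):", t.ravel().round(3))
```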
An international database of radionuclide concentration ratios for wildlife: development and uses.
Copplestone, D; Beresford, N A; Brown, J E; Yankovich, T
2013-12-01
A key element of most systems for assessing the impact of radionuclides on the environment is a means to estimate the transfer of radionuclides to organisms. To facilitate this, an international wildlife transfer database has been developed to provide an online, searchable compilation of transfer parameters in the form of equilibrium-based whole-organism to media concentration ratios. This paper describes the derivation of the wildlife transfer database, the key data sources it contains and highlights the applications for the data. Copyright © 2013 Elsevier Ltd. All rights reserved.
Digital Model of Fourier and Fresnel Quantized Holograms
NASA Astrophysics Data System (ADS)
Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.
Some model schemes of quantized Fourier and Fresnel protective holograms with visual effects are suggested. The conditions for an optimum relationship between the quality of the reconstructed images, the data-reduction coefficient of the hologram, and the number of iterations in the hologram reconstruction process have been estimated through computer modeling. A higher protection level is achieved by means of a larger number of both two-dimensional secret keys (more than 2^128), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.
A modal parameter extraction procedure applicable to linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Kurdila, A. J.; Craig, R. R., Jr.
1985-01-01
Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.
Are camera surveys useful for assessing recruitment in white-tailed deer?
M. Colter Chitwood; Marcus A. Lashley; John C. Kilgo; Michael J. Cherry; L. Mike Conner; Mark Vukovich; H. Scott Ray; Charles Ruth; Robert J. Warren; Christopher S. DePerno; Christopher E. Moorman
2017-01-01
Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter...
Leaf area and its spatial distribution are key parameters in describing canopy characteristics. They determine radiation regimes and influence mass and energy exchange with the atmosphere. The evaluation of leaf area in conifer stands is particularly challengi...
USDA-ARS?s Scientific Manuscript database
In California and other regions vulnerable to water shortages, satellite-derived estimates of key hydrologic parameters can support agricultural producers and water managers in maximizing the benefits of available water supplies. The Satellite Irrigation Management Support (SIMS) project combines N...
Accounting for ethnicity in recreation demand: a flexible count data approach
J. Michael Bowker; V.R. Leeworthy
1998-01-01
The authors examine ethnicity and individual trip-taking behavior associated with natural resource based recreation in the Florida Keys. Bowker and Leeworthy estimate trip demand using the travel cost method. They then extend this model with a varying parameter adaptation to test the congruency of demand and economic value across white and Hispanic user subgroups...
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm, and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is expected to provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data. PMID:23593445
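As a small illustration of the AIC-based model-selection step, the least-squares form AIC = n·ln(RSS/n) + 2k can be compared across candidate models; the candidate models, data, and noise level below are invented, and the hybrid optimizer itself is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 40)
y = 2.0 * np.exp(-0.4 * t) + rng.normal(0, 0.05, t.size)      # noisy "experimental" data

candidates = {
    "constant":    (lambda t, a: a + 0.0 * t,            [1.0]),
    "linear":      (lambda t, a, b: a + b * t,            [1.0, 0.0]),
    "exponential": (lambda t, a, k: a * np.exp(-k * t),   [1.0, 0.5]),
}

for name, (f, p0) in candidates.items():
    p, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
    rss = np.sum((y - f(t, *p)) ** 2)
    aic = t.size * np.log(rss / t.size) + 2 * len(p)           # least-squares form of AIC
    print(f"{name:12s} k={len(p)}  AIC={aic:8.1f}")
```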
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires an accurate assessment of the biodegradation kinetics, the level of available electron acceptors, and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for forecasting plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena, and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation processes, provided all controlling factors that are not necessarily observed at the lab scale are incorporated in the field-scale modelling. In this way, no separate scale relationship linking the laboratory and the field scale needs to be found: accurately incorporating at the larger scale the additional processes, phenomena, and characteristics, such as (a) advective and dispersive transport of one or more contaminants, (b) advective and dispersive transport and availability of electron acceptors, (c) mass transfer limitations, and (d) spatial heterogeneities, and applying well-defined lab-scale parameters should accurately describe field-scale processes.
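A zero-dimensional sketch of dual-Monod degradation limited by an electron acceptor (oxygen), the batch analogue of what a reactive-transport code such as BIO3D resolves in space; all rate constants, yields, and initial concentrations are illustrative, not the Borden calibration values.

```python
from scipy.integrate import solve_ivp

# Illustrative dual-Monod parameters (not site-calibrated values)
mu_max, Ks, Ko, Y, f_o2 = 0.5, 1.0, 0.2, 0.4, 3.0   # 1/d, mg/L, mg/L, -, g O2 / g substrate

def rhs(t, y):
    S, O2, X = max(y[0], 0.0), max(y[1], 0.0), y[2]   # substrate, oxygen, biomass
    mu = mu_max * S / (Ks + S) * O2 / (Ko + O2)       # dual-Monod growth rate
    return [-mu * X / Y,                              # substrate consumption
            -f_o2 * mu * X / Y,                       # oxygen consumption
            mu * X]                                   # microbial growth

sol = solve_ivp(rhs, (0, 30), [20.0, 8.0, 0.1])
S_end, O2_end, X_end = sol.y[:, -1]
print("after 30 d: substrate %.2f, oxygen %.2f, biomass %.2f mg/L"
      % (S_end, O2_end, X_end))   # degradation stalls once oxygen is exhausted
```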
Trajectory Dispersed Vehicle Process for Space Launch System
NASA Technical Reports Server (NTRS)
Statham, Tamara; Thompson, Seth
2017-01-01
The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight ratio, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target the uncertainty bias. These vehicles represent various missions and configurations, which are used as key inputs to a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
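A rough sketch of the "ecological distance" ingredient: a least-cost path over a resistance raster computed with Dijkstra's algorithm, which would replace Euclidean distance in the encounter model. The resistance surface and cost parameter are invented, and the SCR likelihood itself is not shown.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(7)
nrow, ncol = 30, 30
alpha2 = 1.5                                   # hypothetical resistance (cost) parameter
resistance = np.exp(alpha2 * rng.random((nrow, ncol)))

# 4-neighbour graph; each edge costs the mean resistance of the two cells it joins
n = nrow * ncol
G = lil_matrix((n, n))
def idx(r, c):
    return r * ncol + c
for r in range(nrow):
    for c in range(ncol):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < nrow and cc < ncol:
                w = 0.5 * (resistance[r, c] + resistance[rr, cc])
                G[idx(r, c), idx(rr, cc)] = w
                G[idx(rr, cc), idx(r, c)] = w

trap = idx(2, 2)                               # a trap location on the grid
ecological = dijkstra(G.tocsr(), indices=trap) # least-cost path cost to every cell

rows, cols = np.divmod(np.arange(n), ncol)
euclidean = np.hypot(rows - 2, cols - 2)
cell = idx(25, 25)
print("cell (25,25): ecological distance %.1f vs Euclidean %.1f"
      % (ecological[cell], euclidean[cell]))
```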
Phenological Parameters Estimation Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.
2010-01-01
The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the Time Series Product Tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use Moderate Resolution Imaging Spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites are processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE or an equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.
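PPET itself is a MATLAB package; as a language-agnostic illustration of the kind of per-pixel retrieval it performs, the sketch below smooths an NDVI time series and extracts start, peak, and end of season with a simple amplitude threshold. The synthetic series, moving-average smoother, and 20% threshold are assumptions, not the tool's actual algorithms.

```python
import numpy as np

doy = np.arange(1, 366, 8)                                   # 8-day composite dates
rng = np.random.default_rng(8)
ndvi = 0.25 + 0.45 * np.exp(-((doy - 200) / 55.0) ** 2) + rng.normal(0, 0.02, doy.size)

smooth = np.convolve(ndvi, np.ones(5) / 5, mode="same")      # simple moving-average smoother
base, peak = smooth.min(), smooth.max()
thresh = base + 0.2 * (peak - base)                          # 20% amplitude threshold

above = smooth >= thresh
sos = doy[np.argmax(above)]                                  # start of season
eos = doy[len(above) - 1 - np.argmax(above[::-1])]           # end of season
pos = doy[np.argmax(smooth)]                                 # peak of season
print(f"SOS={sos}, peak={pos}, EOS={eos}, amplitude={peak - base:.2f}")
```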
ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.
Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects include calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into five windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components), and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake, and depth, and the seismic moment (or equivalently the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
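A stripped-down illustration of the time-shift step at the heart of CAP: slide a synthetic window against the data to find the lag and amplitude scale that minimize the misfit. The grid search over strike, dip, rake, and depth and the Green's-function library are not shown, and the waveforms below are synthetic.

```python
import numpy as np

def best_shift_and_scale(data, synth, max_shift):
    """Return (lag, scale, misfit): the shift in samples within +/- max_shift
    that best aligns synth with data, plus a least-squares amplitude scale."""
    best = (0, 1.0, np.inf)
    for lag in range(-max_shift, max_shift + 1):
        s = np.roll(synth, lag)
        scale = np.dot(data, s) / np.dot(s, s)
        misfit = np.linalg.norm(data - scale * s)
        if misfit < best[2]:
            best = (lag, scale, misfit)
    return best

t = np.linspace(0, 60, 600)
synth = np.exp(-((t - 20) / 3.0) ** 2) * np.sin(2 * np.pi * 0.3 * t)
data = 2.5 * np.roll(synth, 17) + np.random.default_rng(9).normal(0, 0.05, t.size)

lag, scale, misfit = best_shift_and_scale(data, synth, max_shift=50)
print(f"lag = {lag} samples, amplitude scale = {scale:.2f}, misfit = {misfit:.2f}")
```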
A robust nonlinear position observer for synchronous motors with relaxed excitation conditions
NASA Astrophysics Data System (ADS)
Bobtsov, Alexey; Bazylev, Dmitry; Pyrkin, Anton; Aranovskiy, Stanislav; Ortega, Romeo
2017-04-01
A robust, nonlinear, and globally convergent rotor position observer for surface-mounted permanent magnet synchronous motors was recently proposed by the authors. The key feature of this observer is that it requires only knowledge of the motor's resistance and inductance. Using some particular properties of the mathematical model, it is shown that the problem of state observation can be translated into one of estimating two constant parameters, which is carried out with a standard gradient algorithm. In this work, we propose to replace this estimator with a new one, called dynamic regressor extension and mixing, which has the following advantages with respect to gradient estimators: (1) the stringent persistence of excitation (PE) condition on the regressor is not necessary to ensure parameter convergence; (2) convergence is instead guaranteed under a non-square-integrability condition that has a clear physical meaning in terms of signal energy; (3) if the regressor is PE, the new observer (like the old one) ensures exponential convergence, which endows the observer with some robustness properties; (4) the new estimator includes an additional filter that constitutes an additional degree of freedom for satisfying the non-square-integrability condition. Realistic simulation results show significant performance improvement of the position observer using the new parameter estimator, with less oscillatory behaviour and a faster convergence speed.
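For reference, a discrete-time sketch of the standard gradient estimator that the new scheme is proposed to replace, for a generic linear regression y = φᵀθ; the regressor, adaptation gain, and noise level are invented rather than the motor-specific construction.

```python
import numpy as np

rng = np.random.default_rng(10)
theta_true = np.array([2.0, -1.0])        # two constant parameters to estimate
theta_hat = np.zeros(2)
gamma, dt = 5.0, 1e-3                     # adaptation gain and integration step

for k in range(20000):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(2 * t)])          # persistently exciting regressor
    y = phi @ theta_true + 0.01 * rng.normal()          # noisy measurement
    # Gradient update: theta_hat_dot = gamma * phi * (y - phi^T theta_hat)
    theta_hat += dt * gamma * phi * (y - phi @ theta_hat)

print("estimate:", theta_hat.round(3), " true:", theta_true)
```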
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with the problem of determining the statistical characteristics of variable parameters (the variation range and the distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in the input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. Using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range by a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of the uncertainties in the model parameters used in computer codes.
Assaying Mitochondrial Respiration as an Indicator of Cellular Metabolism and Fitness.
Smolina, Natalia; Bruton, Joseph; Kostareva, Anna; Sejersen, Thomas
2017-01-01
Mitochondrial respiration is the most important generator of cellular energy under most circumstances. It is a process of energy conversion of substrates into ATP. The Seahorse equipment allows measuring the oxygen consumption rate (OCR) in living cells and estimates key parameters of mitochondrial respiration in real time. Through the use of mitochondrial inhibitors, four key mitochondrial respiration parameters can be measured: basal, ATP production-linked, maximal, and proton leak-linked OCR. This approach requires application of mitochondrial inhibitors: oligomycin to block ATP synthase, FCCP to make the inner mitochondrial membrane permeable for protons and allow maximum electron flux through the electron transport chain, and rotenone and antimycin A to inhibit complexes I and III, respectively. This chapter describes the protocol for OCR assessment in cultures of primary myotubes obtained upon satellite cell fusion.
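The four derived parameters follow from simple arithmetic on the OCR levels measured before and after each injection. A minimal sketch with invented numbers (not data from the protocol) is shown below; the non-mitochondrial rate measured after rotenone/antimycin A is subtracted from the other segments.

```python
# Illustrative OCR values in pmol O2/min per well (made-up numbers),
# averaged over the measurement cycles of each assay segment.
ocr_baseline   = 180.0   # before any injection
ocr_oligomycin = 70.0    # after oligomycin (ATP synthase blocked)
ocr_fccp       = 310.0   # after FCCP (uncoupled, maximal electron flux)
ocr_rot_aa     = 25.0    # after rotenone + antimycin A (non-mitochondrial)

non_mito       = ocr_rot_aa
basal          = ocr_baseline   - non_mito
proton_leak    = ocr_oligomycin - non_mito
atp_linked     = ocr_baseline   - ocr_oligomycin
maximal        = ocr_fccp       - non_mito
spare_capacity = maximal - basal   # often reported as well

for name, value in [("basal", basal), ("ATP-linked", atp_linked),
                    ("proton leak", proton_leak), ("maximal", maximal),
                    ("spare capacity", spare_capacity)]:
    print(f"{name:15s} {value:6.1f} pmol O2/min")
```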
Artim, J M; Sikkel, P C
2016-08-01
Characterizing spatio-temporal variation in the density of organisms in a community is a crucial part of ecological study. However, doing so for small, motile, cryptic species presents multiple challenges, especially where multiple life history stages are involved. Gnathiid isopods are ecologically important marine ectoparasites, micropredators that live in substrate for most of their lives, emerging only once during each juvenile stage to feed on fish blood. Many gnathiid species are nocturnal and most have distinct substrate preferences. Studies of gnathiid use of habitat, exploitation of hosts, and population dynamics have used various trap designs to estimate rates of gnathiid emergence, study sensory ecology, and identify host susceptibility. In the studies reported here, we compare and contrast the performance of emergence, fish-baited and light trap designs, outline the key features of these traps, and determine some life cycle parameters derived from trap counts for the Eastern Caribbean coral-reef gnathiid, Gnathia marleyi. We also used counts from large emergence traps and light traps to estimate additional life cycle parameters, emergence rates, and total gnathiid density on substrate, and to calibrate the light trap design to provide estimates of rate of emergence and total gnathiid density in habitat not amenable to emergence trap deployment.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
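A rough numerical illustration of the smearing effect described above (not the authors' analytical model): a Gaussian star spot is integrated along a line segment that represents image-plane motion during the exposure. The intensity-weighted centroid of the smeared spot lands at the mid-exposure position while the spot elongates along the motion direction, which is what degrades centroiding precision in the presence of noise. All pixel-scale, Gaussian-radius and velocity values are invented.

```python
import numpy as np

# Pixel grid and assumed imaging parameters (illustrative values)
n = 64
yy, xx = np.mgrid[0:n, 0:n].astype(float)
sigma = 1.2          # Gaussian radius of the static spot, in pixels
v = 0.8              # star-spot velocity on the image plane, pixels per unit time
t_exp = 6.0          # exposure time, so the smear length is v * t_exp pixels
x0, y0 = 30.3, 32.7  # spot position at the start of the exposure

# Dynamic imaging ~ static imaging of a linear light source: integrate the
# moving Gaussian over the exposure (discretised into sub-steps).
steps = 200
img = np.zeros((n, n))
for s in np.linspace(0.0, t_exp, steps):
    cx, cy = x0 + v * s, y0           # motion along x only, for simplicity
    img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
img /= steps

# Centroid and elongation of the smeared spot
cx_est = (img * xx).sum() / img.sum()
cy_est = (img * yy).sum() / img.sum()
var_x = (img * (xx - cx_est) ** 2).sum() / img.sum()
print("mid-exposure position:", (x0 + v * t_exp / 2, y0))
print("centroid estimate:    ", (round(cx_est, 2), round(cy_est, 2)))
print("RMS size along motion: %.2f px (static sigma %.2f px)" % (np.sqrt(var_x), sigma))
```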
Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorok, R.; Butcher, M.; LaTier, A.
1995-12-31
The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.
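A minimal sketch in the spirit of the third approach (the distributions, diet composition and concentrations below are invented, not the Clark Fork data): the daily dose for a receptor is the intake-weighted dietary concentration divided by body weight, and Monte Carlo sampling of the uncertain exposure parameters produces a dose distribution instead of a single conservative point estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo realisations

# Illustrative (invented) distributions for an American kestrel exposed to lead.
body_weight = rng.normal(0.115, 0.010, n)              # kg
food_intake = rng.lognormal(np.log(0.035), 0.2, n)     # kg dry weight/day
soil_fraction = rng.uniform(0.0, 0.05, n)              # incidental soil in diet
c_invert = rng.lognormal(np.log(12.0), 0.5, n)         # mg Pb/kg in invertebrates
c_mice = rng.lognormal(np.log(3.0), 0.5, n)            # mg Pb/kg in deer mice
c_soil = rng.lognormal(np.log(150.0), 0.4, n)          # mg Pb/kg in soil
diet_invert = rng.uniform(0.5, 0.8, n)                 # dietary fraction, invertebrates
diet_mice = 1.0 - diet_invert

c_diet = diet_invert * c_invert + diet_mice * c_mice
dose = food_intake * ((1 - soil_fraction) * c_diet + soil_fraction * c_soil) / body_weight

print("median dose      %.2f mg/kg-bw/day" % np.median(dose))
print("95th percentile  %.2f mg/kg-bw/day" % np.percentile(dose, 95))
```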
Yulong Zhang; Conghe Song; Ge Sun; Lawrence E. Band; Asko Noormets; Quanfa Zhang
2015-01-01
Light use efficiency (LUE) is a key biophysical parameter characterizing the ability of plants to convert absorbed light to carbohydrate. However, the environmental regulations on LUE, especially moisture stress, are poorly understood, leading to large uncertainties in primary productivity estimated by LUE models. The objective of this study is to investigate the...
LiDAR-derived site index in the U.S. Pacihic Northwest--challenges and opportunities
Demetrios Gatziolis
2007-01-01
Site Index (SI), a key inventory parameter, is traditionally estimated by using costly and laborious field assessments of tree height and age. The increasing availability of reliable information on stand initiation timing and extent of planted, even-aged stands maintained in digital databases suggests that information on the height of dominant trees suffices for...
Special Workshop: Kolsky/Split Hopkinson Pressure Bar Testing of Ceramics
2006-09-01
merit) control armor performance and how these properties are controlled/influenced by intrinsic (crystal structure, phase transitions, and single... reproducibility uncertainties (estimates of precision) could be quantified and also identify key parameters that should be controlled in SHPB/Kolsky testing... control performance? Are there figures of merit? How are the mechanical properties influenced by intrinsic and extrinsic material characteristics
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. A PolSAR-based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. Forest tree height retrieval utilized a PolInSAR coherence-based modeling approach. Two techniques for forest height estimation, Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI), are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR-based forest height models struggled to discriminate forest vegetation, so that spurious height values were obtained over river channels and plain areas. Overestimation of forest height was also noticed in several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced to identify forested areas and suppress spurious height estimates in non-forested regions. IWCM-based modeling for forest AGB retrieval showed an R2 value of 0.5, an RMSE of 62.73 t ha-1 and a percent accuracy of 51%. TSI-based PolInSAR inversion modeling showed the most accurate result for forest height estimation. The correlation between the field-measured forest height and the tree height estimated with the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggested that the PolInSAR coherence-based modeling approach has significant potential for retrieval of forest biophysical parameters.
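For orientation, the sketch below shows the simplest ingredient behind coherence-based height inversion such as CAI: under the textbook approximation of a uniform vertical structure with negligible ground return, the volume coherence magnitude decays as a sinc of the vegetation height scaled by the vertical wavenumber, so height can be recovered by numerically inverting that relationship. This is an illustrative approximation, not the IWCM/TSI processing used in the study; the kz and coherence values are invented.

```python
import numpy as np
from scipy.optimize import brentq

def coherence_uniform_volume(hv, kz):
    """|gamma_v| for a uniform vertical profile with no ground return."""
    x = kz * hv / 2.0
    return np.abs(np.sinc(x / np.pi))   # np.sinc(t) = sin(pi t)/(pi t)

def invert_height(gamma_obs, kz, hv_max=40.0):
    """Solve |gamma_v(hv)| = gamma_obs for hv on (0, hv_max]."""
    f = lambda hv: coherence_uniform_volume(hv, kz) - gamma_obs
    return brentq(f, 1e-3, hv_max)

kz = 0.15          # vertical wavenumber in rad/m (depends on baseline geometry)
gamma_obs = 0.82   # observed volume coherence magnitude (illustrative)
print("inverted canopy height: %.1f m" % invert_height(gamma_obs, kz))
```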
Non-stationary (13)C-metabolic flux ratio analysis.
Hörl, Manuel; Schnidder, Julian; Sauer, Uwe; Zamboni, Nicola
2013-12-01
(13)C-metabolic flux analysis ((13)C-MFA) has become a key method for metabolic engineering and systems biology. In the most common methodology, fluxes are calculated by global isotopomer balancing and iterative fitting to stationary (13)C-labeling data. This approach requires a closed carbon balance, a long-lasting metabolic steady state, and the detection of (13)C-patterns in a large number of metabolites. These restrictions have largely limited the application of (13)C-MFA to the central carbon metabolism of well-studied model organisms grown in minimal media with a single carbon source. Here we introduce non-stationary (13)C-metabolic flux ratio analysis as a novel method for (13)C-MFA that allows estimating local, relative fluxes from ultra-short (13)C-labeling experiments and without the need for global isotopomer balancing. The approach relies on the acquisition of non-stationary (13)C-labeling data exclusively for metabolites in the proximity of a node of converging fluxes and a local parameter estimation with a system of ordinary differential equations. We developed a generalized workflow that takes into account reaction types and the availability of mass spectrometric data on molecular ions or fragments for data processing, modeling, parameter and error estimation. We demonstrated the approach by analyzing three key nodes of converging fluxes in central metabolism of Bacillus subtilis. We obtained flux estimates that are in agreement with published results obtained from steady-state experiments, but reduced the duration of the necessary (13)C-labeling experiment to less than a minute. These results show that our strategy makes it possible to formally estimate relative pathway fluxes on extremely short time scales, neglecting cellular carbon balancing. Hence this approach paves the way towards targeted (13)C-MFA in dynamic systems with multiple carbon sources and rich media. © 2013 Wiley Periodicals, Inc.
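A hedged toy version of the local estimation idea (not the authors' workflow): at a node where two fluxes converge into one pool, the labeled fraction of the product follows a single ordinary differential equation driven by the labeled fractions of the two substrates, and fitting its short transient yields the flux ratio and the turnover rate without any global isotopomer balancing. The pool turnover, enrichments and time points below are invented.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Labeled fractions of the two converging substrates, assumed (approximately)
# constant over the ultra-short labeling window.
x_a, x_b = 0.95, 0.10

def product_enrichment(t, flux_ratio, turnover):
    """Labeled fraction of the product pool P at times t.

    flux_ratio = v_A / (v_A + v_B); turnover = (v_A + v_B) / pool size of P.
    """
    x_in = flux_ratio * x_a + (1.0 - flux_ratio) * x_b
    dxdt = lambda x, _t: turnover * (x_in - x)
    return odeint(dxdt, 0.0, t)[:, 0]

# Synthetic "measurements" over the first 60 seconds of labeling
t_obs = np.linspace(0, 60, 13)
x_obs = product_enrichment(t_obs, 0.7, 0.05) \
        + 0.01 * np.random.default_rng(3).standard_normal(13)

popt, _ = curve_fit(product_enrichment, t_obs, x_obs,
                    p0=[0.5, 0.1], bounds=([0, 0], [1, 1]))
print("estimated flux ratio v_A/(v_A+v_B): %.2f" % popt[0])
print("estimated turnover rate (1/s):      %.3f" % popt[1])
```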
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with the dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising of a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, however, slower compared to the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.
A Bayesian approach for parameter estimation and prediction using a computationally intensive model
Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...
2015-02-05
Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y = η(θ) + ε, where ε accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(·), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(·). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Finally, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
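A compact sketch of the emulator-based calibration pattern described above (illustrative only; the "physics model" here is a cheap stand-in, whereas the point of the approach is that the real model is expensive and is only run for a small ensemble of settings): a Gaussian-process response surface is fitted to the ensemble and then stands in for η(θ) inside a random-walk Metropolis sampler.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

def eta(theta):                       # stand-in for the expensive physics model
    return np.sin(3 * theta) + 0.5 * theta

theta_true, noise = 0.6, 0.05
y_obs = eta(theta_true) + noise * rng.standard_normal()

# Small ensemble of model runs, then an emulator fitted to it
theta_design = np.linspace(-1, 2, 12).reshape(-1, 1)
y_design = eta(theta_design.ravel())
gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(1e-6), normalize_y=True)
gp.fit(theta_design, y_design)

def log_post(theta):
    if not -1.0 <= theta <= 2.0:      # flat prior on [-1, 2]
        return -np.inf
    mu, sd = gp.predict(np.array([[theta]]), return_std=True)
    var = noise**2 + sd[0]**2         # measurement + emulator uncertainty
    return -0.5 * (y_obs - mu[0])**2 / var - 0.5 * np.log(var)

# Random-walk Metropolis using only emulator evaluations
chain, cur = [], 0.0
lp_cur = log_post(cur)
for _ in range(5000):
    prop = cur + 0.2 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        cur, lp_cur = prop, lp_prop
    chain.append(cur)
post = np.array(chain[1000:])
print("posterior mean %.2f, sd %.2f (true %.2f)" % (post.mean(), post.std(), theta_true))
```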
Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration
NASA Astrophysics Data System (ADS)
Wells, B.; Toniolo, H. A.; Stuefer, S. L.
2015-12-01
Understanding hydrologic changes is key for preparing for possible future scenarios. On the Kenai Peninsula in Alaska the yearly salmon runs provide a valuable stimulus to the economy. They are the focus of a large commercial fishing fleet, but also a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in the prediction of future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowlands stream that has been modeled using the Army Corps of Engineers event-based modeling package HEC-HMS. With the use of historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during the calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and help guide communities and officials to make better-educated decisions regarding the changing hydrology in the area and the associated economic drivers.
Boschetti, Lucio; Ottavian, Matteo; Facco, Pierantonio; Barolo, Massimiliano; Serva, Lorenzo; Balzan, Stefania; Novelli, Enrico
2013-11-01
The use of near-infrared spectroscopy (NIRS) is proposed in this study for the characterization of the quality parameters of a smoked and dry-cured meat product known as Bauernspeck (originally from Northern Italy), as well as of some technological traits of the pork carcass used for its manufacturing. In particular, NIRS is shown to successfully estimate several key quality parameters (including water activity, moisture, dry matter, ash and protein content), suggesting its suitability for real-time application as a replacement for expensive and time-consuming chemical analyses. Furthermore, a correlative approach based on canonical correlation analysis was used to investigate the spectral regions that are most correlated with the characteristics of interest. The identification of these regions, which can be linked to the absorbance of the main functional chemical groups, is intended to provide a better understanding of the chemical structure of the substrate under investigation. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal-release, accidental, disposal or remediation scenarios in order to assess exposure requires a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class did not seem to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2003-01-01
An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing, when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.
Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.
Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L
2018-01-01
Biochemical methane potential (BMP) test is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, this technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be held up for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of BMP first-order model parameters, i.e. the methane yield (B0) and the first-order rate constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d-1) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1
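The first-order model referred to is commonly written B(t) = B0(1 - exp(-kt)). A hedged sketch of early parameter estimation on synthetic data (not the paper's sensitivity-function criteria): the model is fitted to only the first days of a BMP curve and the extrapolated B0 is compared with the value obtained from the complete test.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, b0, k):
    """Cumulative methane yield B(t) = B0 * (1 - exp(-k t))."""
    return b0 * (1.0 - np.exp(-k * t))

# Synthetic 60-day BMP curve for a moderately biodegradable substrate
rng = np.random.default_rng(5)
b0_true, k_true = 320.0, 0.15          # mL CH4/g VS and 1/d (illustrative)
t = np.arange(0, 61, 1.0)
b = first_order(t, b0_true, k_true) + 5.0 * rng.standard_normal(t.size)

for t_cut in (10, 15, 20, 60):          # progressively longer test times
    mask = t <= t_cut
    (b0_hat, k_hat), _ = curve_fit(first_order, t[mask], b[mask],
                                   p0=[200.0, 0.1], bounds=([0, 0], [2000, 2]))
    print(f"first {t_cut:2d} d:  B0 = {b0_hat:6.1f} mL/g VS,  k = {k_hat:.3f} 1/d")
```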
Model reduction for experimental thermal characterization of a holding furnace
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2017-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in the manufacturing of these parts. What is required here is to define the structure of a reduced heat transfer model and to identify it experimentally through an estimation of its parameters. Internal sensor outputs, together with this model, can be used to assess the thermal state of the furnace through an inverse approach, for better control. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite-element code. The internal induction heat source as well as the transient radiative transfer inside the furnace are calculated with this detailed model. A reduced lumped-body model has been constructed to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been carried out using a Levenberg-Marquardt least-squares minimization algorithm, using two synthetic temperature signals, with a further validation test.
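A minimal illustration of the reduction-and-identification step (a made-up two-node lumped thermal model and synthetic temperature signals, not the FlexPDE furnace model): the parameters of a small thermal network are estimated from two temperature histories with a Levenberg-Marquardt least-squares fit.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def lumped_model(params, t, q_in):
    """Two-node lumped thermal model: node 1 is heated, node 2 is coupled to it
    and loses heat to a 20 degC ambient. params = (C1, C2, k12, k2a)."""
    c1, c2, k12, k2a = params
    def rhs(T, _t):
        T1, T2 = T
        dT1 = (q_in - k12 * (T1 - T2)) / c1
        dT2 = (k12 * (T1 - T2) - k2a * (T2 - 20.0)) / c2
        return [dT1, dT2]
    return odeint(rhs, [20.0, 20.0], t)

t = np.linspace(0, 3600, 200)
true = (5e4, 2e4, 40.0, 15.0)                      # illustrative "true" values
rng = np.random.default_rng(11)
data = lumped_model(true, t, q_in=8e3) + 2.0 * rng.standard_normal((t.size, 2))

def residuals(p):
    p = np.abs(p)                                  # crude positivity guard for this sketch
    return (lumped_model(p, t, q_in=8e3) - data).ravel()

fit = least_squares(residuals, x0=[1e4, 1e4, 10.0, 10.0], method='lm')
print("estimated (C1, C2, k12, k2a):", np.round(np.abs(fit.x), 1))
```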
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
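A schematic of the simulation-based likelihood idea on a deliberately tiny stochastic model (not FORMIND): for each proposed parameter value the stochastic simulator is run repeatedly, a Gaussian is fitted to the resulting summary statistic, and that parametric approximation supplies the likelihood of the observed summary inside a Metropolis sampler. The simulator, prior and settings are invented.

```python
import numpy as np

rng = np.random.default_rng(13)

def simulate_summaries(log_growth, n_rep=30):
    """Toy stochastic 'forest' simulator: returns the summary statistic
    (stem count after 50 steps) for n_rep independent runs."""
    out = []
    for _ in range(n_rep):
        n = 50.0
        for _ in range(50):
            n = max(n + rng.normal(np.exp(log_growth) - 0.02 * n, 2.0), 0.0)
        out.append(n)
    return np.array(out)

obs_summary = simulate_summaries(np.log(1.0)).mean()   # pretend this is field data

def log_lik(log_growth):
    sims = simulate_summaries(log_growth)
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-6       # parametric (Gaussian) approximation
    return -0.5 * ((obs_summary - mu) / sd) ** 2 - np.log(sd)

# Random-walk Metropolis over the single parameter (flat prior implied)
chain, cur = [], np.log(0.5)
ll_cur = log_lik(cur)
for _ in range(2000):
    prop = cur + 0.1 * rng.standard_normal()
    ll_prop = log_lik(prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:
        cur, ll_cur = prop, ll_prop
    chain.append(cur)
post = np.exp(np.array(chain[500:]))
print("posterior mean growth rate: %.2f (true 1.00)" % post.mean())
```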
Statistical fusion of continuous labels: identification of cardiac landmarks
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.
2011-03-01
Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter, one of the key performance indices, is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle to the left ventricle in CINE cardiac data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F
In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. Particularly, there is strong disagreement whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov Chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive Metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understand and predict the flow of water through catchments.
Khan, Faisal Nadeem; Zhong, Kangping; Zhou, Xian; Al-Arashi, Waled Hussein; Yu, Changyuan; Lu, Chao; Lau, Alan Pak Tao
2017-07-24
We experimentally demonstrate the use of deep neural networks (DNNs) in combination with signals' amplitude histograms (AHs) for simultaneous optical signal-to-noise ratio (OSNR) monitoring and modulation format identification (MFI) in digital coherent receivers. The proposed technique automatically extracts OSNR and modulation format dependent features of AHs, obtained after constant modulus algorithm (CMA) equalization, and exploits them for the joint estimation of these parameters. Experimental results for 112 Gbps polarization-multiplexed (PM) quadrature phase-shift keying (QPSK), 112 Gbps PM 16 quadrature amplitude modulation (16-QAM), and 240 Gbps PM 64-QAM signals demonstrate OSNR monitoring with mean estimation errors of 1.2 dB, 0.4 dB, and 1 dB, respectively. Similarly, the results for MFI show 100% identification accuracy for all three modulation formats. The proposed technique applies deep machine learning algorithms inside a standard digital coherent receiver and does not require any additional hardware. Therefore, it is attractive for cost-effective multi-parameter estimation in next-generation elastic optical networks (EONs).
A Novel Strain-Based Method to Estimate Tire Conditions Using Fuzzy Logic for Intelligent Tires.
Garcia-Pozuelo, Daniel; Olatunbosun, Oluremi; Yunta, Jorge; Yang, Xiaoguang; Diaz, Vicente
2017-02-10
The so-called intelligent tires are one of the most promising research fields for automotive engineers. These tires are equipped with sensors which provide information about vehicle dynamics. Up to now, the commercial intelligent tires only provide information about inflation pressure and their contribution to stability control systems is currently very limited. Nowadays one of the major problems for intelligent tire development is how to embed feasible and low cost sensors to obtain reliable information such as inflation pressure, vertical load or rolling speed. These parameters provide key information for vehicle dynamics characterization. In this paper, we propose a novel algorithm based on fuzzy logic to estimate the mentioned parameters by means of a single strain-based system. Experimental tests have been carried out in order to prove the suitability and durability of the proposed on-board strain sensor system, as well as its low cost advantages, and the accuracy of the obtained estimations by means of fuzzy logic.
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N², where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
NASA Astrophysics Data System (ADS)
Petropoulos, George; Wooster, Martin J.; Carlson, Toby N.; Drake, Nick
2010-05-01
Accurate, spatially explicit estimates of key land-atmosphere fluxes and related land surface parameters are of great importance in a range of disciplines including hydrology, meteorology, agriculture and ecology. Estimation of those parameters from remote sensing frequently integrates such data with mathematical representations of the transfers of energy, mass and radiation within the soil-vegetation-atmosphere continuum, known as Soil Vegetation Atmosphere Transfer (SVAT) models. The ability of one such inversion modelling scheme to resolve key surface energy fluxes and soil surface moisture content is examined here using data from a multispectral high-spatial-resolution imaging instrument, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and the SimSphere one-dimensional SVAT model. The accuracy of the investigated methodology, the so-called "triangle" method, is verified using validated ground observations from selected days at nine CARBOEUROPE IP sites representing a variety of climatic, topographic and environmental conditions. Subsequently, a new framework is suggested for the retrieval of two additional parameters by the investigated method, namely the Evaporative (EF) and Non-Evaporative (NEF) Fractions. Results indicated close agreement of the inverted surface flux and surface moisture availability maps, as well as of the EF and NEF parameters, with the observations, both spatially and temporally, with accuracies comparable to those obtained in similar experiments with high spatial resolution data. Regional inspection of the inverted surface flux maps showed an explainable distribution in the range of the inverted parameters in relation to the surface heterogeneity. The overall performance of the "triangle" inversion methodology was found to be affected predominantly by a "correct" SVAT model initialisation representative of the test site environment, most importantly the atmospheric conditions required as the model's initial conditions. This study represents the first comprehensive evaluation of the performance of this particular methodological implementation in a European setting using the SimSphere SVAT with ASTER data. The present work is also very timely in that a variation of this specific inversion methodology has been proposed for the operational retrieval of soil surface moisture content by the National Polar-orbiting Operational Environmental Satellite System (NPOESS), on a series of satellite platforms due to be launched over the 12 years starting from 2012. KEYWORDS: micrometeorology, surface heat fluxes, soil moisture content, ASTER, triangle method, SimSphere, CarboEurope IP
Implementation of the Global Parameters Determination in Gaia's Astrometric Solution (AGIS)
NASA Astrophysics Data System (ADS)
Raison, F.; Olias, A.; Hobbs, D.; Lindegren, L.
2010-12-01
Gaia is ESA’s space astrometry mission with a foreseen launch date in early 2012. Its main objective is to perform a stellar census of the 1000 Million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS). A core part of AGIS is to determine the accurate spacecraft attitude, geometric instrument calibration and astrometric model parameters for a well-behaved subset of all the objects (the ‘primary stars’). In addition, a small number of global parameters will be estimated, one of these being PPN γ. We present here the implementation of the algorithms dedicated to the determination of the global parameters.
Sensitivity study of Space Station Freedom operations cost and selected user resources
NASA Technical Reports Server (NTRS)
Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy
1990-01-01
The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.
Composable security proof for continuous-variable quantum key distribution with coherent States.
Leverrier, Anthony
2015-02-20
We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
White paper updating conclusions of 1998 ILAW performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
MANN, F.M.
The purpose of this document is to provide a comparison of the estimated immobilized low-activity waste (LAW) disposal system performance against established performance objectives, using the best estimates for parameters and models to describe the system. The principal advances in knowledge since the last performance assessment (known as the 1998 ILAW PA [Mann 1998a]) have been in site-specific information and data on waste form performance for glass formulations relevant to BNFL, Inc. The white paper also estimates the maximum release rates for technetium and other key radionuclides and chemicals from the waste form. Finally, this white paper provides limited information on the impact of changes in waste form loading.
A numerical testbed for hypotheses of extraterrestrial life and intelligence
NASA Astrophysics Data System (ADS)
Forgan, D. H.
2009-04-01
The search for extraterrestrial intelligence (SETI) has been heavily influenced by solutions to the Drake Equation, which returns an integer value for the number of communicating civilizations resident in the Milky Way, and by the Fermi Paradox, glibly stated as: ‘If they are there, where are they?’. Both rely on using average values of key parameters, such as the mean signal lifetime of a communicating civilization. A more accurate answer must take into account the distribution of stellar, planetary and biological attributes in the galaxy, as well as the stochastic nature of evolution itself. This paper outlines a method of Monte Carlo realization that does this, and hence allows an estimation of the distribution of key parameters in SETI, as well as allowing a quantification of their errors (and the level of ignorance therein). Furthermore, it provides a means for competing theories of life and intelligence to be compared quantitatively.
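A toy version of the Monte Carlo realization idea (the parameter distributions below are invented placeholders, not the paper's stellar and biological models): instead of multiplying single average values, each Drake-equation factor is drawn from a distribution, yielding a distribution for the number of communicating civilizations rather than a single integer.

```python
import numpy as np

rng = np.random.default_rng(2009)
n = 200_000

r_star = rng.lognormal(np.log(7.0), 0.3, n)        # star formation rate per year
f_p    = rng.uniform(0.5, 1.0, n)                  # fraction of stars with planets
n_e    = rng.lognormal(np.log(0.4), 0.5, n)        # habitable planets per system
f_l    = rng.beta(2, 2, n)                         # fraction developing life
f_i    = rng.beta(1, 3, n)                         # fraction developing intelligence
f_c    = rng.beta(1, 3, n)                         # fraction that communicate
big_l  = rng.lognormal(np.log(1.0e3), 1.5, n)      # signal lifetime in years

n_civ = r_star * f_p * n_e * f_l * f_i * f_c * big_l
print("median N: %.1f" % np.median(n_civ))
print("68%% interval: %.1f - %.1f" % tuple(np.percentile(n_civ, [16, 84])))
```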
Agüera, Antonio; Collard, Marie; Jossart, Quentin; Moreau, Camille; Danis, Bruno
2015-01-01
Marine organisms in Antarctica are adapted to an extreme ecosystem including extremely stable temperatures and strong seasonality due to changes in day length. It is now largely accepted that Southern Ocean organisms are particularly vulnerable to global warming, with some regions already being challenged by a rapid increase of temperature. Climate change affects both the physical and biotic components of marine ecosystems and will have an impact on the distribution and population dynamics of Antarctic marine organisms. To predict and assess the effect of climate change on marine ecosystems, a more comprehensive knowledge of the life history and physiology of key species is urgently needed. In this study we estimate the Dynamic Energy Budget (DEB) model parameters for the key benthic Antarctic species, the sea star Odontaster validus, using available information from literature and experiments. The DEB theory is unique in capturing the metabolic processes of an organism through its entire life cycle as a function of temperature and food availability. The DEB model allows for the inclusion of the different life history stages, and thus becomes a tool that can be used to model lifetime feeding, growth, reproduction, and their responses to changes in biotic and abiotic conditions. The DEB model presented here includes the estimation of reproduction handling rules for the development of simultaneous oocyte cohorts within the gonad. Additionally, it links the DEB model reserves to the pyloric caeca, an organ whose function has long been ascribed to energy storage. The estimated parameters describe the slowed-down metabolism of long-living animals that mature slowly. O. validus has a large reserve that, matching its low maintenance costs, allows it to withstand long periods of starvation. Gonad development is continuous, and individual cohorts developed within the gonads grow in biomass following a power function of the age of the cohort. The DEB model developed here for O. validus allowed us to increase our knowledge of the ecophysiology of this species, providing new insights into the role of food availability and temperature in its life cycle and reproduction strategy.
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics, and a perceptual model that transforms physical parameters using human visual system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. KEY FEATURES: * Integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework * Provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making * Emphasizes the role of mathematical modeling in the conduct of science and management * Utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples
Developing population models with data from marked individuals
Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background: We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results: We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion: Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
NASA Astrophysics Data System (ADS)
Chen, Yanling; Gong, Adu; Li, Jing; Wang, Jingmei
2017-04-01
Accurate crop growth monitoring and yield prediction are important for the sustainable development of agriculture and for ensuring national food security. Remote sensing observation and crop growth simulation models are two technologies with great potential for crop growth monitoring and yield forecasting, but each has limitations, either in mechanism or in regional application. Remote sensing information cannot reveal crop growth and development, the inner mechanism of yield formation, or the effects of environmental meteorological conditions, while crop growth simulation models face difficulties in data acquisition and parameterization when moving from single-point to regional application. To exploit the advantages of both technologies, the coupling of remote sensing information with crop growth simulation models has been studied; filtering and optimizing model parameters are key to yield estimation based on regional assimilation of remote sensing data into crop models. Winter wheat of GaoCheng was selected as the experimental object in this paper. The essential data were collected, including biochemical data, farmland environmental data, and meteorological data for several critical growth periods, together with imagery from the environmental mitigation small satellite HJ-CCD. The main work and conclusions are as follows. (1) Seven vegetation indices were selected to retrieve LAI, and a linear regression model was built between each index and the measured LAI. The EVI model gave the highest accuracy (R2 = 0.964 at the anthesis stage and R2 = 0.920 at the filling stage), so EVI was chosen as the optimal vegetation index for predicting LAI in this paper. (2) The EFAST method was adopted to conduct a sensitivity analysis of the 26 initial parameters of the WOFOST model, and a sensitivity index was constructed to evaluate the influence of each parameter on winter wheat yield formation. Six parameters with sensitivity indices greater than 0.1 were chosen as sensitive factors: TSUM1, SLATB1, SLATB2, SPAN, EFFTB3 and TMPF4. The remaining parameters were set from field measurement and calculation, the available literature, or WOFOST defaults, which completed the calibration of the WOFOST parameters. (3) A look-up table algorithm was used to perform single-point yield estimation through assimilation of the retrieved LAI into the WOFOST model. The simulation achieved high accuracy, meeting the purpose of the assimilation (R2 = 0.941 and RMSE = 194.58 kg/hm2). In summary, the optimum values of the sensitive parameters were determined and single-point yield estimation was completed. Key words: yield estimation of winter wheat, LAI, WOFOST crop growth model, assimilation
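The per-index regression step in point (1) amounts to a simple least-squares fit of measured LAI against a vegetation index. The sketch below shows this in Python; the EVI and LAI values are invented placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical plot-level data: EVI from imagery vs. field-measured LAI.
evi = np.array([0.31, 0.38, 0.42, 0.47, 0.52, 0.58, 0.63, 0.69])
lai = np.array([1.2, 1.9, 2.3, 2.8, 3.4, 4.0, 4.6, 5.3])

# Ordinary least-squares fit LAI = a * EVI + b and its coefficient of determination.
a, b = np.polyfit(evi, lai, deg=1)
pred = a * evi + b
r2 = 1.0 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(f"LAI = {a:.2f} * EVI + {b:.2f},  R^2 = {r2:.3f}")
```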
Hutson, Alan D
2018-01-01
In this note, we develop a novel semi-parametric estimator of the survival curve that is comparable to the product-limit estimator under very relaxed assumptions. The estimator is based on a beta parametrization that warps the empirical distribution of the observed censored and uncensored data. The parameters are obtained using a pseudo-maximum likelihood approach, adjusting the survival curve to account for the censored observations. In the univariate setting, the new estimator tends to better extend the range of the survival estimation given a high degree of censoring. However, the key feature of this paper is that we develop a new two-group semi-parametric exact permutation test for comparing survival curves that is generally superior to the classic log-rank and Wilcoxon tests and provides the best global power across a variety of alternatives. The new test is readily extended to the k-group setting. PMID:26988931
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
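A sketch of the local-model idea described above: fit an exponentially decaying cosine to a short PPG window and read the heart rate from the fitted frequency, constrained to the physiological range. The sampling rate, window length, bounds, and synthetic signal are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 100.0                                    # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)               # 2-second analysis window

def decaying_cosine(t, amp, decay, freq, phase, offset):
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase) + offset

# Synthetic PPG-like segment with a 1.2 Hz (72 BPM) pulsatile component.
rng = np.random.default_rng(0)
signal = decaying_cosine(t, 1.0, 0.3, 1.2, 0.4, 0.0) + 0.05 * rng.standard_normal(t.size)

# Local fit; bound the frequency to a physiological heart-rate range of 0.5-3 Hz.
p0 = [1.0, 0.1, 1.0, 0.0, 0.0]
bounds = ([0.0, 0.0, 0.5, -np.pi, -1.0], [5.0, 5.0, 3.0, np.pi, 1.0])
params, _ = curve_fit(decaying_cosine, t, signal, p0=p0, bounds=bounds)
print(f"estimated heart rate: {60.0 * params[2]:.1f} BPM")
```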
Heritability in the genomics era--concepts and misconceptions.
Visscher, Peter M; Hill, William G; Wray, Naomi R
2008-04-01
Heritability allows a comparison of the relative importance of genes and environment to the variation of traits within and across populations. The concept of heritability and its definition as an estimable, dimensionless population parameter was introduced by Sewall Wright and Ronald Fisher nearly a century ago. Despite continuous misunderstandings and controversies over its use and application, heritability remains key to the response to selection in evolutionary biology and agriculture, and to the prediction of disease risk in medicine. Recent reports of substantial heritability for gene expression and new estimation methods using marker data highlight the relevance of heritability in the genomics era.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences between this approach and Approximate Bayesian Computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
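The core idea, a parametric likelihood approximation built from repeated stochastic simulations and placed inside a conventional Metropolis MCMC, can be sketched on a toy stochastic model; the model, summary statistic, prior, and tuning constants below are assumptions for illustration and have nothing to do with FORMIND itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(r, n_rep=40):
    """Toy stochastic growth model; returns the final 'biomass' of each replicate."""
    out = np.empty(n_rep)
    for k in range(n_rep):
        x = 0.1
        for _ in range(30):
            x += r * x * (1 - x) + 0.02 * rng.standard_normal()
        out[k] = x
    return out

def approx_loglik(r, observed):
    """Parametric (normal) approximation of the simulation-based likelihood."""
    sims = simulate(r)
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-6
    return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd)

observed = 0.93                              # hypothetical field summary statistic
r_current, ll_current = 0.3, -np.inf
chain = []
for _ in range(1500):                        # Metropolis random walk on r
    r_prop = r_current + 0.05 * rng.standard_normal()
    if 0.0 < r_prop < 1.0:                   # flat prior on (0, 1)
        ll_prop = approx_loglik(r_prop, observed)
        if np.log(rng.random()) < ll_prop - ll_current:
            r_current, ll_current = r_prop, ll_prop
    chain.append(r_current)
print("posterior mean of r:", np.mean(chain[500:]))
```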
Abad-Franch, Fernando; Ferraz, Gonçalo; Campos, Ciro; Palomeque, Francisco S.; Grijalva, Mario J.; Aguilar, H. Marcelo; Miles, Michael A.
2010-01-01
Background Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. Methodology/Principal Findings The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40–60%) across regions, and well above the observed infestation rate (24%). Detection probability is higher (∼0.55 on average) in the richest-soil region than elsewhere (∼0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of infestation rate variation. Conclusions/Significance Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping. Landscape-scale effects indicate that palm triatomine populations can endure deforestation in rural areas, but become rarer in heavily disturbed urban settings. Our methodological approach has wide application in infectious disease research; by improving eco-epidemiological parameter estimation, it can also significantly strengthen vector surveillance-control strategies. PMID:20209149
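The estimation idea, repeated surveys of each palm so that imperfect detection can be separated from true infestation, corresponds to a simple two-parameter occupancy model. The sketch below fits such a model by maximum likelihood to made-up detection histories; it is not the authors' multi-scale, covariate-based analysis, and the true values used in the simulation are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate detection histories: n_sites palms, k_visits surveys per palm.
n_sites, k_visits = 298, 3
psi_true, p_true = 0.5, 0.4                  # infestation and detection probabilities
infested = rng.random(n_sites) < psi_true
detections = rng.binomial(k_visits, p_true * infested)

def neg_loglik(theta):
    psi, p = 1 / (1 + np.exp(-theta))        # logit-scale parameters mapped to (0, 1)
    ll = 0.0
    for y in detections:
        if y > 0:                            # detected at least once: palm is infested
            ll += np.log(psi) + y * np.log(p) + (k_visits - y) * np.log(1 - p)
        else:                                # never detected: infested-but-missed or truly empty
            ll += np.log(psi * (1 - p) ** k_visits + (1 - psi))
    return -ll

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"estimated infestation probability {psi_hat:.2f}, detection probability {p_hat:.2f}")
```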
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
The impact of temporal sampling resolution on parameter inference for biological transport models.
Harrison, Jonathan U; Baker, Ruth E
2018-06-25
Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
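To make the sampling-resolution issue concrete, the sketch below simulates a one-dimensional velocity jump (run-and-reverse) process and estimates the reorientation rate naively from direction changes observed at increasingly coarse sampling intervals; the rates and durations are illustrative, and this naive estimator is deliberately simpler than the hidden-state inference used in the paper. The naive estimate typically degrades as the sampling interval grows, which is the effect the paper's framework is designed to mitigate.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, speed, T, dt_fine = 1.0, 1.0, 2000.0, 0.001

# Simulate a 1-D velocity jump process: direction reverses as a Poisson process.
n = int(T / dt_fine)
reversals = rng.random(n) < lam_true * dt_fine
direction = np.where(np.cumsum(reversals) % 2 == 0, 1.0, -1.0)
position = np.cumsum(speed * direction * dt_fine)

for dt_obs in (0.01, 0.1, 0.5, 1.0):
    step = int(dt_obs / dt_fine)
    x_obs = position[::step]
    v_obs = np.sign(np.diff(x_obs))                 # apparent direction in each interval
    switches = np.sum(v_obs[1:] != v_obs[:-1])      # apparent reorientations
    lam_naive = switches / (dt_obs * (v_obs.size - 1))
    print(f"dt = {dt_obs:4.2f} s  ->  naive rate estimate {lam_naive:.2f} (true {lam_true})")
```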
Aiassa, E; Higgins, J P T; Frampton, G K; Greiner, M; Afonso, A; Amzal, B; Deeks, J; Dorne, J-L; Glanville, J; Lövei, G L; Nienstedt, K; O'connor, A M; Pullin, A S; Rajić, A; Verloo, D
2015-01-01
Food and feed safety risk assessment uses multi-parameter models to evaluate the likelihood of adverse events associated with exposure to hazards in human health, plant health, animal health, animal welfare, and the environment. Systematic review and meta-analysis are established methods for answering questions in health care, and can be implemented to minimize biases in food and feed safety risk assessment. However, no methodological frameworks exist for refining risk assessment multi-parameter models into questions suitable for systematic review, and use of meta-analysis to estimate all parameters required by a risk model may not be always feasible. This paper describes novel approaches for determining question suitability and for prioritizing questions for systematic review in this area. Risk assessment questions that aim to estimate a parameter are likely to be suitable for systematic review. Such questions can be structured by their "key elements" [e.g., for intervention questions, the population(s), intervention(s), comparator(s), and outcome(s)]. Prioritization of questions to be addressed by systematic review relies on the likely impact and related uncertainty of individual parameters in the risk model. This approach to planning and prioritizing systematic review seems to have useful implications for producing evidence-based food and feed safety risk assessment.
Laitner, John; Silverman, Dan
2012-01-01
This paper proposes and analyzes a Social Security reform in which individuals no longer face the OASI payroll tax after, say, age 54 or a career of 34 years, and their subsequent earnings have no bearing on their benefits. We first estimate parameters of a life-cycle model. Our specification includes non-separable preferences and possible disability. It predicts a consumption-expenditure change at retirement. We use the magnitude of the expenditure change, together with households’ retirement-age decisions, to identify key structural parameters. The estimated magnitude of the change in consumption-expenditure depends importantly on the treatment of consumption by adult children of the household. Simulations indicate that the reform could increase retirement ages by one year or more, equivalent variations could average more than $4,000 per household, and income tax revenues per household could increase by more than $14,000. PMID:23729902
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
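A minimal sketch of pairwise estimation under the FGM dependence model: for pairs of uniform (probability-integral-transformed) survival margins, the FGM copula density is 1 + theta*(1-2u)*(1-2v), and theta can be estimated by maximizing the sum of pairwise log-densities. Censoring, covariates, and the distance-dependent theta of the paper are omitted; the data here are simulated.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def sample_fgm(theta, n):
    """Draw (u, v) pairs from the FGM copula by conditional inversion."""
    u = rng.random(n)
    w = rng.random(n)
    a = theta * (1 - 2 * u)
    # Solve w = v + a*v*(1-v) for v (quadratic in v); a = 0 reduces to v = w.
    v = np.where(np.abs(a) < 1e-12, w,
                 ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a))
    return u, v

u, v = sample_fgm(theta=0.6, n=2000)

def neg_pairwise_loglik(theta):
    dens = 1.0 + theta * (1 - 2 * u) * (1 - 2 * v)   # FGM copula density c(u, v)
    return -np.sum(np.log(dens))

res = minimize_scalar(neg_pairwise_loglik, bounds=(-1 + 1e-6, 1 - 1e-6), method="bounded")
print("estimated FGM dependence parameter:", round(res.x, 3))
```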
On the issues of probability distribution of GPS carrier phase observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In common practice the observables related to Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slips detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS
Comment on "High resolution coherence analysis between planetary and climate oscillations"
NASA Astrophysics Data System (ADS)
Holm, Sverre
2018-07-01
The paper by Scafetta entitled "High resolution coherence analysis between planetary and climate oscillations", May 2016 claims coherence between planetary movements and the global temperature anomaly. The claim is based on data analysis using the canonical covariance analysis (CCA) estimator for the magnitude squared coherence (MSC). It assumes a model with a predetermined number of sinusoids for the climate data. The results are highly dependent on this prior assumption, and may therefore be criticized for being based on the opposite of a null hypothesis. More importantly, since values of key parameters in the CCA method are not given, some experiments have been performed using the software of the original authors of the CCA estimator. The purpose was to replicate the results of Scafetta using what was perceived to be the most probable parameter values. Despite best efforts, this was not possible.
NASA Astrophysics Data System (ADS)
Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Zeng, Guihua
2018-01-01
In a practical continuous-variable quantum key distribution (CVQKD) system, real-time shot-noise measurement (RTSNM) is an essential procedure for preventing the eavesdropper exploiting the practical security loopholes. However, the performance of this procedure itself is not analyzed under the real-world condition. Therefore, we indicate the RTSNM practical performance and investigate its effects on the CVQKD system. In particular, due to the finite-size effect, the shot-noise measurement at the receiver's side may decrease the precision of parameter estimation and consequently result in a tight security bound. To mitigate that, we optimize the block size for RTSNM under the ensemble size limitation to maximize the secure key rate. Moreover, the effect of finite dynamics of amplitude modulator in this scheme is studied and its mitigation method is also proposed. Our work indicates the practical performance of RTSNM and provides the real secret key rate under it.
Modern methods for the quality management of high-rate melt solidification
NASA Astrophysics Data System (ADS)
Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.
2016-12-01
The quality management of high-rate melt solidification requires a combined solution obtained by methods and approaches adapted to the specific situation. A technological audit is recommended to assess the capabilities of the process. Statistical methods are proposed, together with the choice of key parameters. Numerical methods, which can be used to perform simulations under multifactor technological conditions and to increase the quality of decisions, are of particular importance.
Report of the Horizontal Launch Study
NASA Technical Reports Server (NTRS)
Wilhite, Alan W.; Bartolotta, Paul A.
2011-01-01
A study of horizontal launch concepts has been conducted. This study, jointly sponsored by the Defense Advanced Research Projects Agency (DARPA) and the National Aeronautics and Space Administration (NASA) was tasked to estimate the economic and technical viability of horizontal launch approaches. The study team identified the key parameters and critical technologies which determine mission viability and reported on the state of the art of critical technologies, along with objectives for technology development.
2010-09-19
estimated directly from the surveillance data. Infection control measures were implemented in the form of health care worker hand-hygiene before and after ... hospital infections, is used to motivate possibilities of modeling nosocomial infection dynamics. This is done in the context of hospital monitoring and ... model development. Key Words: Delay equations, discrete events, nosocomial infection dynamics, surveillance data, inverse problems, parameter
TORABIPOUR, Amin; ZERAATI, Hojjat; ARAB, Mohammad; RASHIDIAN, Arash; AKBARI SARI, Ali; SARZAIEM, Mahmuod Reza
2016-01-01
Background: To determine the required number of hospital beds in cardiac surgery departments using a stochastic simulation approach. Methods: This study was performed from Mar 2011 to Jul 2012 in three phases: first, collection of data from 649 patients in the cardiac surgery departments of two large teaching hospitals (in Tehran, Iran); second, statistical analysis and formulation of a multivariate linear regression model to determine the factors that affect patients' length of stay; third, development of a stochastic simulation system (from admission to discharge) based on key parameters to estimate the required bed capacity. Results: The current cardiac surgery department with 33 beds can admit patients on only 90.7% of days (4535 d) and will require more than 33 beds on 9.3% of days (efficient cut-off point). According to the simulation method, the studied cardiac surgery department will require 41–52 beds to admit all patients over the next 12 years. Finally, a one-day reduction in length of stay would decrease the need for hospital beds by two beds annually. Conclusion: Variation in length of stay and its affecting factors can affect the required number of beds. Statistical and stochastic simulation models are applicable and useful methods to estimate and manage hospital beds based on key hospital parameters. PMID:27957466
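The kind of admission-to-discharge simulation described in the third phase can be sketched as follows; the arrival rate, the length-of-stay distribution, and the planning horizon are invented for illustration and would in practice come from the fitted regression model and hospital records.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 365 * 12                      # 12-year planning horizon
arrival_rate = 1.4                   # assumed mean admissions per day
los_median, los_sigma = 9.0, 0.5     # assumed lognormal length-of-stay parameters

occupancy = np.zeros(days + 60)      # extra tail so late discharges fit
for day in range(days):
    n_adm = rng.poisson(arrival_rate)
    los = np.ceil(rng.lognormal(np.log(los_median), los_sigma, n_adm)).astype(int)
    for stay in los:
        occupancy[day:day + stay] += 1          # the patient occupies a bed each day of stay

daily = occupancy[:days]
for beds in (33, 41, 52):
    covered = np.mean(daily <= beds) * 100
    print(f"{beds} beds cover demand on {covered:.1f}% of days")
print("beds needed to cover 95% of days:", int(np.percentile(daily, 95)))
```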
NASA Astrophysics Data System (ADS)
Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.
2017-11-01
Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.
Reproducibility of isopach data and estimates of dispersal and eruption volumes
NASA Astrophysics Data System (ADS)
Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.
2012-12-01
Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, then representing these contours in thickness versus square root area plots, fitting empirical laws to the thinning relationships and integrating over the square root area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand-contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as using the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences arising from the hand-contours of the well constrained deposit and to eliminate the effects of extrapolations to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume is approximately 1.76 × 10^6 m^3 (geometric mean for all methods), with maximum and minimum estimates of 2.5 × 10^6 m^3 and 1.1 × 10^6 m^3. Different integration methods applied to identical isopach maps result in volume estimate differences of up to 50% and, on average, a maximum variation between integration methods of 14%. Volume estimates with methods (A), (C) and (D) show strong correlation (r = 0.8 to r = 0.9), while the correlation of (B) with the other methods is weaker (r = 0.2 to r = 0.6) and the correlation between (B) and (C) is not statistically significant. We find that the choice of larger maximum contours leads to smaller volume estimates with method (C), but larger estimates with the other methods. We do not find statistically significant correlation between volume estimates and participants' experience level, number of chosen contour levels, or smoothness of contours. Overall, application of the different methods to the same maps leads to similar mean volume estimates, but the different methods show different dependencies and varying spread of volume estimates. The results indicate that these key parameters are less critically dependent on the operator and their choices of contour values, intervals etc., and more sensitive to the selection of the technique used to integrate these data.
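For reference, method (B) with a single exponential segment reduces to a small calculation: fit ln(thickness) against the square root of the enclosed isopach area, and integrate the fitted law T = T0*exp(-k*sqrt(A)), which gives V = 2*T0/k^2 (Pyle-type relation). The isopach values below are invented placeholders, not the Kilauea Iki data.

```python
import numpy as np

# Hypothetical isopach data: thickness contour [m] and enclosed area [m^2].
thickness = np.array([3.5, 2.0, 1.0, 0.5, 0.25])
area = np.array([2.0e5, 5.0e5, 1.2e6, 2.5e6, 4.5e6])

sqrt_a = np.sqrt(area)
# One-segment exponential thinning: ln(T) = ln(T0) - k * sqrt(A).
slope, intercept = np.polyfit(sqrt_a, np.log(thickness), 1)
k, T0 = -slope, np.exp(intercept)

# Integrating T(A) over area for the exponential law gives V = 2 * T0 / k^2.
volume = 2.0 * T0 / k ** 2
print(f"T0 = {T0:.2f} m, k = {k:.2e} m^-1, volume = {volume:.2e} m^3")
```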
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
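As a concrete example of one combination of methodological choices, the sketch below extracts simple recession events (runs of consecutive days with declining flow) and fits the power-law model -dQ/dt = a*Q^b to each event by log-log least squares; the synthetic streamflow series, the minimum event length, and the absence of any rainfall screening are placeholder assumptions rather than the paper's recommended settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder daily streamflow series [mm/day]: noisy exponential-ish recessions.
q = np.concatenate([30 * np.exp(-0.08 * np.arange(40)) + rng.random(40) * 0.3
                    for _ in range(5)])

def recession_events(q, min_len=6):
    """Return index slices where flow declines on every day (one possible event definition)."""
    events, start = [], None
    for i in range(1, len(q)):
        if q[i] < q[i - 1]:
            start = i - 1 if start is None else start
        else:
            if start is not None and i - start >= min_len:
                events.append(slice(start, i))
            start = None
    if start is not None and len(q) - start >= min_len:
        events.append(slice(start, len(q)))
    return events

for ev in recession_events(q):
    qe = q[ev]
    dqdt = -np.diff(qe)                      # daily decline, positive during recession
    qmid = 0.5 * (qe[1:] + qe[:-1])          # flow at interval midpoints
    keep = dqdt > 0
    b, log_a = np.polyfit(np.log(qmid[keep]), np.log(dqdt[keep]), 1)
    print(f"event of {qe.size} days: a = {np.exp(log_a):.3f}, b = {b:.2f}")
```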
Wang, Lu; Xu, Lisheng; Feng, Shuting; Meng, Max Q-H; Wang, Kuanquan
2013-11-01
Analysis of pulse waveform is a low cost, non-invasive method for obtaining vital information related to the conditions of the cardiovascular system. In recent years, different Pulse Decomposition Analysis (PDA) methods have been applied to disclose the pathological mechanisms of the pulse waveform. All these methods decompose single-period pulse waveform into a constant number (such as 3, 4 or 5) of individual waves. Furthermore, those methods do not pay much attention to the estimation error of the key points in the pulse waveform. The estimation of human vascular conditions depends on the key points' positions of pulse wave. In this paper, we propose a Multi-Gaussian (MG) model to fit real pulse waveforms using an adaptive number (4 or 5 in our study) of Gaussian waves. The unknown parameters in the MG model are estimated by the Weighted Least Squares (WLS) method and the optimized weight values corresponding to different sampling points are selected by using the Multi-Criteria Decision Making (MCDM) method. Performance of the MG model and the WLS method has been evaluated by fitting 150 real pulse waveforms of five different types. The resulting Normalized Root Mean Square Error (NRMSE) was less than 2.0% and the estimation accuracy for the key points was satisfactory, demonstrating that our proposed method is effective in compressing, synthesizing and analyzing pulse waveforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
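A stripped-down version of the Multi-Gaussian idea: model one pulse period as a sum of Gaussian waves and fit amplitudes, centres, and widths by (optionally weighted) least squares. The synthetic waveform, the fixed number of three Gaussians, and the uniform weights are assumptions; the paper's adaptive wave count and MCDM-based weight selection are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 1.0, 200)               # one normalized pulse period

def multi_gaussian(t, *p):
    """Sum of Gaussians; p = (a1, c1, w1, a2, c2, w2, ...)."""
    y = np.zeros_like(t)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((t - c) / w) ** 2)
    return y

# Synthetic pulse: percussion, tidal and dicrotic waves plus noise.
rng = np.random.default_rng(0)
truth = (1.0, 0.20, 0.08, 0.45, 0.42, 0.10, 0.25, 0.65, 0.12)
signal = multi_gaussian(t, *truth) + 0.01 * rng.standard_normal(t.size)

# Weighted least squares via curve_fit's sigma argument (uniform weights here).
p0 = (0.8, 0.25, 0.1, 0.4, 0.45, 0.1, 0.2, 0.6, 0.1)
sigma = np.full(t.size, 0.01)
popt, _ = curve_fit(multi_gaussian, t, signal, p0=p0, sigma=sigma)

nrmse = np.sqrt(np.mean((multi_gaussian(t, *popt) - signal) ** 2)) / np.ptp(signal)
print(f"NRMSE of the 3-Gaussian fit: {100 * nrmse:.2f}%")
```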
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to Long-Range Atmospheric Transport because of successive transport-deposition-reemission processes. Several experimental data available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of data and the global behavior of the chemical. Reemissions showed lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
Estimation of Alpine Skier Posture Using Machine Learning Techniques
Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej
2014-01-01
High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and skis trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492
Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard
Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton
2017-01-01
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385
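The general flavour of such an approach, probit-transformed empirical operating points fed into an ordinary linear model, can be sketched under the standard binormal ROC form TPF = Phi(a + b*Phi^{-1}(FPF)); the simulated biomarker values below are placeholders, and the sketch omits the covariate and ordinal-data machinery of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, 200)          # non-diseased marker values
cases = rng.normal(1.2, 1.3, 150)             # diseased marker values

# Empirical sensitivities and specificities over a grid of thresholds.
thresholds = np.quantile(np.concatenate([controls, cases]), np.linspace(0.05, 0.95, 19))
fpf = np.array([(controls >= c).mean() for c in thresholds])
tpf = np.array([(cases >= c).mean() for c in thresholds])

# Keep interior points so the probit transform is finite.
keep = (fpf > 0) & (fpf < 1) & (tpf > 0) & (tpf < 1)
x, y = norm.ppf(fpf[keep]), norm.ppf(tpf[keep])

# Least-squares fit of the binormal line Phi^-1(TPF) = a + b * Phi^-1(FPF).
b, a = np.polyfit(x, y, 1)
auc = norm.cdf(a / np.sqrt(1.0 + b ** 2))
print(f"binormal intercept a = {a:.2f}, slope b = {b:.2f}, AUC = {auc:.3f}")
```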
Estimation of Melting Points of Organics.
Yalkowsky, Samuel H; Alantary, Doaa
2018-05-01
Unified physicochemical property estimation relationships is a system of empirical and theoretical relationships that relate 20 physicochemical properties of organic molecules to each other and to chemical structure. Melting point is a key parameter in the unified physicochemical property estimation relationships scheme because it is a determinant of several other properties, including vapor pressure and solubility. This review describes the first-principles calculation of the melting points of organic compounds from structure. The calculation is based on the fact that the melting point, Tm, is equal to the ratio of the heat of melting, ΔHm, to the entropy of melting, ΔSm. The heat of melting is shown to be an additive constitutive property. However, the entropy of melting is not entirely group additive. It is primarily dependent on molecular geometry, including parameters which reflect the degree of restriction of molecular motion in the crystal relative to that of the liquid. Symmetry, eccentricity, chirality, flexibility, and hydrogen bonding each affect molecular freedom in different ways and thus make different contributions to the total entropy of fusion. The relationships of these entropy-determining parameters to chemical structure are used to develop a reasonably accurate means of predicting the melting points of over 2000 compounds. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
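The central relation is simply Tm = ΔHm / ΔSm. A toy calculation with entirely hypothetical contribution values shows how the pieces combine; the actual group-contribution values of the scheme are not reproduced here.

```python
# Hypothetical heat of melting [J/mol] and entropy of melting [J/(mol*K)].
delta_h_m = 22000.0           # sum of assumed additive group contributions
delta_s_m_base = 56.5         # Walden-like baseline entropy of melting
symmetry_correction = -4.0    # assumed reduction for a symmetric, rigid molecule
flexibility_correction = 6.0  # assumed increase for chain flexibility

delta_s_m = delta_s_m_base + symmetry_correction + flexibility_correction
t_m = delta_h_m / delta_s_m   # melting point in kelvin
print(f"estimated Tm = {t_m:.1f} K ({t_m - 273.15:.1f} degC)")
```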
NASA Astrophysics Data System (ADS)
Roy, Kuntal
2017-11-01
There exists considerable confusion in estimating the spin diffusion length of materials with high spin-orbit coupling from spin pumping experiments. For designing functional devices, it is important to determine the spin diffusion length with sufficient accuracy from experimental results. An inaccurate estimation of spin diffusion length also affects the estimation of other parameters (e.g., spin mixing conductance, spin Hall angle) concomitantly. The spin diffusion length for platinum (Pt) has been reported in the literature in a wide range of 0.5-14 nm, in particular as a constant value independent of the Pt thickness. Here, the key reasons behind such a wide range of reported values of spin diffusion length have been identified comprehensively. In particular, it is shown here that a thickness-dependent conductivity and spin diffusion length is necessary to simultaneously match the experimental results of effective spin mixing conductance and inverse spin Hall voltage due to spin pumping. Such a thickness-dependent spin diffusion length is tantamount to the Elliott-Yafet spin relaxation mechanism, which bodes well for transition metals. This conclusion is not altered even when there is significant interfacial spin memory loss. Furthermore, the variations in the estimated parameters are also studied, which is important for technological applications.
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between seismic source tensor and medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is a factor of 2, which is intrinsic in the MCS scale. We tested this hypothesis by analyzing the uncertainty in ground motion maps due to catalogue random errors in magnitude and localization.
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
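A compact sketch of the sampling-and-reweighting idea: draw parameter combinations by Latin hypercube sampling, run a toy within-herd prevalence model for each, weight each combination by how well it reproduces an observed prevalence, and use the weighted ensemble for predictions. The transmission model, parameter ranges, and observed prevalence below are invented and far simpler than the paratuberculosis simulation described in the abstract.

```python
import numpy as np
from scipy.stats import qmc, norm

def herd_prevalence(beta, recovery, years=10, dt=0.05):
    """Toy deterministic SIS-style within-herd model; returns the final prevalence."""
    prev = 0.01
    for _ in range(int(years / dt)):
        prev += dt * (beta * prev * (1 - prev) - recovery * prev)
    return prev

# Latin hypercube sample over plausible ranges of the two uncertain parameters.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=500)
params = qmc.scale(unit, l_bounds=[0.2, 0.05], u_bounds=[1.5, 0.6])   # beta, recovery

prev_model = np.array([herd_prevalence(b, g) for b, g in params])

# Reweight each sample by its likelihood of reproducing the observed prevalence.
obs_prev, obs_sd = 0.23, 0.05                   # hypothetical survey estimate
weights = norm.pdf(prev_model, loc=obs_prev, scale=obs_sd)
weights /= weights.sum()

post_mean = weights @ params
print("reweighted mean beta and recovery rate:", np.round(post_mean, 3))
print("reweighted mean predicted prevalence:", round(float(weights @ prev_model), 3))
```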
Theoretical Advances in Sequential Data Assimilation for the Atmosphere and Oceans
NASA Astrophysics Data System (ADS)
Ghil, M.
2007-05-01
We concentrate here on two aspects of advanced Kalman-filter-related methods: (i) the stability of the forecast-assimilation cycle, and (ii) parameter estimation for the coupled ocean-atmosphere system. The nonlinear stability of a prediction-assimilation system guarantees the uniqueness of the sequentially estimated solutions in the presence of partial and inaccurate observations, distributed in space and time; this stability is shown to be a necessary condition for the convergence of the state estimates to the true evolution of the turbulent flow. The stability properties of the governing nonlinear equations and of several data assimilation systems are studied by computing the spectrum of the associated Lyapunov exponents. These ideas are applied to a simple and an intermediate model of atmospheric variability and we show that the degree of stabilization depends on the type and distribution of the observations, as well as on the data assimilation method. These results represent joint work with A. Carrassi, A. Trevisan and F. Uboldi. Much is known by now about the main physical mechanisms that give rise to and modulate the El Niño/Southern Oscillation (ENSO), but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. Model behavior is very sensitive to two key parameters: (a) "mu", the ocean-atmosphere coupling coefficient between the sea-surface temperature (SST) and wind stress anomalies; and (b) "delta-s", the surface-layer coefficient. Previous work has shown that "delta-s" determines the period of the model's self-sustained oscillation, while "mu" measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed. These results arise from joint work with D. Kondrashov and C.-j. Sun.
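The state-augmentation trick behind EKF parameter estimation can be shown on a toy scalar system: append the unknown parameter to the state vector, linearize around the current estimate, and update both with each observation. The logistic toy model, noise levels, and initial guesses below are illustrative stand-ins, not the coupled ENSO model.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.1, 300
r_true, obs_sd = 0.8, 0.02

def step(x, r):
    return x + dt * r * x * (1.0 - x)        # logistic growth; r plays the role of a coupling parameter

# Generate a synthetic truth and noisy observations of x only.
x, obs = 0.1, []
for _ in range(n_steps):
    x = step(x, r_true)
    obs.append(x + obs_sd * rng.standard_normal())

# Extended Kalman filter on the augmented state z = [x, r].
z = np.array([0.1, 0.3])                      # initial guesses (r deliberately wrong)
P = np.diag([0.01, 0.25])
Q = np.diag([1e-6, 1e-6])                     # small process noise keeps r adaptable
R = obs_sd ** 2
H = np.array([[1.0, 0.0]])

for y in obs:
    x_k, r_k = z
    # Predict: propagate the augmented state and linearize around it.
    z_pred = np.array([step(x_k, r_k), r_k])
    F = np.array([[1.0 + dt * r_k * (1.0 - 2.0 * x_k), dt * x_k * (1.0 - x_k)],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the scalar observation of x.
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S
    z = z_pred + (K * (y - z_pred[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated growth parameter r = {z[1]:.3f} (true {r_true})")
```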
Perdikaris, Paris; Karniadakis, George Em
2016-05-01
We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
Implementation of a numerical holding furnace model in foundry and construction of a reduced model
NASA Astrophysics Data System (ADS)
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2016-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor for manufacturing these parts in accordance with geometrical and structural expectations. The definition of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. In a further stage this model will be used to characterize heat exchanges using internal sensors through inverse techniques, in order to optimize the furnace control and its design. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. The detailed model allows the calculation of the internal induction heat source as well as of transient radiative transfer inside the furnace. A reduced lumped-body model has been defined to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been performed using a Levenberg-Marquardt least squares minimization algorithm in Matlab, using two synthetic temperature signals and a further validation test.
Epizootiologic Parameters for Plague in Kazakhstan
Klassovskiy, Nikolay; Ageyev, Vladimir; Suleimenov, Bakhtiar; Atshabar, Bakhyt; Bennett, Malcolm
2006-01-01
Reliable estimates are lacking of key epizootiologic parameters for plague caused by Yersinia pestis infection in its natural reservoirs. We report results of a 3-year longitudinal study of plague dynamics in populations of a maintenance host, the great gerbil (Rhombomys opimus), in 2 populations in Kazakhstan. Serologic results suggest a mid-summer peak in the abundance of infectious hosts and possible transmission from the reservoir to humans. Decrease in antibody titer to an undetectable level showed no seasonal pattern. Our findings did not support the use of the nitroblue-tetrazolium test characterization of plague-infected hosts. Y. pestis infection reduced survival of otherwise asymptomatic hosts. PMID:16494753
Dong, Chunjiao; Clarke, David B; Yan, Xuedong; Khattak, Asad; Huang, Baoshan
2014-09-01
Crash data are collected through police reports and integrated with road inventory data for further analysis. Integrated police reports and inventory data yield correlated multivariate data for roadway entities (e.g., segments or intersections). Analysis of such data reveals important relationships that can help focus on high-risk situations and identify safety countermeasures. To understand relationships between crash frequencies and associated variables, while taking full advantage of the available data, multivariate random-parameters models are appropriate since they can simultaneously consider the correlation among the specific crash types and account for unobserved heterogeneity. However, a key issue that arises with correlated multivariate data is that the number of crash-free samples increases as crash counts have many categories. In this paper, we describe a multivariate random-parameters zero-inflated negative binomial (MRZINB) regression model for jointly modeling crash counts. The full Bayesian method is employed to estimate the model parameters. Crash frequencies at urban signalized intersections in Tennessee are analyzed. The paper investigates the performance of MZINB and MRZINB regression models in establishing the relationship between crash frequencies, pavement conditions, traffic factors, and geometric design features of roadway intersections. Compared to the MZINB model, the MRZINB model identifies additional statistically significant factors and provides better goodness of fit in developing the relationships. The empirical results show that the MRZINB model possesses most of the desirable statistical properties in terms of its ability to accommodate unobserved heterogeneity and excess zero counts in correlated data. Notably, in the random-parameters MZINB model, the estimated parameters vary significantly across intersections for different crash types. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.
2011-03-01
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja
2014-01-01
Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death and a coalescent based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infecteds. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100
NASA Astrophysics Data System (ADS)
Bezminabadi, Sina Norouzi; Ramezanzadeh, Ahmad; Esmaeil Jalali, Seyed-Mohammad; Tokhmechi, Behzad; Roustaei, Abbas
2017-03-01
Rate of penetration (ROP) is one of the key indicators of drilling operation performance. Estimating ROP is very important in drilling engineering because a more accurate assessment of drilling time directly affects operation costs. Hence, developing a ROP model based on operational and environmental parameters is crucial. For this purpose, physical and mechanical properties of rock were first derived from well logs. Correlations between the paired data were computed to identify the parameters that influence ROP. A new ROP model was developed for one of the Azadegan oil field wells in southwest Iran. The model was fitted using Multiple Nonlinear Regression (MNR) and an Artificial Neural Network (ANN). Adding the rock properties markedly improved the accuracy of both models. The MNR and ANN methods yielded correlation coefficients of 0.62 and 0.87, respectively, leading to the conclusion that the ANN model predicts ROP considerably better than the MNR method.
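A minimal sketch of the kind of comparison described above, contrasting a simple regression baseline with a small neural network on synthetic data; the feature names (weight on bit, rotary speed, flow rate, rock strength) and the generating relationship are hypothetical stand-ins for the log-derived parameters in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 400
wob, rpm, flow, ucs = (rng.uniform(0.1, 1.0, n) for _ in range(4))   # hypothetical drilling / rock-strength features
rop = 10 * wob * rpm + 5 * np.sqrt(flow) - 8 * ucs**2 + rng.normal(0, 0.5, n)
X = np.column_stack([wob, rpm, flow, ucs])

X_tr, X_te, y_tr, y_te = train_test_split(X, rop, random_state=0)
linear = LinearRegression().fit(X_tr, y_tr)                          # simple regression baseline
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)

print("baseline R^2:", round(r2_score(y_te, linear.predict(X_te)), 3))
print("ANN R^2     :", round(r2_score(y_te, ann.predict(X_te)), 3))
```

With interacting, nonlinear inputs such as these, the network typically recovers more of the variance than the linear baseline, mirroring the ANN-versus-MNR result reported above.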
Worldwide Historical Estimates of Leaf Area Index, 1932-2000
NASA Technical Reports Server (NTRS)
Scurlock, J. M. O.; Asner, G. P.; Gower, S. T.
2001-01-01
Approximately 1000 published estimates of leaf area index (LAI) from nearly 400 unique field sites, covering the period 1932-2000, have been compiled into a single data set. LAI is a key parameter for global and regional models of biosphere/atmosphere exchange of carbon dioxide, water vapor, and other materials. It also plays an integral role in determining the energy balance of the land surface. This data set provides a benchmark of typical values and ranges of LAI for a variety of biomes and land cover types, in support of model development and validation of satellite-derived remote sensing estimates of LAI and other vegetation parameters. The LAI data are linked to a bibliography of over 300 original source references. This report documents the development of this data set, its contents, and its availability on the Internet from the Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics. Caution is advised in using these data, which were collected using a wide range of methodologies and assumptions that may not allow comparisons among sites.
Shared environmental influences on personality: A combined twin and adoption approach
Matteson, Lindsay K.; McGue, Matt; Iacono, William G.
2013-01-01
In the past, shared environmental influences on personality traits have been found to be negligible in behavior genetic studies (e.g., Bouchard & McGue, 2003). However, most studies have been based on biometrical modeling of twins only. Failure to meet key assumptions of the classical twin design could lead to biased estimates of shared environmental effects. Alternative approaches to the etiology of personality are needed. In the current study we estimated the impact of shared environmental factors on adolescent personality by simultaneously modeling both twin and adoption data. We found evidence for significant shared environmental influences on Multidimensional Personality Questionnaire (MPQ) Absorption (15% variance explained), Alienation (10%), Harm Avoidance (14%), and Traditionalism (26%) scales. Additionally, we found that in most cases biometrical models constraining parameter estimates to be equal across study type (twins versus adoptees) fit no worse than models allowing these parameters to vary; this suggests that results converge across study design despite the potential (sometimes opposite) biases of twin and adoption studies. Thus, we can be more confident that our findings represent the true contribution of shared environmental variance to personality development. PMID:24065564
Evaporation estimates from the Dead Sea and their implications on its water balance
NASA Astrophysics Data System (ADS)
Oroud, Ibrahim M.
2011-12-01
The Dead Sea (DS) is a terminal hypersaline water body situated in the deepest part of the Jordan Valley. There is a growing interest in linking the DS to the open seas due to severe water shortages in the area and the serious geological and environmental hazards to its vicinity caused by the rapid level drop of the DS. A key issue in linking the DS with the open seas would be an accurate determination of evaporation rates. There exist large uncertainties of evaporation estimates from the DS due to the complex feedback mechanisms between meteorological forcings and thermophysical properties of hypersaline solutions. Numerous methods have been used to estimate current and historical (pre-1960) evaporation rates, with estimates differing by ˜100%. Evaporation from the DS is usually deduced indirectly using energy, water balance, or pan methods with uncertainty in many parameters. Accumulated errors resulting from these uncertainties are usually pooled into the estimates of evaporation rates. In this paper, a physically based method with minimum empirical parameters is used to evaluate historical and current evaporation estimates from the DS. The more likely figures for historical and current evaporation rates from the DS were 1,500-1,600 and 1,200-1,250 mm per annum, respectively. Results obtained are congruent with field observations and with more elaborate procedures.
Baghdadi, Nicolas; Aubert, Maelle; Cerdan, Olivier; Franchistéguy, Laurent; Viel, Christian; Martin, Eric; Zribi, Mehrez; Desprats, Jean François
2007-01-01
Soil moisture is a key parameter in different environmental applications, such as hydrology and natural risk assessment. In this paper, surface soil moisture mapping was carried out over a basin in France using satellite synthetic aperture radar (SAR) images acquired in 2006 and 2007 by C-band (5.3 GHz) sensors. The comparison between soil moisture estimated from SAR data and in situ measurements shows good agreement, with a mapping accuracy better than 3%. This result shows that monitoring soil moisture from SAR images is feasible in an operational setting. Moreover, soil moisture simulated by the operational Météo-France ISBA soil-vegetation-atmosphere transfer model in the SIM-Safran-ISBA-Modcou chain was compared to the radar moisture estimates to validate its pertinence. The difference between ISBA simulations and radar estimates fluctuates between 0.4 and 10% (RMSE). The comparison between ISBA and gravimetric measurements of 12 March 2007 shows an RMSE of about 6%. Overall, these results are very encouraging. The results also show that the soil moisture estimated from SAR images is not correlated with the textural units defined in the European Soil Geographical Database (SGDBE) at 1:1,000,000 scale. However, a dependence was observed between the texture maps and ISBA moisture; this dependence is induced by the use of the texture map as an input parameter in the ISBA model. Although this parameter is very important for soil moisture estimation, the radar results showed that the 1:1,000,000 texture map scale is not appropriate for differentiating moisture zones. PMID:28903238
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g., nutrient intake, cigarette smoking exposure, and long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies in which measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian parameter inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as its simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones favored by information criteria that incorporate only the likelihood maximum. Since the evidence is not particularly easy to estimate in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
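A toy sketch of the underlying idea (evidence estimation by importance sampling from a Gaussian mixture fitted to posterior samples) is given below. A one-dimensional conjugate Gaussian model is used so that the marginal likelihood is available in closed form for checking; the posterior draws, mixture settings and prior are placeholders, and the actual GMIS estimator uses bridge sampling on DREAM output rather than this plain importance-sampling estimate.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
y = rng.normal(0.5, 1.0, size=20)              # observed data, known noise sd = 1
prior_sd = 2.0                                  # N(0, prior_sd^2) prior on the mean

def log_lik(theta):                             # log p(y | theta) for an array of theta values
    return stats.norm(theta[:, None], 1.0).logpdf(y).sum(axis=1)

# Posterior samples (exact draws here because the model is conjugate;
# in practice they would come from an MCMC sampler such as DREAM).
post_var = 1.0 / (1.0 / prior_sd**2 + y.size / 1.0**2)
theta_post = rng.normal(post_var * y.sum(), np.sqrt(post_var), size=4000)

# Fit a Gaussian mixture to the posterior draws and use it as the importance proposal.
gm = GaussianMixture(n_components=2, random_state=0).fit(theta_post.reshape(-1, 1))
theta_q = gm.sample(20000)[0].ravel()
log_w = (log_lik(theta_q) + stats.norm(0, prior_sd).logpdf(theta_q)
         - gm.score_samples(theta_q.reshape(-1, 1)))
log_evidence = logsumexp(log_w) - np.log(log_w.size)

# Analytical check for this conjugate toy model: y ~ N(0, I + prior_sd^2 * 11')
cov = np.eye(y.size) + prior_sd**2 * np.ones((y.size, y.size))
print(log_evidence, stats.multivariate_normal(np.zeros(y.size), cov).logpdf(y))
```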
Effects of tag loss on direct estimates of population growth rate
Rotella, J.J.; Hines, J.E.
2005-01-01
The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.; Wolochow, H.
1973-01-01
One of the key parameters in estimation of the probability of contamination of the outer planets (Jupiter, Saturn, Uranus, etc.) is the probability of growth (Pg) of terrestrial microorganisms on or near these planets. For example, Jupiter appears to have an atmosphere in which some microbial species could metabolize and propagate. This study includes investigation of the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing experimental information needed to aid in rational estimation of Pg for these outer planets. Current work is directed at demonstration of aerial metabolism under near optimal conditions and tests of propagation in simulated Jovian atmospheres.
Studies on possible propagation of microbial contamination in planetary clouds
NASA Technical Reports Server (NTRS)
Dimmick, R. L.; Chatigny, M. A.
1973-01-01
Current U.S. planetary quarantine standards based on international agreements require consideration of the probability of contamination (Pc) of the outer planets, Venus, Jupiter, Saturn, etc. One of the key parameters in estimation of the Pc of these planets is the probability of growth (Pg) of terrestrial microorganisms on or near these planets. For example, Jupiter and Saturn appear to have an atmosphere in which some microbial species could metabolize and propagate. This study includes investigation of the likelihood of metabolism and propagation of microbes suspended in dynamic atmospheres. It is directed toward providing experimental information needed to aid in rational estimation of Pg for these outer planets.
Retrieval of effective cloud field parameters from radiometric data
NASA Astrophysics Data System (ADS)
Paulescu, Marius; Badescu, Viorel; Brabec, Marek
2017-06-01
Clouds play a key role in establishing the Earth's climate. Real cloud fields are very different and very complex in both morphological and microphysical senses. Consequently, the numerical description of the cloud field is a critical task for accurate climate modeling. This study explores the feasibility of retrieving the effective cloud field parameters (namely the cloud aspect ratio and cloud factor) from systematic radiometric measurements at high frequency (measurement is taken every 15 s). Two different procedures are proposed, evaluated, and discussed with respect to both physical and numerical restrictions. None of the procedures is classified as best; therefore, the specific advantages and weaknesses are discussed. It is shown that the relationship between the cloud shade and point cloudiness computed using the estimated cloud field parameters recovers the typical relationship derived from measurements.
Liang, Yuzhen; Kuo, Dave T F; Allen, Herbert E; Di Toro, Dominic M
2016-10-01
There is concern about the environmental fate and effects of munition constituents (MCs). Polyparameter linear free energy relationships (pp-LFERs) that employ Abraham solute parameters can aid in evaluating the risk of MCs to the environment. However, poor predictions using pp-LFERs and ABSOLV estimated Abraham solute parameters are found for some key physico-chemical properties. In this work, the Abraham solute parameters are determined using experimental partition coefficients in various solvent-water systems. The compounds investigated include hexahydro-1,3,5-trinitro-1,3,5-triazacyclohexane (RDX), octahydro-1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX), hexahydro-1-nitroso-3,5-dinitro-1,3,5-triazine (MNX), hexahydro-1,3,5-trinitroso-1,3,5-triazine (TNX), hexahydro-1,3-dinitroso-5-nitro-1,3,5-triazine (DNX), 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitrobenzene (TNB), and 4-nitroanisole. The solvents in the solvent-water systems are hexane, dichloromethane, trichloromethane, octanol, and toluene. The only available reported solvent-water partition coefficients are for octanol-water for some of the investigated compounds and they are in good agreement with the experimental measurements from this study. Solvent-water partition coefficients fitted using experimentally derived solute parameters from this study have significantly smaller root mean square errors (RMSE = 0.38) than predictions using ABSOLV estimated solute parameters (RMSE = 3.56) for the investigated compounds. Additionally, the predictions for various physico-chemical properties using the experimentally derived solute parameters agree with available literature reported values with prediction errors within 0.79 log units except for water solubility of RDX and HMX with errors of 1.48 and 2.16 log units respectively. However, predictions using ABSOLV estimated solute parameters have larger prediction errors of up to 7.68 log units. This large discrepancy is probably due to the missing R2NNO2 and R2NNO2 functional groups in the ABSOLV fragment database. Copyright © 2016. Published by Elsevier Ltd.
Threat evaluation for impact assessment in situation analysis systems
NASA Astrophysics Data System (ADS)
Roy, Jean; Paradis, Stephane; Allouche, Mohamad
2002-07-01
Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness, for the decision maker. Data fusion is a key enabler to meeting the demanding requirements of military situation analysis support systems. According to the data fusion model maintained by the Joint Directors of Laboratories' Data Fusion Group, impact assessment estimates the effects on situations of planned or estimated/predicted actions by the participants, including interactions between action plans of multiple players. In this framework, the appraisal of actual or potential threats is a necessary capability for impact assessment. This paper reviews and discusses in detail the fundamental concepts of threat analysis. In particular, threat analysis generally attempts to compute, for each individual track, a threat value that estimates the severity with which engagement events will potentially occur. Presenting relevant tracks to the decision maker in a threat list, sorted from most threatening to least, is clearly in line with the cognitive demands associated with threat evaluation. A key parameter in many threat value evaluation techniques is the Closest Point of Approach (CPA). Along this line of thought, threatening tracks are often prioritized based on which ones will reach their CPA first. Hence, the Time-to-CPA (TCPA), i.e., the time it will take for a track to reach its CPA, is also a key factor. Unfortunately, a typical assumption for the computation of the CPA/TCPA parameters is that the track velocity will remain constant. When a track is maneuvering, the CPA/TCPA values will change accordingly. These changes will in turn impact the threat value computations and, ultimately, the resulting threat list. This is clearly undesirable from a command decision-making perspective. In this regard, the paper briefly discusses threat value stabilization approaches based on neural networks and other mathematical techniques.
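The constant-velocity CPA/TCPA computation referred to above reduces to a short vector calculation; a minimal sketch for a single track relative to the ownship follows, with illustrative positions and velocities.

```python
import numpy as np

def cpa_tcpa(rel_pos, rel_vel):
    """Closest point of approach for a track moving at constant relative velocity.

    rel_pos: track position minus ownship position (m)
    rel_vel: track velocity minus ownship velocity (m/s)
    Returns (cpa_distance, time_to_cpa); TCPA is clipped at 0 if the track is already receding.
    """
    rel_pos, rel_vel = np.asarray(rel_pos, float), np.asarray(rel_vel, float)
    speed2 = rel_vel @ rel_vel
    tcpa = 0.0 if speed2 == 0 else max(0.0, -(rel_pos @ rel_vel) / speed2)
    cpa = np.linalg.norm(rel_pos + tcpa * rel_vel)
    return cpa, tcpa

# Example: an inbound track roughly 10 km to the north-east, closing at ~200 m/s.
print(cpa_tcpa([7000.0, 7000.0], [-150.0, -130.0]))
```

Tracks can then be sorted by TCPA, or by a threat value combining CPA and TCPA, to build the threat list; when a track maneuvers, these values must be recomputed, which is the instability the paper's stabilization approaches address.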
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A; Cole, Wesley J; Sun, Yinong
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high-load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailments enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailments by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with integration of variable generation resources.
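A compact sketch of this all-hours, chronological bookkeeping: capacity value is approximated as the average VG contribution during the highest net-load hours, and curtailment as VG output in excess of load net of a must-run level. The 8760-hour profiles and the must-run level are synthetic placeholders, not ReEDS inputs.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = np.arange(8760)
load = 30 + 10 * np.sin(hours * 2 * np.pi / 24) + rng.normal(0, 2, hours.size)   # GW, synthetic
vg = np.clip(25 * np.sin(hours * 2 * np.pi / 24 - 1.5), 0, None)                 # GW, synthetic solar-like profile
must_run = 10.0                                                                   # GW of inflexible generation (assumed)

net_load = load - vg
top_hours = np.argsort(net_load)[-100:]                 # 100 highest net-load hours
capacity_value = vg[top_hours].mean() / vg.max()        # fraction of nameplate credited toward capacity

curtailed = np.maximum(vg - np.maximum(load - must_run, 0.0), 0.0)
curtailment_rate = curtailed.sum() / vg.sum()

print(f"capacity value ~ {capacity_value:.2f}, curtailment ~ {curtailment_rate:.1%}")
```

Storage or flexible thermal operation would enter this bookkeeping by lowering the effective must-run level or shifting curtailed energy to high net-load hours.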
An analysis of forest land use, forest land cover, and change at policy-relevant scales
John W. Coulston; Greg Reams; Dave N. Wear; C. Kenneth Brewer
2014-01-01
Quantifying the amount of forest and change in the amount of forest are key to ensuring that appropriate management practices and policies are in place to maintain the array of ecosystem services provided by forests. There is a range of analytical techniques and data available to estimate these forest parameters; however, not all "forest" is the same and various...
NASA Technical Reports Server (NTRS)
Parsons, C. L. (Editor)
1989-01-01
The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.
2013-01-01
The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies attributed to ice crystal ingestion, partial melting, and ice accretion on compression system components have taken place at high altitudes. The result was one or more of the following anomalies: degraded engine performance, engine roll back, compressor surge and stall, and flameout of the combustor. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool has an engine system thermodynamic cycle code, coupled with a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Assumptions are made to predict the complex physics involved in engine icing. Specifically, the code does not directly estimate ice accretion and does not have models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature must be near or below freezing, and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine, including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and time spent at that operating point in the flight trajectory. The computational tool can be used to assess the susceptibility of specific turbine engines to ice accretion in an ice crystal environment.
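A minimal sketch of applying the two accretion criteria quoted above to per-blade-row quantities. The example temperatures, melt ratios, and the 2 degC "near freezing" margin are assumptions for illustration; in the tool itself these quantities come from the coupled cycle, compressor flow, and particle-melt codes.

```python
def ice_accretion_risk(wet_bulb_temp_c, melt_ratio, freeze_margin_c=2.0):
    """Flag a blade row as at risk when the local wet-bulb temperature is near or
    below freezing and the local melt ratio exceeds 10% (criteria from icing tests).
    freeze_margin_c is an assumed tolerance for "near freezing"."""
    return wet_bulb_temp_c <= freeze_margin_c and melt_ratio > 0.10

# Hypothetical conditions at successive blade rows for one operating point.
rows = {"IGV": (12.0, 0.02), "R1": (5.0, 0.06), "S2": (1.5, 0.14), "R3": (-0.5, 0.22)}
for name, (t_wb, melt) in rows.items():
    print(name, "risk" if ice_accretion_risk(t_wb, melt) else "ok")
```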
NASA Astrophysics Data System (ADS)
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes, including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed, and it is expected that the open source nature of the project will engender the development of additional model drivers by third-party scientists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This two-part paper considers the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. The companion paper (Part I) formulates the problem and proposes a load coordination framework using the mechanism design approach. To address the unknown parameters, Part II of this paper presents a joint state and parameter estimation framework based on the expectation maximization algorithm. The overall framework is then validated using real-world weather data and price data, and is compared with other approaches in terms of aggregated power response. Simulation results indicate that our coordination framework can effectively improve the efficiency of the power grid operations and reduce power congestion at key times.
NASA Astrophysics Data System (ADS)
Rüegg, Andreas; Pilgram, Sebastian; Sigrist, Manfred
2008-06-01
We investigate the low-temperature electrical and thermal transport properties in atomically precise metallic heterostructures involving strongly correlated electron systems. The model of the Mott-insulator/band-insulator superlattice was discussed in the framework of the slave-boson mean-field approximation and transport quantities were derived by use of the Boltzmann transport equation in the relaxation-time approximation. The results for the optical conductivity are in good agreement with recently published experimental data on (LaTiO3)N/(SrTiO3)M superlattices and allow us to estimate the values of key parameters of the model. Furthermore, predictions for the thermoelectric response were made and the dependence of the Seebeck coefficient on model parameters was studied in detail. The width of the Mott-insulating material was identified as the most relevant parameter, in particular, this parameter provides a way to optimize the thermoelectric power factor at low temperatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Edward K.; Cornish, Neil J.
Massive black hole binaries are key targets for the space based gravitational wave Laser Interferometer Space Antenna (LISA). Several studies have investigated how LISA observations could be used to constrain the parameters of these systems. Until recently, most of these studies have ignored the higher harmonic corrections to the waveforms. Here we analyze the effects of the higher harmonics in more detail by performing extensive Monte Carlo simulations. We pay particular attention to how the higher harmonics impact parameter correlations, and show that the additional harmonics help mitigate the impact of having two laser links fail, by allowing for an instantaneous measurement of the gravitational wave polarization with a single interferometer channel. By looking at parameter correlations we are able to explain why certain mass ratios provide dramatic improvements in certain parameter estimations, and illustrate how the improved polarization measurement improves the prospects for single interferometer operation.
MeProRisk - a Joint Venture for Minimizing Risk in Geothermal Reservoir Development
NASA Astrophysics Data System (ADS)
Clauser, C.; Marquart, G.
2009-12-01
Exploration and development of geothermal reservoirs for the generation of electric energy involves high engineering and economic risks due to the need for 3-D geophysical surface surveys and deep boreholes. The MeProRisk project provides a strategy guideline for reducing these risks by combining cross-disciplinary information from different specialists: scientists from three German universities and two private companies contribute new methods in seismic modeling and interpretation, numerical reservoir simulation, estimation of petrophysical parameters, and 3-D visualization. The approach chosen in MeProRisk consists of treating the prospecting and development of geothermal reservoirs as an iterative process. A first conceptual model for fluid flow and heat transport simulation can be developed based on limited available initial information on geology and rock properties. In the next step, additional data is incorporated which is based on (a) new seismic interpretation methods designed for delineating fracture systems, (b) statistical studies on large numbers of rock samples for estimating reliable rock parameters, (c) in situ estimates of the hydraulic conductivity tensor. This results in a continuous refinement of the reservoir model, where inverse modelling of fluid flow and heat transport allows inferring the uncertainty and resolution of the model at each iteration step. This finally yields a calibrated reservoir model which may be used to direct further exploration by optimizing additional borehole locations, estimate the uncertainty of key operational and economic parameters, and optimize the long-term operation of a geothermal reservoir.
FRAGS: estimation of coding sequence substitution rates from fragmentary data
Swart, Estienne C; Hide, Winston A; Seoighe, Cathal
2004-01-01
Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper-bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data is available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics our system enables the user to manage and query alignment and substitution data. PMID:15005802
Use of historical control data for assessing treatment effects in clinical trials.
Viele, Kert; Berry, Scott; Neuenschwander, Beat; Amzal, Billy; Chen, Fang; Enas, Nathan; Hobbs, Brian; Ibrahim, Joseph G; Kinnersley, Nelson; Lindborg, Stacy; Micallef, Sandrine; Roychoudhury, Satrajit; Thompson, Laura
2014-01-01
Clinical trials rarely, if ever, occur in a vacuum. Generally, large amounts of clinical data are available prior to the start of a study, particularly on the current study's control arm. There is obvious appeal in using (i.e., 'borrowing') this information. With historical data providing information on the control arm, more trial resources can be devoted to the novel treatment while retaining accurate estimates of the current control arm parameters. This can result in more accurate point estimates, increased power, and reduced type I error in clinical trials, provided the historical information is sufficiently similar to the current control data. If this assumption of similarity is not satisfied, however, one can acquire increased mean square error of point estimates due to bias and either reduced power or increased type I error depending on the direction of the bias. In this manuscript, we review several methods for historical borrowing, illustrating how key parameters in each method affect borrowing behavior, and then, we compare these methods on the basis of mean square error, power and type I error. We emphasize two main themes. First, we discuss the idea of 'dynamic' (versus 'static') borrowing. Second, we emphasize the decision process involved in determining whether or not to include historical borrowing in terms of the perceived likelihood that the current control arm is sufficiently similar to the historical data. Our goal is to provide a clear review of the key issues involved in historical borrowing and provide a comparison of several methods useful for practitioners. Copyright © 2013 John Wiley & Sons, Ltd.
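One of the simplest static-borrowing devices is a power prior, which downweights the historical control data by a fixed factor a0; dynamic methods described in the review instead let the effective weight depend on how well the historical and current controls agree. The sketch below shows the power-prior posterior for a binomial control arm; all counts and the value of a0 are hypothetical.

```python
from scipy import stats

# Hypothetical control-arm data: historical and current responders / totals.
y_hist, n_hist = 40, 100
y_curr, n_curr = 18, 50
a0 = 0.5                       # fixed borrowing weight (0 = ignore history, 1 = pool fully)

# Beta(1, 1) baseline prior, historical likelihood raised to the power a0,
# then updated with the current control data.
alpha = 1 + a0 * y_hist + y_curr
beta = 1 + a0 * (n_hist - y_hist) + (n_curr - y_curr)
posterior = stats.beta(alpha, beta)

print("posterior mean response rate:", round(posterior.mean(), 3))
print("95% credible interval:", [round(q, 3) for q in posterior.interval(0.95)])
```

The bias-variance trade-off discussed above is visible directly: larger a0 narrows the interval but pulls the estimate toward the historical rate, which is only desirable if the two control populations are truly similar.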
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
Beretta, Edoardo; Capasso, Vincenzo; Garao, Dario G
2018-06-01
In this paper a conceptual mathematical model of malaria transmission proposed in a previous paper has been analyzed in deeper detail. Among the key epidemiological features included in this model are two age classes (child and adult) and asymptomatic carriers. The extra mortality of mosquitoes due to the use of long-lasting treated mosquito nets (LLINs) and Indoor Residual Spraying (IRS) has been included too. By taking advantage of the natural double time scale of the parasite and the human populations, it has been possible to provide interesting threshold results. In particular it has been shown that key parameters can be identified such that below a threshold level, built on these parameters, the epidemic tends to extinction, while above another threshold level it tends to a nontrivial endemic state, for which an interval estimate has been provided. Numerical simulations confirm the analytical results. Copyright © 2018 Elsevier Inc. All rights reserved.
Models for the Economics of Resilience
Gilbert, Stanley; Ayyub, Bilal M.
2016-01-01
Estimating the economic burden of disasters requires appropriate models that account for key characteristics and decision making needs. Natural disasters in 2011 resulted in $366 billion in direct damages and 29,782 fatalities worldwide. Average annual losses in the US amount to about $55 billion. Enhancing community and system resilience could lead to significant savings through risk reduction and expeditious recovery. The management of such reduction and recovery is facilitated by an appropriate definition of resilience and associated metrics with models for examining the economics of resilience. This paper provides such microeconomic models, compares them, examines their sensitivities to key parameters, and illustrates their uses. Such models enable improving the resiliency of systems to meet target levels. PMID:28133626
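As a loosely related, back-of-the-envelope illustration of the kind of trade-off such microeconomic models formalize, the sketch below compares the discounted expected disaster cost with and without a resilience investment; the event probability, loss figures, recovery times, and discount rate are all hypothetical and do not reproduce the paper's models.

```python
def present_value_of_losses(annual_event_prob, direct_loss, daily_disruption_loss,
                            recovery_days, horizon_years=30, discount_rate=0.03):
    """Expected discounted disaster cost: direct damage plus losses accrued during recovery."""
    expected_annual = annual_event_prob * (direct_loss + daily_disruption_loss * recovery_days)
    return sum(expected_annual / (1 + discount_rate) ** t for t in range(1, horizon_years + 1))

baseline = present_value_of_losses(0.02, 500e6, 2e6, 120)      # hypothetical community, no hardening
hardened = present_value_of_losses(0.02, 300e6, 2e6, 45)       # after a resilience investment
investment = 150e6
print("benefit-cost ratio:", round((baseline - hardened) / investment, 2))
```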
Space shuttle propulsion estimation development verification
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The application of extended Kalman filtering to estimating Space Shuttle propulsion performance, i.e., specific impulse, from flight data in a post-flight processing computer program is detailed. The flight data used include inertial platform acceleration, SRB head pressure, SSME chamber pressure and flow rates, and ground-based radar tracking data. The key feature of this application is the model used for the SRBs, which is a nominal or reference quasi-static internal ballistics model normalized to the propellant burn depth. Dynamic states of mass overboard and propellant burn depth are included in the filter model to account for real-time deviations from the reference model. Aerodynamic, plume, wind and main engine uncertainties are also included for an integrated system model. Assuming uncertainty within the propulsion system model and attempting to estimate its deviations represents a new application of parameter estimation for rocket-powered vehicles. Illustrations from the results of applying this estimation approach to several missions show good-quality propulsion estimates.
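A generic one-step extended Kalman filter predict/update cycle, in the spirit of the post-flight estimator described above, is sketched below. The scalar state (a thrust-scale correction), the measurement map, and the noise levels are illustrative assumptions, not the Shuttle propulsion model.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended-Kalman-filter cycle: nonlinear predict, then measurement update."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative scalar example: estimate a slowly varying thrust-scale factor from noisy acceleration.
f = lambda x: x                                  # random-walk state model
F_jac = lambda x: np.eye(1)
h = lambda x: 9.81 * x                           # hypothetical measurement map (accel per unit scale)
H_jac = lambda x: np.array([[9.81]])
x, P = np.array([1.0]), np.eye(1) * 0.1
x, P = ekf_step(x, P, np.array([10.1]), f, F_jac, h, H_jac, Q=np.eye(1) * 1e-4, R=np.eye(1) * 0.05)
print(x, P)
```

In a full propulsion application the state vector would also carry quantities such as propellant burn depth and mass overboard, with the reference internal ballistics model supplying f and its Jacobian.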
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Comparing the brain networks of clinical populations with those of controls is therefore a key inquiry for revealing the underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development, as highlighted by Varoquaux and Craddock (2013): "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice, the important issue of how to choose several critical parameters in estimating a network, such as which association measure to use and how sparse the estimated network should be, has not been carefully addressed, largely because the answers are not yet known. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as shown here an optimal choice of a parameter for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values, reflecting uncertainty about the unknown truth. These highly adaptive tests are not only easy to use, but also robustly high-powered across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and simulated data.
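A small sketch of the core idea of combining evidence across tuning-parameter choices rather than committing to one: a two-sample statistic on connectivity matrices is computed at several sparsity thresholds, the minimum p-value across thresholds is taken as the adaptive statistic, and its null distribution is obtained by permuting group labels. The synthetic data, the simple edge-density statistic, and the thresholds are placeholders, not the authors' specific test.

```python
import numpy as np

rng = np.random.default_rng(5)
n_per_group, n_nodes = 20, 10
# Synthetic "connectivity" matrices: the second group has slightly stronger edges.
grp1 = rng.normal(0.20, 0.1, (n_per_group, n_nodes, n_nodes))
grp2 = rng.normal(0.25, 0.1, (n_per_group, n_nodes, n_nodes))
data = np.concatenate([grp1, grp2])
labels = np.array([0] * n_per_group + [1] * n_per_group)
thresholds = [0.1, 0.2, 0.3]                  # candidate sparsity levels (the tuning parameter)

def stat_at(data, labels, thr):
    """Group difference in mean edge density after thresholding each matrix at thr."""
    density = (data > thr).mean(axis=(1, 2))
    return abs(density[labels == 0].mean() - density[labels == 1].mean())

def adaptive_min_p(data, labels, n_perm=500):
    obs = np.array([stat_at(data, labels, t) for t in thresholds])
    perm = np.empty((n_perm, len(thresholds)))
    for b in range(n_perm):
        lab_b = rng.permutation(labels)       # same permutation reused across thresholds
        perm[b] = [stat_at(data, lab_b, t) for t in thresholds]
    p_obs = (perm >= obs).mean(axis=0)        # per-threshold permutation p-values
    ranks = perm.argsort(axis=0).argsort(axis=0)
    p_perm = 1.0 - ranks / n_perm             # per-threshold p-values of each permuted data set
    return (p_perm.min(axis=1) <= p_obs.min()).mean()

print("adaptive (minP) p-value:", adaptive_min_p(data, labels))
```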
NASA Astrophysics Data System (ADS)
Qu, W.; Bogena, H. R.; Huisman, J. A.; Martinez, G.; Pachepsky, Y. A.; Vereecken, H.
2013-12-01
Soil water content is a key variable in the soil, vegetation and atmosphere continuum with high spatial and temporal variability. Temporal stability of soil water content (SWC) has been observed in multiple monitoring studies, and the quantification of controls on soil moisture variability and temporal stability presents substantial interest. The objective of this work was to assess the effect of soil hydraulic parameters on the temporal stability. Inverse modeling based on long time series of SWC observed with an in-situ sensor network was used to estimate the van Genuchten-Mualem (VGM) soil hydraulic parameters in a small grassland catchment located in western Germany. For the inverse modeling, the shuffled complex evolution (SCE) optimization algorithm was coupled with the HYDRUS 1D code. We considered two cases: without and with prior information about the correlation between VGM parameters. The temporal stability of observed SWC was well pronounced at all observation depths. Both the spatial variability of SWC and the robustness of temporal stability increased with depth. Calibrated models both with and without prior information provided reasonable correspondence between simulated and measured time series of SWC. Furthermore, we found a linear relationship between the mean relative difference (MRD) of SWC and the saturated SWC (θs). Also, the logarithm of saturated hydraulic conductivity (Ks), the VGM parameter n and the logarithm of α were strongly correlated with the MRD of saturation degree for the prior-information case, but no correlation was found for the non-prior-information case except at the 50-cm depth. Based on these results we propose that establishing relationships between temporal stability and spatial variability of soil properties presents a promising research avenue for a better understanding of the controls on soil moisture variability.
Figure caption: Correlation between the mean relative difference of soil water content (or saturation degree) and inversely estimated soil hydraulic parameters (log10(Ks), log10(α), n, and θs) at 5-cm, 20-cm and 50-cm depths; solid circles represent parameters estimated using prior information, open circles parameters estimated without prior information.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criterions indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi objective optimization in the parameter estimation process which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3 where an interactive visual and metric based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically less than 10) of meaningful and diverse set of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT model, (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
NASA Astrophysics Data System (ADS)
Zhou, H.; Liu, W.; Ning, T.
2017-12-01
Land surface actual evapotranspiration plays a key role in the global water and energy cycles. Accurate estimation of evapotranspiration is crucial for understanding the interactions between the land surface and the atmosphere, as well as for managing water resources. The nonlinear advection-aridity approach was formulated by Brutsaert in 2015 to estimate actual evapotranspiration. Subsequently, this approach has been verified, applied and developed by many scholars. The estimation of the parameter alpha (αe) of this approach, its impact factors, and its correlations have become important aspects of this research. According to the principle of this approach, the potential evapotranspiration (ETpo) (taking αe as 1) and the apparent potential evapotranspiration (ETpm) were calculated using meteorological data from 123 sites on the Loess Plateau and its surrounding areas. The mean spatial values of precipitation (P), ETpm and ETpo for 13 catchments were then obtained by a co-kriging interpolation algorithm. Based on the runoff data of the 13 catchments, actual evapotranspiration was calculated using the catchment water balance equation at the hydrological-year scale (May to April of the following year), ignoring changes in catchment water storage. Thus, the parameter was estimated, and its relationships with P, ETpm and the aridity index (ETpm/P) were further analyzed. The results showed that the annual parameter value generally ranged from 0.385 to 1.085, with an average of 0.751 and a standard deviation of 0.113. The mean annual αe showed distinct spatial patterns, with lower values in the north and higher values in the south. At the annual scale, the parameter was linearly related to annual P (R2 = 0.89) and ETpm (R2 = 0.49), while it exhibited a power-function relationship with the aridity index (R2 = 0.83). Considering that ETpm is a variable within the nonlinear advection-aridity approach, in which its effect has already been incorporated, a relationship between precipitation and the parameter (αe = 1.0×10-3·P + 0.301) was developed. The value of αe in this study is lower than those reported in the published literature; the reason is unclear at this point and needs further investigation. The preliminary application of the nonlinear advection-aridity approach on the Loess Plateau has shown promising results.
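The water-balance and regression relations referred to above can be written out directly; the sketch below assumes annual totals in millimetres, and the input numbers are illustrative rather than catchment data.

```python
def actual_et(p_mm, q_mm):
    """Hydrological-year actual evapotranspiration from the water balance,
    ETa = P - Q, neglecting changes in catchment water storage (mm)."""
    return p_mm - q_mm

def alpha_e_from_p(p_mm):
    """Reported linear relation alpha_e = 1.0e-3 * P + 0.301 (P in mm)."""
    return 1.0e-3 * p_mm + 0.301

print(actual_et(550.0, 60.0))     # 490.0 mm of ETa for an illustrative year
print(alpha_e_from_p(550.0))      # alpha_e of about 0.85 for P = 550 mm
```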
NASA Astrophysics Data System (ADS)
Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios
2016-06-01
Based on multiscale agent-based computations, we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network, as well as of the per-contact transmission probabilities by age group, we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data on key epidemiological variables, such as the mean times to death and recovery and the case fatality rate.
Improving the representation of Arctic photosynthesis in Earth System Models
NASA Astrophysics Data System (ADS)
Rogers, A.; Serbin, S.; Sloan, V. L.; Norby, R. J.; Wullschleger, S. D.
2014-12-01
The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. To do this, models must accurately represent the terrestrial carbon cycle. Although Arctic carbon fluxes are small relative to global carbon fluxes, their uncertainty is large. Photosynthetic CO2 uptake is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis, and most ESMs use a derivation of the FvCB model to calculate gross primary productivity. Two key parameters required by the FvCB model are the maximum rate of carboxylation by the enzyme Rubisco (Vc,max) and the maximum rate of electron transport (Jmax). In ESMs the parameter Vc,max is typically fixed for a given plant functional type (PFT). Only four ESMs currently have an explicit Arctic PFT, and the data used to derive Vc,max in these models rely on small data sets and unjustified assumptions. We examined the derivation of Vc,max and Jmax in current Arctic PFTs and estimated Vc,max and Jmax for a range of Arctic PFTs growing on the Barrow Environmental Observatory, Barrow, AK. We found that the values of Vc,max currently used to represent Arctic plants in ESMs are 70% lower than the values we measured, and contemporary temperature response functions for Vc,max also appear to underestimate Vc,max at low temperature. ESMs typically use a single multiplier (JVratio) to convert Vc,max to Jmax; however, we found that the JVratio of Arctic plants is higher than current estimates, suggesting that Arctic PFTs will be more responsive to rising carbon dioxide than currently projected. In addition, we are exploring remotely sensed methods to scale up key biochemical (e.g., leaf N, leaf mass per area) and physiological (e.g., Vc,max and Jmax) properties that drive model representation of photosynthesis in the Arctic. Our data suggest that the Arctic tundra has a much greater capacity for CO2 uptake, particularly at low temperature, and will be more CO2 responsive than is currently represented in ESMs. As we build robust relationships between physiology and spectral signatures, we hope to provide spatially and temporally resolved trait maps of key model parameters that can be ingested by new model frameworks or used to validate emergent model properties.
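For context, the FvCB formulation referenced here takes net assimilation as the minimum of a Rubisco-limited and an electron-transport-limited rate. The sketch below is a minimal, generic version with illustrative 25 °C kinetic constants; it is not the parameterization of any particular ESM.

```python
import numpy as np

def fvcb_net_assimilation(ci, vcmax, jmax, par=1000.0, rd=1.0,
                          kc=404.9, ko=278.4, o=210.0, gamma_star=42.75,
                          alpha=0.3, theta=0.9):
    """Minimal FvCB-type net CO2 assimilation (umol m-2 s-1).

    ci is intercellular CO2 (umol mol-1); kc and gamma_star are in umol mol-1,
    ko and o in mmol mol-1; all constants are illustrative 25 degC values.
    """
    # Rubisco (Vc,max) limited carboxylation rate
    wc = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))
    # Electron transport (Jmax) limited rate; J from a non-rectangular
    # hyperbola of absorbed light (par)
    a_, b_, c_ = theta, -(alpha * par + jmax), alpha * par * jmax
    j = (-b_ - np.sqrt(b_**2 - 4.0 * a_ * c_)) / (2.0 * a_)
    wj = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star)
    return min(wc, wj) - rd

# Raising the assumed JVratio (jmax/vcmax), as suggested by the Arctic
# measurements, increases the electron-transport-limited rate
print(fvcb_net_assimilation(ci=250.0, vcmax=60.0, jmax=1.7 * 60.0))
print(fvcb_net_assimilation(ci=250.0, vcmax=60.0, jmax=2.4 * 60.0))
```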
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To exploit them, it is important to identify, prior to satellite-based calibration, the model parameters that can change simulated spatial patterns. Our study is based on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows the spatial distribution of key soil parameters to change through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow does not reduce the spatial errors in AET, improving only the streamflow simulations. We will further examine the results of model calibration using only spatial objective functions measuring the association between observed and simulated AET maps, and another case combining spatial and streamflow metrics.
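As an illustration of the screening step, a minimal Latin-hypercube one-at-a-time (LH-OAT) style sensitivity measure can be sketched as below; the toy model function and parameter bounds are placeholders and do not represent the mHM interface.

```python
import numpy as np

def lh_oat_sensitivity(model, lower, upper, n_base=20, frac=0.05, seed=0):
    """Rough LH-OAT sensitivity: perturb one parameter at a time around
    Latin-hypercube base points and average the relative output change."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n_par = lower.size
    # Latin-hypercube base sample in [0, 1)
    u = (np.argsort(rng.random((n_base, n_par)), axis=0)
         + rng.random((n_base, n_par))) / n_base
    base = lower + u * (upper - lower)
    effects = np.zeros(n_par)
    for x in base:
        y0 = model(x)
        for i in range(n_par):
            xp = x.copy()
            xp[i] += frac * (upper[i] - lower[i])     # one-at-a-time perturbation
            effects[i] += abs(model(xp) - y0) / (abs(y0) + 1e-12)
    return effects / n_base

# Toy model whose output is dominated by the first two parameters
sens = lh_oat_sensitivity(lambda p: p[0]**2 + 0.5 * p[1] + 0.01 * p[2],
                          lower=[0, 0, 0], upper=[1, 1, 1])
```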
Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.
2016-11-08
We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses of ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
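A minimal version of such a two-phase accumulation model can be written as a single ordinary differential equation with phase-dependent import and export rates; the sketch below uses SciPy with illustrative rate values and switching time, not the fitted estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def accumulation(t, c, c_out, k_in1, k_out1, k_in2, k_out2, t_switch):
    """dC/dt = k_in * C_out - k_out * C, with phase-1 rates before
    t_switch and phase-2 rates afterwards (all values illustrative)."""
    k_in, k_out = (k_in1, k_out1) if t < t_switch else (k_in2, k_out2)
    return [k_in * c_out - k_out * c[0]]

sol = solve_ivp(accumulation, t_span=(0.0, 60.0), y0=[0.0],
                args=(10.0, 0.05, 0.02, 0.15, 0.05, 20.0), dense_output=True)
t = np.linspace(0.0, 60.0, 200)
internal_drug = sol.sol(t)[0]   # simulated intracellular drug time course
```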
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2013-05-01
We develop a hierarchical Bayesian model to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g., ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes; 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatio-temporal distribution of ferrous iron, sulfate and sulfide, together with the associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.
Parametric study of transport aircraft systems cost and weight
NASA Technical Reports Server (NTRS)
Beltramo, M. N.; Trapp, D. L.; Kimoto, B. W.; Marsh, D. P.
1977-01-01
The results of a NASA study to develop production cost estimating relationships (CERs) and weight estimating relationships (WERs) for commercial and military transport aircraft at the system level are presented. The systems considered correspond to the standard weight groups defined in Military Standard 1374 and are listed. These systems make up a complete aircraft exclusive of engines. The CER for each system (or CERs, in several cases) utilizes weight as the key parameter. Weights may be determined from detailed weight statements, if available, or by using the WERs developed, which are based on technical and performance characteristics generally available during preliminary design. The CERs that were developed provide a very useful tool for making preliminary estimates of the production cost of an aircraft. Likewise, the WERs provide a very useful tool for making preliminary estimates of aircraft weight based on conceptual design information.
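Weight-based CERs of this kind are commonly expressed as power laws fitted in log space; the sketch below shows the generic form cost = a * weight^b with illustrative numbers, not the study's coefficients.

```python
import numpy as np

def fit_power_law(weight, cost):
    """Fit cost = a * weight**b by linear regression in log space."""
    b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
    return np.exp(log_a), b

# Illustrative system-level data (weights in lb, costs in thousands of dollars)
w = np.array([1200.0, 2500.0, 4100.0, 6800.0])
c = np.array([900.0, 1700.0, 2600.0, 4000.0])
a, b = fit_power_law(w, c)
estimate = a * 3000.0**b   # preliminary cost estimate for a 3000-lb system
```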
On firework blasts and qualitative parameter dependency.
Zohdi, T I
2016-01-01
In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.
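A generic numerical analogue of such an energy-balance-plus-trajectory calculation is sketched below: the blast energy is split among fragments launched isotropically, which are then advanced under drag, gravity and buoyancy. All values are illustrative and this is not the author's analytical solution.

```python
import numpy as np

def blast_envelope(n_frag=200, e_blast=5.0e4, mass=0.02, radius=0.01,
                   rho_air=1.2, rho_frag=1500.0, cd=0.47, g=9.81,
                   dt=1e-3, t_end=10.0, seed=0):
    """Crude horizontal envelope reached by fragments sharing the blast
    energy equally and launched in random directions (SI units)."""
    rng = np.random.default_rng(seed)
    v0 = np.sqrt(2.0 * (e_blast / n_frag) / mass)        # initial fragment speed
    d = rng.normal(size=(n_frag, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)        # isotropic directions
    x, v = np.zeros((n_frag, 3)), v0 * d
    area = np.pi * radius**2
    for _ in range(int(t_end / dt)):                     # explicit Euler stepping
        speed = np.linalg.norm(v, axis=1, keepdims=True)
        drag = -0.5 * rho_air * cd * area * speed * v / mass
        gravity = np.array([0.0, 0.0, -g * (1.0 - rho_air / rho_frag)])
        v += dt * (drag + gravity)
        x += dt * v
    return np.linalg.norm(x[:, :2], axis=1).max()

print(blast_envelope())   # envelope radius in metres for the illustrative inputs
```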
Melanoma Cell Colony Expansion Parameters Revealed by Approximate Bayesian Computation
Vo, Brenda N.; Drovandi, Christopher C.; Pettitt, Anthony N.; Pettet, Graeme J.
2015-01-01
In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in developing and analysing mathematical models, far less progress has been made in understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226–268 µm2 h−1 and 311–351 µm2 h−1, and those of q are in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ. PMID:26642072
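The core of an ABC rejection scheme of this kind is short; the sketch below uses a toy simulator in place of the colony-expansion model, with uniform priors and a distance on summary statistics, purely to illustrate the mechanics.

```python
import numpy as np

def abc_rejection(simulate, summarize, obs_summary, priors,
                  n_draws=5000, quantile=0.01, seed=0):
    """Basic ABC rejection: sample from the priors, simulate, and keep the
    draws whose summary statistics are closest to the observed summary."""
    rng = np.random.default_rng(seed)
    thetas = np.column_stack([rng.uniform(lo, hi, n_draws) for lo, hi in priors])
    dist = np.array([np.linalg.norm(summarize(simulate(t, rng)) - obs_summary)
                     for t in thetas])
    return thetas[dist <= np.quantile(dist, quantile)]   # approximate posterior

# Toy stand-in for the colony model: summaries are the mean and spread of positions
simulate = lambda theta, rng: rng.normal(theta[0], theta[1], size=200)
summarize = lambda data: np.array([data.mean(), data.std()])
obs = summarize(np.random.default_rng(1).normal(250.0, 40.0, size=200))
posterior = abc_rejection(simulate, summarize, obs, priors=[(100, 400), (10, 100)])
```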
Novel Estimation of Pilot Performance Characteristics
NASA Technical Reports Server (NTRS)
Bachelder, Edward N.; Aponso, Bimal
2017-01-01
Two mechanisms internal to the pilot affect performance during a tracking task: 1) pilot equalization (i.e., lead/lag); and 2) pilot gain (i.e., sensitivity to the error signal). For some applications McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing required by the equalization, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed and coined by the authors as 'amplitude clipping', is shown to improve stability and performance and to reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise measurement of key pilot control parameters in real time. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps toward developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in precisely predicting the actual pilot settings and workload, and the observed tolerance to pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.
Using diurnal temperature signals to infer vertical groundwater-surface water exchange
Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.
Assessment of catchments' flooding potential: a physically-based analytical tool
NASA Astrophysics Data System (ADS)
Botter, G.; Basso, S.; Schirmer, M.
2016-12-01
The assessment of the flooding potential of river catchments is critical in many research and applied fields, ranging from river science and geomorphology to urban planning and the insurance industry. Predicting the magnitude and frequency of floods is key to preventing and mitigating the negative effects of high flows, and has therefore long been a focus of hydrologic research. Here, the recurrence intervals of seasonal flow maxima are estimated through a novel physically based analytic approach, which links the extremal distribution of streamflows to the stochastic dynamics of daily discharge. An analytical expression of the seasonal flood-frequency curve is provided, whose parameters embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which expresses catchment saturation prior to rainfall events, needs to be calibrated on the observed maxima. The method has been tested in a set of catchments featuring heterogeneous daily flow regimes. The model is able to reproduce the characteristic shapes of flood-frequency curves emerging in erratic and persistent flow regimes and provides good estimates of seasonal flow maxima in different climatic regions. Performance remains steady when estimating the magnitude of events with return periods longer than the available sample size. This makes the approach especially valuable for regions affected by data scarcity.
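A natural benchmark for such an analytical flood-frequency curve is the empirical one obtained from plotting positions; the sketch below computes Weibull-type recurrence intervals from a short series of seasonal maxima (values are illustrative).

```python
import numpy as np

def empirical_return_periods(seasonal_maxima):
    """Weibull plotting-position estimate of recurrence intervals (in seasons)
    for observed seasonal flow maxima, returned in descending flow order."""
    q = np.sort(np.asarray(seasonal_maxima, float))[::-1]
    rank = np.arange(1, q.size + 1)
    return q, (q.size + 1) / rank           # flow magnitude, return period

maxima = [310.0, 150.0, 420.0, 265.0, 198.0, 505.0, 230.0, 175.0]
flow, T = empirical_return_periods(maxima)  # largest event has T = 9 seasons here
```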
Basset, Antoine; Bouthemy, Patrick; Boulanger, Jérôme; Waharte, François; Salamero, Jean; Kervrann, Charles
2017-07-24
Characterizing membrane dynamics is a key issue to understand cell exchanges with the extra-cellular medium. Total internal reflection fluorescence microscopy (TIRFM) is well suited to focus on the late steps of exocytosis at the plasma membrane. However, it is still a challenging task to quantify (lateral) diffusion and estimate local dynamics of proteins. A new model was introduced to represent the behavior of cargo transmembrane proteins during the vesicle fusion to the plasma membrane at the end of the exocytosis process. Two biophysical parameters, the diffusion coefficient and the release rate parameter, are automatically estimated from TIRFM image sequences, to account for both the lateral diffusion of molecules at the membrane and the continuous release of the proteins from the vesicle to the plasma membrane. Quantitative evaluation on 300 realistic computer-generated image sequences demonstrated the efficiency and accuracy of the method. The application of our method on 16 real TIRFM image sequences additionally revealed differences in the dynamic behavior of Transferrin Receptor (TfR) and Langerin proteins. An automated method has been designed to simultaneously estimate the diffusion coefficient and the release rate for each individual vesicle fusion event at the plasma membrane in TIRFM image sequences. It can be exploited for further deciphering cell membrane dynamics.
Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution assumed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the noise-free 3D complex wavelet coefficients is proposed that is able to model their main statistical properties. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which is able to capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation, which results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained using a Gaussian or two-sided Rayleigh noise distribution and a homomorphic or nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.
Analytical flow duration curves for summer streamflow in Switzerland
NASA Astrophysics Data System (ADS)
Santos, Ana Clara; Portela, Maria Manuela; Rinaldo, Andrea; Schaefli, Bettina
2018-04-01
This paper proposes a systematic assessment of the performance of an analytical modeling framework for streamflow probability distributions for a set of 25 Swiss catchments. These catchments span a wide range of hydroclimatic regimes, including snow-influenced streamflows. The model parameters are calculated from a spatially averaged gridded daily precipitation data set and from observed daily discharge time series, both in a forward estimation mode (direct parameter calculation from observed data) and in an inverse estimation mode (maximum likelihood estimation). The performance of the linear and the nonlinear model versions is assessed in terms of reproducing observed flow duration curves and their natural variability. Overall, the nonlinear model version outperforms the linear model for all regimes, but the linear model shows a notable performance increase with catchment elevation. More importantly, the obtained results demonstrate that the analytical model performs well for summer discharge for all analyzed streamflow regimes, ranging from rainfall-driven regimes with summer low flow to snow and glacier regimes with summer high flow. These results suggest that the model's encoding of discharge-generating events based on stochastic soil moisture dynamics is more flexible than previously thought. As shown in this paper, the presence of snowmelt or ice melt is accommodated by a relative increase in the discharge-generating frequency, a key parameter of the model. Explicit quantification of this frequency increase as a function of mean catchment meteorological conditions is left for future research.
On Patarin's Attack against the lIC Scheme
NASA Astrophysics Data System (ADS)
Ogura, Naoki; Uchiyama, Shigenori
In 2007, Ding et al. proposed an attractive scheme called the l-Invertible Cycles (lIC) scheme. lIC is one of the most efficient multivariate public-key cryptosystems (MPKC); such schemes are suitable for use under limited computational resources. In 2008, an efficient attack against lIC using Gröbner basis algorithms was proposed by Fouque et al. However, they only estimated the complexity of their attack based on experimental results. On the other hand, Patarin had proposed an efficient attack against some multivariate public-key cryptosystems; we call this attack Patarin's attack. The complexity of Patarin's attack can be estimated by finding relations corresponding to each scheme. In this paper, we propose another practical attack against the lIC encryption/signature scheme. We estimate the complexity of our attack analytically (not experimentally) by adapting Patarin's attack. The attack can also be applied to the lIC- scheme. Moreover, we show some experimental results of a practical attack against the lIC/lIC- schemes. This is the first implementation of both our proposed attack and an attack based on Gröbner basis algorithms for the even case, that is, when the parameter l is even.
Estimation of Soil Moisture from Optical and Thermal Remote Sensing: A Review
Zhang, Dianjun; Zhou, Guoqing
2016-01-01
As an important parameter in numerous recent environmental studies, soil moisture (SM) influences the exchange of water and energy at the interface between the land surface and the atmosphere. Accurate estimation of the spatio-temporal variations of SM is critical for numerous large-scale terrestrial studies. Although microwave remote sensing provides many algorithms to obtain SM at large scale (such as SMOS and SMAP), resulting in many data products, these products are mostly of low resolution and are not applicable at the small-catchment or field scale. Estimation of SM from optical and thermal remote sensing has been studied for many years and significant progress has been made. In contrast to previous reviews, this paper presents a new, comprehensive and systematic review of the use of optical and thermal remote sensing for estimating SM. The physical basis and status of the estimation methods are analyzed and summarized in detail. The most important and latest advances in soil moisture estimation using temporal information are presented. SM estimation from optical and thermal remote sensing mainly depends on the relationship between SM and the surface reflectance or a vegetation index; thermal infrared remote sensing methods use the relationship between SM and the surface temperature or variations of the surface temperature/vegetation index. These approaches often involve complex derivations and many approximations. Therefore, combinations of optical and thermal infrared remotely sensed data can provide more valuable information for SM estimation. Moreover, the advantages and weaknesses of different approaches are compared, and applicable conditions as well as key issues in current soil moisture estimation algorithms are discussed. Finally, key problems and suggested solutions are proposed for future research. PMID:27548168
A fuzzy rumor spreading model based on transmission capacity
NASA Astrophysics Data System (ADS)
Zhang, Yi; Xu, Jiuping; Wu, Yue
This paper proposes a rumor spreading model that considers three main factors: the event importance, the event ambiguity, and the public's critical sense, each of which is defined by decision makers using linguistic descriptions and then transformed into a triangular fuzzy number. To calculate the resultant force of these three factors, the transmission capacity and a new parameter category with fuzzy variables are determined. A rumor spreading model is then proposed that has fuzzy parameters rather than the fixed parameters of traditional models, and that considers the comprehensive factors affecting rumors from three aspects rather than examining special factors from a particular aspect. The proposed rumor spreading model is tested using different parameters for several different conditions on BA networks, and three special cases are simulated. The simulation results for all three cases suggest that events of low importance, events that merely clarify facts, and events met with a strongly critical public do not result in rumors. The model assessment results were therefore shown to be in agreement with reality. Parameters for the model were then determined and applied to an analysis of the 7.23 Yong-Wen line major transportation accident (YWMTA). When the simulated data were compared with the real data from this accident, the results demonstrated that the interval for the rumor-spreading key point in the model was accurate, and that the key point for the YWMTA rumor spread fell into the range estimated by the model.
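The fuzzy arithmetic underlying such linguistic inputs can be illustrated with triangular fuzzy numbers and centroid defuzzification, as in the sketch below; the class, the membership supports and the simple averaging of the three factors are illustrative assumptions, not the paper's exact aggregation rule.

```python
class TriangularFuzzy:
    """Triangular fuzzy number with support [a, c] and peak b (a <= b <= c)."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
    def __add__(self, other):
        # Addition of triangular fuzzy numbers is component-wise
        return TriangularFuzzy(self.a + other.a, self.b + other.b, self.c + other.c)
    def defuzzify(self):
        """Centroid defuzzification to a crisp value."""
        return (self.a + self.b + self.c) / 3.0

# Linguistic assessments mapped to fuzzy numbers (illustrative supports)
importance = TriangularFuzzy(0.6, 0.8, 1.0)   # "high importance"
ambiguity  = TriangularFuzzy(0.4, 0.6, 0.8)   # "fairly ambiguous"
criticism  = TriangularFuzzy(0.0, 0.1, 0.3)   # "weak critical sense"
transmission_capacity = (importance + ambiguity + criticism).defuzzify() / 3.0
```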
Trask, Amanda E; Bignal, Eric M; McCracken, Davy I; Piertney, Stuart B; Reid, Jane M
2017-09-01
A population's effective size (Ne) is a key parameter that shapes rates of inbreeding and loss of genetic diversity, thereby influencing evolutionary processes and population viability. However, estimating Ne, and identifying key demographic mechanisms that underlie the Ne to census population size (N) ratio, remains challenging, especially for small populations with overlapping generations and substantial environmental and demographic stochasticity and hence dynamic age structure. A sophisticated demographic method of estimating Ne/N, which uses Fisher's reproductive value to account for dynamic age structure, has been formulated. However, this method requires detailed individual- and population-level data on sex- and age-specific reproduction and survival, and has rarely been implemented. Here, we use the reproductive value method and detailed demographic data to estimate Ne/N for a small and apparently isolated red-billed chough (Pyrrhocorax pyrrhocorax) population of high conservation concern. We additionally calculated two single-sample molecular genetic estimates of Ne to corroborate the demographic estimate and examine evidence for unobserved immigration and gene flow. The demographic estimate of Ne/N was 0.21, reflecting a high total demographic variance (σ²dg) of 0.71. Females and males made similar overall contributions to σ²dg. However, contributions varied among sex-age classes, with greater contributions from 3-year-old females than males, but greater contributions from ≥5-year-old males than females. The demographic estimate of Ne was ~30, suggesting that rates of increase of inbreeding and loss of genetic variation per generation will be relatively high. Molecular genetic estimates of Ne computed from linkage disequilibrium and approximate Bayesian computation were approximately 50 and 30, respectively, providing no evidence of substantial unobserved immigration which could bias demographic estimates of Ne. Our analyses identify key sex-age classes contributing to demographic variance and thus decreasing Ne/N in a small age-structured population inhabiting a variable environment. They thereby demonstrate how assessments of Ne can incorporate stochastic sex- and age-specific demography and elucidate key demographic processes affecting a population's evolutionary trajectory and viability. Furthermore, our analyses show that Ne for the focal chough population is critically small, implying that management to re-establish genetic connectivity may be required to ensure population viability. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.
Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets
NASA Technical Reports Server (NTRS)
Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.
1978-01-01
A parameter-estimation method is described for verifying the mathematical models of mixed dynamic systems (i.e., systems combining interacting components from various engineering fields) against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to automatically adjust the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
Mad cows and computer models: the U.S. response to BSE.
Ackerman, Frank; Johnecheck, Wendy A
2008-01-01
The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.
Integration of manatee life-history data and population modeling
Eberhardt, L.L.; O'Shea, Thomas J.; O'Shea, Thomas J.; Ackerman, B.B.; Percival, H. Franklin
1995-01-01
Aerial counts and the number of deaths have been a major focus of attention in attempts to understand the population status of the Florida manatee (Trichechus manatus latirostris). Uncertainties associated with these data have made interpretation difficult. However, knowledge of manatee life-history attributes has increased and now permits the development of a population model. We describe a provisional model based on the classical approach of Lotka. Parameters in the model are based on data from other papers in this volume and draw primarily on observations from the Crystal River, Blue Spring, and Atlantic Coast areas. The model estimates λ (the finite rate of increase) at each study area, and application of the delta method provides estimates of variance components and partial derivatives of λ with respect to key input parameters (reproduction, adult survival, and early survival). In some study areas, only approximations of some parameters are available. Estimates of λ and coefficients of variation (in parentheses) of manatees were 1.07 (0.009) in the Crystal River, 1.06 (0.012) at Blue Spring, and 1.01 (0.012) on the Atlantic Coast. Changing adult survival has a major effect on λ; early-age survival has the smallest effect. Bootstrap comparisons of population growth estimates from trend counts in the Crystal River and at Blue Spring with the reproduction and survival data suggest that the higher observed rates from counts are probably not due to chance. Bootstrapping for variance estimates based on reproduction and survival data from manatees at Blue Spring and in the Crystal River provided estimates of λ, adult survival, and rates of reproduction that were similar to those obtained by other methods. Our estimates are preliminary and suggest improvements for future data collection and analysis. However, the results support efforts to reduce mortality as the most effective means to promote the increased growth necessary for the eventual recovery of the Florida manatee population.
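For reference, the Lotka-based estimate of the finite rate of increase solves the discrete Euler-Lotka equation for λ given age-specific survivorship and fecundity; the sketch below uses an illustrative life-history schedule, not the manatee data.

```python
import numpy as np
from scipy.optimize import brentq

def finite_rate_of_increase(survivorship, fecundity):
    """Solve the discrete Euler-Lotka equation 1 = sum_x lambda**-x * l_x * m_x
    for the finite rate of increase lambda.

    survivorship : l_x, probability of surviving from birth to age x
    fecundity    : m_x, female offspring per female of age x
    """
    ages = np.arange(1, len(survivorship) + 1, dtype=float)
    l_x, m_x = np.asarray(survivorship), np.asarray(fecundity)
    f = lambda lam: np.sum(lam**(-ages) * l_x * m_x) - 1.0
    return brentq(f, 0.5, 2.0)

# Illustrative schedule: first reproduction at age 4, high adult survival
l_x = 0.7 * 0.96**np.arange(0, 30)                   # early then adult survival
m_x = np.where(np.arange(1, 31) >= 4, 0.15, 0.0)     # female calves per female-year
lam = finite_rate_of_increase(l_x, m_x)              # slightly above 1 here
```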
NASA Astrophysics Data System (ADS)
Skaugen, T.; Mengistu, Z.
2015-10-01
In this study we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the logarithms of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff and are hence estimated prior to calibration. Key principles guiding the evaluation of the new subsurface storage routine have been (a) to minimize the number of parameters estimated through the often arbitrary fitting used to optimize runoff predictions (calibration) and (b) to maximize the range of testing conditions (i.e., large-sample hydrology). The new storage routine has been implemented in the already parameter-parsimonious Distance Distribution Dynamics (DDD) model and tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures, DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage, were compared. No loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe Efficiency of 0.68 was found using the new estimated storage routine, compared with 0.66 using the calibrated storage routine. The average Kling-Gupta Efficiency was 0.69 and 0.70 for the new and old storage routines, respectively. Runoff recessions are modelled more realistically using the new approach, since the root mean square error between the means of observed and simulated recessions was reduced by almost 50% using the new storage routine.
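The recession statistics that the new routine is built on can be extracted directly from a daily discharge series, for example as the positive log-space decrements below; the synthetic series is illustrative and the routine's full parameterization is not reproduced here.

```python
import numpy as np

def recession_differences(q, min_flow=1e-3):
    """Log-space recession increments Lambda = ln(Q_t) - ln(Q_{t+1}),
    kept only for recession days (decreasing flow above a minimum)."""
    q = np.asarray(q, float)
    lam = np.log(q[:-1]) - np.log(q[1:])
    return lam[(lam > 0) & (q[:-1] > min_flow)]

# Illustrative synthetic daily discharge series (one year)
rng = np.random.default_rng(0)
q = 5.0 * np.exp(np.cumsum(rng.normal(-0.02, 0.1, size=365)))
mean_recession = recession_differences(q).mean()
# Together with the mean annual runoff, such statistics give the storage
# parameters directly, i.e. prior to any calibration.
```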
Reliability optimization design of the gear modification coefficient based on the meshing stiffness
NASA Astrophysics Data System (ADS)
Wang, Qianqian; Wang, Hui
2018-04-01
Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering random parameters, the reliability-based optimization design of the gear modification is investigated. The dimension-reduction and point-estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. The comparison of the dynamic amplitude results before and after optimization indicates that this research is useful for the reduction of vibration and noise and for the improvement of reliability.
Uncertainty Quantification for CO2-Enhanced Oil Recovery
NASA Astrophysics Data System (ADS)
Dai, Z.; Middleton, R.; Bauman, J.; Viswanathan, H.; Fessenden-Rahn, J.; Pawar, R.; Lee, S.
2013-12-01
CO2-Enhanced Oil Recovery (EOR) is currently an option for permanently sequestering CO2 in oil reservoirs while increasing oil/gas production economically. In this study we have developed a framework for understanding CO2 storage potential within an EOR-sequestration environment at the Farnsworth Unit of the Anadarko Basin in northern Texas. By coupling an EOR tool, SENSOR (CEI, 2011), with an uncertainty quantification tool, PSUADE (Tong, 2011), we conduct an integrated Monte Carlo simulation of water, oil/gas components and CO2 flow and reactive transport in the heterogeneous Morrow formation to identify the key controlling processes and optimal parameters for CO2 sequestration and EOR. A global sensitivity and response surface analysis is conducted with PSUADE to build numerically the relationship among CO2 injectivity, oil/gas production, reservoir parameters and the distance between injection and production wells. The results indicate that reservoir permeability and porosity are the key parameters controlling the CO2 injection and the oil and gas (CH4) recovery rates. The distance between the injection and production wells has a large impact on the oil and gas recovery and the net CO2 injection rates. The CO2 injectivity increases with increasing reservoir permeability and porosity. The distance between injection and production wells is the key parameter for designing an EOR pattern (such as a five- or nine-spot pattern). The optimal distance for a five-spot-pattern EOR at this site is estimated from the response surface analysis to be around 400 meters. Next, we are building the machinery into our risk assessment framework CO2-PENS to utilize these response surfaces and evaluate the operational risk for CO2 sequestration and EOR at this site.
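The response-surface step for the well spacing can be illustrated with a simple quadratic fit of recovery against distance; the data points below are illustrative stand-ins for Monte Carlo results, not the study's simulations.

```python
import numpy as np

# Quadratic response surface of oil recovery vs. well spacing, fitted to
# illustrative values (distance in m, recovery in arbitrary units)
distance = np.array([200.0, 300.0, 400.0, 500.0, 600.0])
recovery = np.array([0.61, 0.78, 0.84, 0.80, 0.69])
coeffs = np.polyfit(distance, recovery, 2)          # c2*d**2 + c1*d + c0
optimal_distance = -coeffs[1] / (2.0 * coeffs[0])   # vertex of the fitted parabola
```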
Retrieval of Aerosol Parameters from Continuous H24 Lidar-Ceilometer Measurements
NASA Astrophysics Data System (ADS)
Dionisi, D.; Barnaba, F.; Costabile, F.; Di Liberto, L.; Gobbi, G. P.; Wille, H.
2016-06-01
Ceilometer technology is increasingly applied to the monitoring and characterization of tropospheric aerosols. In this work, a method to estimate key aerosol parameters (extinction coefficient, surface area concentration and volume concentration) from ceilometer measurements is presented. A numerical model has been set up to derive mean functional relationships between backscatter and the above-mentioned parameters, based on a large set of simulated aerosol optical properties. Good agreement was found between the modeled backscatter and extinction coefficients and those measured by the EARLINET Raman lidars. The developed methodology has then been applied to the measurements acquired by a prototype Polarization Lidar-Ceilometer (PLC). This PLC instrument was developed within the EC LIFE+ project "DIAPASON" as an upgrade of the commercial, single-channel Jenoptik CHM15k system. The PLC ran continuously (24 h per day) close to Rome (Italy) for a whole year (2013-2014). Retrievals of the aerosol backscatter coefficient at 1064 nm and of the relevant aerosol properties were performed using the proposed methodology. This information, coupled with aerosol-type identification made possible by the depolarization channel, allowed a year-round characterization of the aerosol field at this site. Examples are given to show how this technology, coupled with appropriate data inversion methods, is potentially useful in the operational monitoring of parameters of air-quality and meteorological interest.
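For comparison, the simplest backscatter-to-extinction conversion assumes a constant extinction-to-backscatter (lidar) ratio, as sketched below; the paper's model-derived mean functional relationships are more elaborate, and the lidar-ratio value here is only an assumption.

```python
import numpy as np

def extinction_from_backscatter(beta, lidar_ratio=45.0):
    """Extinction coefficient (1/m) from particle backscatter (1/(m sr))
    using an assumed extinction-to-backscatter (lidar) ratio in sr."""
    return lidar_ratio * np.asarray(beta)

beta_1064 = np.array([2.0e-7, 8.0e-7, 1.5e-6])    # illustrative profile values
alpha = extinction_from_backscatter(beta_1064)     # sr * 1/(m sr) = 1/m
```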
Taiwo, Oluwadamilola O; Finegan, Donal P; Eastwood, David S; Fife, Julie L; Brown, Leon D; Darr, Jawwad A; Lee, Peter D; Brett, Daniel J L; Shearing, Paul R
2016-09-01
Lithium-ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium-ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3-D imaging techniques, quantitative assessment of 3-D microstructures from 2-D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two-dimensional (2-D) data sets. In this study, stereological prediction and three-dimensional (3-D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium-ion battery electrodes were imaged using synchrotron-based X-ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2-D image sections generated from tomographic imaging, whereas direct 3-D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2-D image sections is bound to be associated with ambiguity and that volume-based 3-D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially-dependent parameters, such as tortuosity and pore-phase connectivity. © 2016 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Changyong, Dou; Huadong, Guo; Chunming, Han; yuquan, Liu; Xijuan, Yue; Yinghui, Zhao
2014-03-01
Raw signal simulation is a useful tool for system design, mission planning, processing algorithm testing, and inversion algorithm design for Synthetic Aperture Radar (SAR). Due to the wide-ranging and frequent variations of the aircraft's trajectory and attitude, and the low accuracy of the Position and Orientation System (POS) recording data, it is difficult to quantitatively study the sensitivity of the key parameters of an airborne Interferometric SAR (InSAR) system, i.e., the baseline length and inclination, the absolute phase, and the orientation of the antennas, resulting in challenges for its applications. Furthermore, the imprecise estimation of the installation offsets between the Global Positioning System (GPS), the Inertial Measurement Unit (IMU) and the InSAR antennas compounds the issue. An airborne InSAR simulation based on a rigorous geometric model and real navigation data is proposed in this paper, providing a way to quantitatively study the key parameters and to evaluate their effects on applications of airborne InSAR, such as photogrammetric mapping, high-resolution Digital Elevation Model (DEM) generation, and surface deformation measurement by Differential InSAR technology. The simulation can also provide a reference for the optimal design of the InSAR system and for the improvement of InSAR data processing technologies such as motion compensation, imaging, image co-registration, and application parameter retrieval.
Ambient air pollution and semen quality.
Nobles, Carrie J; Schisterman, Enrique F; Ha, Sandie; Kim, Keewan; Mumford, Sunni L; Buck Louis, Germaine M; Chen, Zhen; Liu, Danping; Sherman, Seth; Mendola, Pauline
2018-05-01
Ambient air pollution is associated with systemic increases in oxidative stress, to which sperm are particularly sensitive. Although decrements in semen quality represent a key mechanism for impaired fecundability, prior research has not established a clear association between air pollution and semen quality. To address this, we evaluated the association between ambient air pollution and semen quality among men with moderate air pollution exposure. Of 501 couples in the LIFE study, 467 male partners provided one or more semen samples. Average residential exposure to criteria air pollutants and fine particle constituents in the 72 days before ejaculation was estimated using modified Community Multiscale Air Quality models. Generalized estimating equation models estimated the association between air pollutants and semen quality parameters (volume, count, percent hypo-osmotic swollen, motility, sperm head, morphology and sperm chromatin parameters). Models adjusted for age, body mass index, smoking and season. Most associations between air pollutants and semen parameters were small. However, associations were observed for an interquartile increase in fine particulates ≤2.5 µm and decreased sperm head size, including -0.22 (95% CI -0.34, -0.11) µm 2 for area, -0.06 (95% CI -0.09, -0.03) µm for length and -0.09 (95% CI -0.19, -0.06) µm for perimeter. Fine particulates were also associated with 1.03 (95% CI 0.40, 1.66) greater percent sperm head with acrosome. Air pollution exposure was not associated with semen quality, except for sperm head parameters. Moderate levels of ambient air pollution may not be a major contributor to semen quality. Published by Elsevier Inc.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
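A single ensemble-based parameter update of the kind used in such systems can be sketched as a Kalman-like regression of the parameter ensemble onto the model-equivalent observation; the sketch below is generic and illustrative, not the assimilation system used in the study.

```python
import numpy as np

def ensemble_parameter_update(params, model_obs, obs, obs_err, seed=0):
    """One ensemble update: regress the parameter ensemble onto the
    model-equivalent observation and nudge toward the measured value."""
    params, model_obs = np.asarray(params, float), np.asarray(model_obs, float)
    cov_py = np.cov(params, model_obs)[0, 1]           # parameter-observation covariance
    var_y = model_obs.var(ddof=1) + obs_err**2
    gain = cov_py / var_y                              # Kalman-like gain
    rng = np.random.default_rng(seed)
    perturbed_obs = obs + obs_err * rng.normal(size=params.size)
    return params + gain * (perturbed_obs - model_obs)

# Illustrative: 20-member ensemble of a solar-penetration-depth-like parameter
rng = np.random.default_rng(1)
theta = rng.normal(25.0, 5.0, size=20)                  # prior ensemble (m)
sst = 300.0 - 0.1 * theta + rng.normal(0.0, 0.2, 20)    # model-equivalent SST (K)
theta_new = ensemble_parameter_update(theta, sst, obs=297.2, obs_err=0.3)
```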
Estimating the Proportion of True Null Hypotheses Using the Pattern of Observed p-values
Tong, Tiejun; Feng, Zeny; Hilton, Julia S.; Zhao, Hongyu
2013-01-01
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key for many multiple testing procedures. Most existing methods for estimating π0 in the literature are motivated from the independence assumption of test statistics, which is often not true in reality. Simulations indicate that most existing estimators in the presence of the dependence among test statistics can be poor, mainly due to the increase of variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 by incorporating the distribution pattern of the observed p-values as a practical approach to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate for the proportion of true-null p-values in (λ, 1] over the whole range [0, 1] instead of using the expected proportion at 1 − λ. We find that the proposed estimators may substantially decrease the variance of the estimated true null proportion and thus improve the overall performance. PMID:24078762
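A simple, closely related estimator of π0 fits a line to the tail proportions π0(λ) = #{p > λ}/(m(1 − λ)) and extrapolates toward λ = 1; the sketch below illustrates this variant on synthetic p-values and is not the authors' exact linear-fit procedure.

```python
import numpy as np

def estimate_pi0(pvals, lambdas=None):
    """Estimate the proportion of true nulls pi0 from observed p-values by
    fitting a line to pi0(lambda) = #{p > lambda} / (m * (1 - lambda))."""
    if lambdas is None:
        lambdas = np.arange(0.05, 0.96, 0.05)
    p = np.asarray(pvals)
    m = p.size
    pi0_lam = np.array([(p > lam).sum() / (m * (1.0 - lam)) for lam in lambdas])
    slope, intercept = np.polyfit(lambdas, pi0_lam, 1)
    return float(np.clip(slope * 1.0 + intercept, 0.0, 1.0))  # value at lambda -> 1

# Illustrative mixture: 80% true nulls (uniform p) and 20% non-nulls (small p)
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=800), rng.beta(0.5, 20.0, size=200)])
pi0_hat = estimate_pi0(pvals)   # should land near 0.8 for this mixture
```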
Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests
NASA Technical Reports Server (NTRS)
Potter, C. S.; Peterson, David L. (Technical Monitor)
1997-01-01
Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of the first order in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessment of anthropogenic disturbance on regional gas budgets, and effects of long-term climate variability on biosphere-atmosphere exchange.
Evaluating the impact of the HIV pandemic on measles control and elimination.
Helfand, Rita F; Moss, William J; Harpaz, Rafael; Scott, Susana; Cutts, Felicity
2005-05-01
To estimate the impact of the HIV pandemic on vaccine-acquired population immunity to measles virus because high levels of population immunity are required to eliminate transmission of measles virus in large geographical areas, and HIV infection can reduce the efficacy of measles vaccination. A literature review was conducted to estimate key parameters relating to the potential impact of HIV infection on the epidemiology of measles in sub-Saharan Africa; parameters included the prevalence of HIV, child mortality, perinatal HIV transmission rates and protective immune responses to measles vaccination. These parameter estimates were incorporated into a simple model, applicable to regions that have a high prevalence of HIV, to estimate the potential impact of HIV infection on population immunity against measles. The model suggests that the HIV pandemic should not introduce an insurmountable barrier to measles control and elimination, in part because higher rates of primary and secondary vaccine failure among HIV-infected children are counteracted by their high mortality rate. The HIV pandemic could result in a 2-3% increase in the proportion of the birth cohort susceptible to measles, and more frequent supplemental immunization activities (SIAs) may be necessary to control or eliminate measles. In the model the optimal interval between SIAs was most influenced by the coverage rate for routine measles vaccination. The absence of a second opportunity for vaccination resulted in the greatest increase in the number of susceptible children. These results help explain the initial success of measles elimination efforts in southern Africa, where measles control has been achieved in a setting of high HIV prevalence.
Zimmer, Christoph
2016-01-01
Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often shows intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. The Fisher information matrix is the central measure for experimental design as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for application to realistic-size models.
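As a point of reference for how a Fisher information matrix enters experimental design, the sketch below computes the standard Gaussian-noise FIM for a toy deterministic model from finite-difference sensitivities; it is not the MSS-based construction of the article, and the model, parameter values and noise level are assumed for illustration only.

```python
import numpy as np

def model(theta, t):
    # Toy exponential-decay model y(t) = A * exp(-k t); theta = (A, k).
    A, k = theta
    return A * np.exp(-k * t)

def fisher_information(theta, t, sigma, h=1e-6):
    """FIM for iid Gaussian observation noise: F = S^T S / sigma^2,
    with sensitivities S from central finite differences."""
    theta = np.asarray(theta, dtype=float)
    S = np.zeros((len(t), len(theta)))
    for j in range(len(theta)):
        dp, dm = theta.copy(), theta.copy()
        dp[j] += h
        dm[j] -= h
        S[:, j] = (model(dp, t) - model(dm, t)) / (2 * h)
    return S.T @ S / sigma**2

t = np.linspace(0.0, 5.0, 20)       # candidate sampling design
F = fisher_information([2.0, 0.7], t, sigma=0.1)
print(np.linalg.det(F))             # D-optimality score for comparing designs
```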
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier modulation (BOC), as a modernized signal structure, is adopted to achieve significant performance enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals would not be optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability for different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are its high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transformed to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
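The segmentation-and-averaging step can be illustrated with a generic averaged-FFT spectrum estimate. The toy sketch below (a two-tone signal buried in noise, not a BOC correlator output) only shows how averaging the FFTs of shorter segments suppresses noise while keeping the per-segment transform cheap.

```python
import numpy as np

def averaged_fft(x, n_seg):
    """Split x into n_seg equal segments, FFT each, and average the
    magnitudes; averaging reduces the noise variance roughly by n_seg."""
    seg_len = len(x) // n_seg
    segs = x[:seg_len * n_seg].reshape(n_seg, seg_len)
    return np.mean(np.abs(np.fft.fft(segs, axis=1)), axis=0)

rng = np.random.default_rng(1)
n = 8192
t = np.arange(n)
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.12 * t)
x += rng.normal(scale=2.0, size=n)          # heavy additive noise

spectrum = averaged_fft(x, n_seg=16)
print(spectrum.argmax())   # dominant-tone bin within one 512-sample segment
```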
Carriquí, Marc; Douthe, Cyril; Molins, Arántzazu; Flexas, Jaume
2018-05-10
Mesophyll conductance to CO2 (gm), a key photosynthetic trait, is strongly constrained by leaf anatomy. Leaf anatomical parameters such as cell wall thickness and chloroplast area exposed to the mesophyll intercellular airspace have been demonstrated to determine gm in species with diverging phylogeny, leaf structure and ontogeny. However, the potential implication of leaf anatomy, especially chloroplast movement, on the short-term response of gm to rapid changes (i.e. seconds to minutes) under different environmental conditions (CO2, light or temperature) has not been examined. The aim of this study was to determine whether the observed rapid variations of gm in response to variations of light and CO2 could be explained by changes in any leaf anatomical arrangements. When compared to high light and ambient CO2, the values of gm estimated by chlorophyll fluorescence decreased under high CO2 and increased at low CO2, while it decreased with decreasing light. Nevertheless, no changes in anatomical parameters, including chloroplast distribution, were found. Hence, the gm estimated by analytical models based on anatomical parameters was constant under varying light and CO2. Considering this discrepancy between anatomy and chlorophyll fluorescence estimates, it is concluded that apparent fast gm variations should be due to artifacts in its estimation and/or to changes in the biochemical components acting on diffusional properties of the leaf (e.g. aquaporins and carbonic anhydrase). This article is protected by copyright. All rights reserved.
Assessment of type II diabetes mellitus using irregularly sampled measurements with missing data.
Barazandegan, Melissa; Ekram, Fatemeh; Kwok, Ezra; Gopaluni, Bhushan; Tulsyan, Aditya
2015-04-01
Diabetes mellitus is one of the leading diseases in the developed world. In order to better regulate blood glucose in a diabetic patient, improved modelling of insulin-glucose dynamics is a key factor in the treatment of diabetes mellitus. In the current work, the insulin-glucose dynamics in type II diabetes mellitus are modelled using a stochastic nonlinear state-space model. Estimating the parameters of such a model is difficult as only a few blood glucose and insulin measurements per day are available in a non-clinical setting. Therefore, developing a predictive model of the blood glucose of a person with type II diabetes mellitus is important when the glucose and insulin concentrations are only available at irregular intervals. To overcome these difficulties, we resort to online sequential Monte Carlo (SMC) estimation of the states and parameters of the state-space model for type II diabetic patients under various levels of randomly missing clinical data. Our results show that this method is efficient in monitoring and estimating the dynamics of peripheral glucose, insulin and incretin concentrations when 10, 25 and 50% of the simulated clinical data were randomly removed.
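A minimal bootstrap particle filter conveys the SMC machinery involved. The sketch below runs on a standard nonlinear toy benchmark rather than the glucose-insulin model, and it simply skips the measurement update whenever an observation is missing, mimicking irregular sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(T=100, q=1.0, r=1.0, p_missing=0.3):
    """Classic nonlinear benchmark; NaN marks a missing measurement."""
    x = np.zeros(T)
    y = np.full(T, np.nan)
    for k in range(1, T):
        x[k] = 0.5 * x[k-1] + 25 * x[k-1] / (1 + x[k-1]**2) \
               + 8 * np.cos(1.2 * k) + rng.normal(scale=np.sqrt(q))
        if rng.random() > p_missing:
            y[k] = x[k]**2 / 20 + rng.normal(scale=np.sqrt(r))
    return x, y

def bootstrap_pf(y, n=2000, q=1.0, r=1.0):
    particles = rng.normal(scale=2.0, size=n)
    est = np.zeros(len(y))
    for k in range(1, len(y)):
        particles = 0.5 * particles + 25 * particles / (1 + particles**2) \
                    + 8 * np.cos(1.2 * k) + rng.normal(scale=np.sqrt(q), size=n)
        if not np.isnan(y[k]):                    # skip update if data missing
            w = np.exp(-0.5 * (y[k] - particles**2 / 20)**2 / r)
            w /= w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]   # resample
        est[k] = particles.mean()
    return est

x, y = simulate()
print(np.sqrt(np.mean((bootstrap_pf(y) - x)**2)))   # RMSE of the state estimate
```

Parameter estimation on top of this (as in the paper) would typically augment the state with the unknown parameters or wrap the filter in a particle-marginal scheme; the sketch covers state filtering only.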
One-sided measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Cao, Wen-Fei; Zhen, Yi-Zheng; Zheng, Yu-Lin; Li, Li; Chen, Zeng-Bing; Liu, Nai-Le; Chen, Kai
2018-01-01
Measurement-device-independent quantum key distribution (MDI-QKD) protocol was proposed to remove all the detector side channel attacks, while its security relies on the trusted encoding systems. Here we propose a one-sided MDI-QKD (1SMDI-QKD) protocol, which enjoys detection loophole-free advantage, and at the same time weakens the state preparation assumption in MDI-QKD. The 1SMDI-QKD can be regarded as a modified MDI-QKD, in which Bob's encoding system is trusted, while Alice's is uncharacterized. For the practical implementation, we also provide a scheme by utilizing coherent light source with an analytical two decoy state estimation method. Simulation with realistic experimental parameters shows that the protocol has a promising performance, and thus can be applied to practical QKD applications.
Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
Stability and delay sensitivity of neutral fractional-delay systems.
Xu, Qi; Shi, Min; Wang, Zaihua
2016-08-01
This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is the calculation of the number of unstable characteristic roots, which is described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limits of the integral are given in two concise ways, parameter dependent or parameter independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using rough integral estimation. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to changes in the time delays. Examples are given to demonstrate the proposed method as well as the delay sensitivity.
Modeling the stock price returns volatility using GARCH(1,1) in some Indonesia stock prices
NASA Astrophysics Data System (ADS)
Awalludin, S. A.; Ulfah, S.; Soro, S.
2018-01-01
In the financial field, volatility is one of the key variables for making appropriate decisions. Moreover, modeling volatility is needed in derivative pricing, risk management, and portfolio management. For this reason, this study presented the widely used volatility model GARCH(1,1) for estimating the volatility of daily returns of Indonesian stock prices from July 2007 to September 2015. The returns are obtained from the stock price by differencing the log of the price from one day to the next. Parameters of the model were estimated by Maximum Likelihood Estimation. After obtaining the volatility, a natural cubic spline was employed to study the behaviour of the volatility over the period. The result shows that GARCH(1,1) indicates evidence of volatility clustering in the returns of some Indonesian stock prices.
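For reference, a minimal maximum-likelihood fit of GARCH(1,1) to daily log returns might look as follows; the price series here is simulated as a stand-in, and the starting values and bounds are assumptions rather than the study's settings.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

# Log returns: r_t = log(P_t) - log(P_{t-1}); `prices` is simulated stand-in data.
rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
r = np.diff(np.log(prices))

res = minimize(garch11_neg_loglik, x0=[1e-5, 0.05, 0.90], args=(r,),
               bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)],
               method="L-BFGS-B")
omega, alpha, beta = res.x
print(omega, alpha, beta, "persistence:", alpha + beta)
```

The conditional volatility series used for the spline analysis is then simply the square root of the fitted sigma2 path.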
Sensitivity of black carbon concentrations and climate impact to aging and scavenging in OsloCTM2-M7
NASA Astrophysics Data System (ADS)
Lund, Marianne T.; Berntsen, Terje K.; Samset, Bjørn H.
2017-05-01
Accurate representation of black carbon (BC) concentrations in climate models is a key prerequisite for understanding its net climate impact. BC aging and scavenging are treated very differently in current models. Here, we examine the sensitivity of three-dimensional (3-D), temporally resolved BC concentrations to perturbations to individual model processes in the chemistry transport model OsloCTM2-M7. The main goals are to identify processes related to aerosol aging and scavenging where additional observational constraints may most effectively improve model performance, in particular for BC vertical profiles, and to give an indication of how model uncertainties in the BC life cycle propagate into uncertainties in climate impacts. Coupling OsloCTM2 with the microphysical aerosol module M7 allows us to investigate aging processes in more detail than possible with a simpler bulk parameterization. Here we include, for the first time in this model, a treatment of condensation of nitric acid on BC. Using kernels, we also estimate the range of radiative forcing and global surface temperature responses that may result from perturbations to key tunable parameters in the model. We find that BC concentrations in OsloCTM2-M7 are particularly sensitive to convective scavenging and the inclusion of condensation by nitric acid. The largest changes are found at higher altitudes around the Equator and at low altitudes over the Arctic. Convective scavenging of hydrophobic BC, and the amount of sulfate required for BC aging, are found to be key parameters, potentially reducing bias against HIAPER Pole-to-Pole Observations (HIPPO) flight-based measurements by 60 to 90%. Even for extensive tuning, however, the total impact on global-mean surface temperature is estimated to be less than 0.04 K. Similar results are found when nitric acid is allowed to condense on the BC aerosols. We conclude, in line with previous studies, that a shorter atmospheric BC lifetime broadly improves the comparison with measurements over the Pacific. However, we also find that the model-measurement discrepancies cannot be uniquely attributed to uncertainties in a single process or parameter. Model development therefore needs to be focused on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning of individual, effective parameters such as the global BC lifetime.
Jiao, Y.; Lapointe, N.W.R.; Angermeier, P.L.; Murphy, B.R.
2009-01-01
Models of species' demographic features are commonly used to understand population dynamics and inform management tactics. Hierarchical demographic models are ideal for the assessment of non-indigenous species because our knowledge of non-indigenous populations is usually limited, data on demographic traits often come from a species' native range, these traits vary among populations, and traits are likely to vary considerably over time as species adapt to new environments. Hierarchical models readily incorporate this spatiotemporal variation in species' demographic traits by representing demographic parameters as multi-level hierarchies. As is done for traditional non-hierarchical matrix models, sensitivity and elasticity analyses are used to evaluate the contributions of different life stages and parameters to estimates of population growth rate. We applied a hierarchical model to northern snakehead (Channa argus), a fish currently invading the eastern United States. We used a Monte Carlo approach to simulate uncertainties in the sensitivity and elasticity analyses and to project future population persistence under selected management tactics. We gathered key biological information on northern snakehead natural mortality, maturity and recruitment in its native Asian environment. We compared the model performance with and without hierarchy of parameters. Our results suggest that ignoring the hierarchy of parameters in demographic models may result in poor estimates of population size and growth and may lead to erroneous management advice. In our case, the hierarchy used multi-level distributions to simulate the heterogeneity of demographic parameters across different locations or situations. The probability that the northern snakehead population will increase and harm the native fauna is considerable. Our elasticity and prognostic analyses showed that intensive control efforts immediately prior to spawning and/or juvenile-dispersal periods would be more effective (and probably require less effort) than year-round control efforts. Our study demonstrates the importance of considering the hierarchy of parameters in estimating population growth rate and evaluating different management strategies for non-indigenous invasive species. © 2009 Elsevier B.V.
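The deterministic core of such an analysis (the eigenvalue, sensitivity and elasticity calculations that the hierarchical Monte Carlo layer wraps with parameter uncertainty) can be sketched for a hypothetical three-stage projection matrix; the entries below are illustrative only, not northern snakehead estimates.

```python
import numpy as np

# Hypothetical 3-stage (juvenile, subadult, adult) projection matrix.
A = np.array([[0.0, 0.5, 6.0],     # stage-specific fecundities
              [0.3, 0.4, 0.0],     # survival / transition probabilities
              [0.0, 0.5, 0.8]])

evals, W = np.linalg.eig(A)
i = np.argmax(evals.real)
lam = evals.real[i]                              # asymptotic growth rate
w = np.abs(W[:, i].real)                         # stable stage distribution

evals_t, V = np.linalg.eig(A.T)
j = np.argmax(evals_t.real)
v = np.abs(V[:, j].real)                         # reproductive values

S = np.outer(v, w) / (v @ w)                     # sensitivities d(lambda)/d(a_ij)
E = S * A / lam                                  # elasticities (sum to 1)

print("lambda =", round(lam, 3))
print("elasticities:\n", np.round(E, 3))
```

The hierarchical version draws each matrix entry from a multi-level distribution and repeats this calculation across the draws, which is what propagates among-population heterogeneity into the growth-rate and elasticity estimates.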
NASA Astrophysics Data System (ADS)
Sampath, D. M. R.; Boski, T.
2018-05-01
Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), current velocity power of the erosion rate function (N), and sea-level rise rate. An assessment of sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, the latter especially marked in the mudflats between mean sea level and low tide level, while accretion occurred mainly in a subtidal channel. The average SSE change depended jointly on the friction coefficient and the power of the current velocity. Analysis of the average annual SSE change suggests that the states of the intertidal and subtidal compartments of the estuarine system vary differently according to the dominant processes (erosion and accretion). As the Guadiana estuarine system shows dominant erosional behaviour in the context of sea-level rise and sediment supply reduction after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of simulation cells in the studied area. Such a limit of accuracy can be acceptable for idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost-effective HESM will be suitable for estimating the morphological impacts of sea-level rise on estuarine systems on a decadal timescale.
Relating phylogenetic trees to transmission trees of infectious disease outbreaks.
Ypma, Rolf J F; van Ballegooijen, W Marijn; Wallinga, Jacco
2013-11-01
Transmission events are the fundamental building blocks of the dynamics of any infectious disease. Much about the epidemiology of a disease can be learned when these individual transmission events are known or can be estimated. Such estimations are difficult and generally feasible only when detailed epidemiological data are available. The genealogy estimated from genetic sequences of sampled pathogens is another rich source of information on transmission history. Optimal inference of transmission events calls for the combination of genetic data and epidemiological data into one joint analysis. A key difficulty is that the transmission tree, which describes the transmission events between infected hosts, differs from the phylogenetic tree, which describes the ancestral relationships between pathogens sampled from these hosts. The trees differ both in timing of the internal nodes and in topology. These differences become more pronounced when a higher fraction of infected hosts is sampled. We show how the phylogenetic tree of sampled pathogens is related to the transmission tree of an outbreak of an infectious disease, by the within-host dynamics of pathogens. We provide a statistical framework to infer key epidemiological and mutational parameters by simultaneously estimating the phylogenetic tree and the transmission tree. We test the approach using simulations and illustrate its use on an outbreak of foot-and-mouth disease. The approach unifies existing methods in the emerging field of phylodynamics with transmission tree reconstruction methods that are used in infectious disease epidemiology.
Quantifying the transmission potential of pandemic influenza
NASA Astrophysics Data System (ADS)
Chowell, Gerardo; Nishiura, Hiroshi
2008-03-01
This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data are readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation methods, which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering a firm understanding of the definitions of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions, which suggest potential future methodological improvements.
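One widely used link between the intrinsic growth rate and the reproduction number assumes a gamma-distributed generation time; the sketch below uses assumed, illustrative values for the growth rate and generation-time moments rather than estimates from any historical pandemic.

```python
import numpy as np

def reproduction_number(r, mean_gt, shape):
    """R from the intrinsic growth rate r and a gamma-distributed generation
    time (mean mean_gt, shape parameter `shape`):
    R = (1 + r * mean_gt / shape) ** shape  (Wallinga & Lipsitch 2007)."""
    return (1.0 + r * mean_gt / shape) ** shape

r = 0.2          # per day, read off the exponential growth phase of an epidemic curve
mean_gt = 3.0    # days, mean generation time
for shape in (1, 3, 10):                 # shape=1 is the exponential special case
    print(shape, round(reproduction_number(r, mean_gt, shape), 2))
```

The spread of the printed values shows why the generation-time distribution, not just its mean, matters when converting an observed growth rate into a transmission-potential estimate.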
NASA Technical Reports Server (NTRS)
Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.
2015-01-01
The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
NASA Astrophysics Data System (ADS)
Cheng, Song; Zhang, Shengzhou; Zhang, Libo; Xia, Hongying; Peng, Jinhui; Wang, Shixing
2017-09-01
Eupatorium adenophorum, a globally distributed exotic weed, was utilized as feedstock for preparation of activated carbon (AC) via microwave-induced KOH activation. The influences of three vital process parameters - microwave power, activation time and impregnation ratio (IR) - on the adsorption capacity and yield of the AC were assessed. The process parameters were optimized utilizing the Design Expert software and were identified to be a microwave power of 700 W, an activation time of 15 min and an IR of 4, with the resultant iodine adsorption number and yield being 2,621 mg/g and 28.25%, respectively. The key parameters that characterize the AC, such as the Brunauer-Emmett-Teller (BET) surface area, total pore volume and average pore diameter, were estimated to be 3,918 m2/g, 2,383 ml/g and 2.43 nm, respectively, under the optimized process conditions. The surface characteristics of the AC were characterized by Fourier transform infrared spectroscopy, scanning electron microscopy and transmission electron microscopy.
NASA Astrophysics Data System (ADS)
Panhwar, Sher Khan; Liu, Qun; Khan, Fozia; Siddiqui, Pirzada J. A.
2012-03-01
Using surplus production model packages of ASPIC (a stock-production model incorporating covariates) and CEDA (Catch effort data analysis), we analyzed the catch and effort data of the Sillago sihama fishery in Pakistan. ASPIC estimates the parameters of MSY (maximum sustainable yield), Fmsy (fishing mortality), q (catchability coefficient), K (carrying capacity or unexploited biomass) and B1/K (ratio of initial biomass to carrying capacity). The estimated non-bootstrapped value of MSY based on the logistic model was 598 t and that based on the Fox model was 415 t, which showed that the Fox model estimation was more conservative than that with the logistic model. The R2 with the logistic model (0.702) is larger than that with the Fox model (0.541), which indicates a better fit. The coefficient of variation (cv) of the estimated MSY was about 0.3, except for a larger value of 88.87 and a smaller value of 0.173. In contrast to the ASPIC results, the R2 with the Fox model (0.651-0.692) was larger than that with the Schaefer model (0.435-0.567), indicating a better fit. The key parameters of CEDA are: MSY, K, q, and r (intrinsic growth rate), and the three error assumptions in using the models are normal, log-normal and gamma. Parameter estimates from the Schaefer and Pella-Tomlinson models were similar. The MSY estimations from the above two models were 398 t, 549 t and 398 t for normal, log-normal and gamma error distributions, respectively. The MSY estimates from the Fox model were 381 t, 366 t and 366 t for the above three error assumptions, respectively. The Fox model estimates were smaller than those for the Schaefer and the Pella-Tomlinson models. In light of the MSY estimations of 415 t from ASPIC for the Fox model and 381 t from CEDA for the Fox model, MSY for S. sihama is about 400 t. As the catch in 2003 was 401 t, we would suggest the fishery should be kept at the current level. Production models used here depend on the assumption that CPUE (catch per unit effort) data used in the study can reliably quantify temporal variability in population abundance, hence the modeling results would be wrong if such an assumption is not met. Because the reliability of this CPUE data in indexing fish population abundance is unknown, we should be cautious with the interpretation and use of the derived population and management parameters.
Assessment of the integrity of concrete bridge structures by acoustic emission technique
NASA Astrophysics Data System (ADS)
Yoon, Dong-Jin; Park, Philip; Jung, Juong-Chae; Lee, Seung-Seok
2002-06-01
This study was aimed at developing a new method for assessing the integrity of concrete structures. In particular, the acoustic emission (AE) technique was used in both laboratory experiments and field applications. From the previous laboratory study, we confirmed that AE analysis provided a promising approach for estimating the level of damage and distress in concrete structures. The Felicity ratio, one of the key parameters for assessing damage, exhibits a favorable correlation with the overall damage level. The total number of AE events under stepwise cyclic loading also showed good agreement with the damage level. In this study, the newly suggested technique was applied to several concrete bridges in Korea in order to verify its applicability in the field. The AE response was analyzed to obtain key parameters such as the total number and rate of AE events, per-event AE parameters, and the characteristic features of the waveform, as well as the Felicity ratio. A stepwise loading-unloading procedure for AE generation was introduced in the field tests by using vehicles of different weights. Depending on the condition of the bridge, for instance new or old, the AE event rate and AE generation behavior showed many different aspects. The results showed that the suggested analysis method would be a promising approach for assessing the integrity of concrete structures.
Linear elastic properties derivation from microstructures representative of transport parameters.
Hoang, Minh Tan; Bonnet, Guy; Tuan Luu, Hoang; Perrot, Camille
2014-06-01
It is shown that three-dimensional periodic unit cells (3D PUC) representative of transport parameters involved in the description of long wavelength acoustic wave propagation and dissipation through real foam samples may also be used as a standpoint to estimate their macroscopic linear elastic properties. Application of the model yields quantitative agreement between numerical homogenization results, available literature data, and experiments. Key contributions of this work include recognizing the importance of membranes and properties of the base material for the physics of elasticity. The results of this paper demonstrate that a 3D PUC may be used to understand and predict not only the sound absorbing properties of porous materials but also their transmission loss, which is critical for sound insulation problems.
Understanding which parameters control shallow ascent of silicic effusive magma
NASA Astrophysics Data System (ADS)
Thomas, Mark E.; Neuberg, Jurgen W.
2014-11-01
The estimation of the magma ascent rate is key to predicting volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. Linking potential changes of such parameters to monitoring data is an essential step to be able to use these data as a predictive tool. We present the results of a suite of conduit flow models for the Soufrière Hills Volcano that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. We show that variability in the rate of low frequency seismicity, assumed to correlate directly with the rate of magma movement, can be used as an indicator for changes in ascent rate and, therefore, eruptive activity. The results indicate that conduit diameter and excess pressure in the magma chamber are amongst the dominant controlling variables, but the single most important parameter is the volatile content (assumed to be water only). Varying this parameter over the range of reported values causes changes in the calculated ascent velocities of up to 800%.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
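The constraint statement is truncated in this excerpt; chance constraints of this kind typically require that the controlled reproduction number stay below one with high probability. A toy Monte Carlo check under homogeneous mixing, with an assumed lognormal prior on R0 and an assumed vaccine efficacy, illustrates the idea (it is not the paper's stochastic program).

```python
import numpy as np

rng = np.random.default_rng(4)

def chance_constraint_ok(v, r0_samples, efficacy=0.9, level=0.95):
    """Check P[(1 - efficacy*v) * R0 < 1] >= level under sampled R0 uncertainty."""
    r_eff = (1.0 - efficacy * v) * r0_samples
    return np.mean(r_eff < 1.0) >= level

# Uncertain R0 described by a lognormal distribution (illustrative values).
r0_samples = rng.lognormal(mean=np.log(1.8), sigma=0.2, size=100_000)

# Smallest vaccination fraction on a grid that meets the chance constraint.
v_grid = np.linspace(0.0, 1.0, 201)
v_min = next(v for v in v_grid if chance_constraint_ok(v, r0_samples))
print(round(v_min, 3))
```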
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a key safety issue to be addressed in future reactor safety assessments, and the estimates available at present are not satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information-gathering effort. The wide range of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
NASA Astrophysics Data System (ADS)
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems, e.g., in electric vehicles. An accurate estimate can help to improve system performance and to prolong the battery's remaining useful life. The main challenges for state estimation of LiFePO4 batteries are the flat open-circuit-voltage characteristic over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations in handling nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to better handling of the hysteresis problem, the results show the benefits of the proposed method over estimation with an Extended Kalman filter.
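A first-order RC equivalent-circuit model is a common process model inside such SOC estimators. The sketch below shows only the discrete-time state equations (placeholder parameters, no hysteresis term, no filter), with positive current taken as discharge.

```python
import numpy as np

# Placeholder cell parameters (illustrative, not fitted to any LiFePO4 cell).
Q  = 2.3 * 3600        # capacity [As]
R0 = 0.01              # ohmic resistance [ohm]
R1, C1 = 0.015, 2000.0 # RC pair: diffusion resistance [ohm], capacitance [F]
dt = 1.0               # sample time [s]

def ocv(soc):
    # Placeholder open-circuit voltage; real LFP curves are flatter and hysteretic.
    return 3.2 + 0.15 * soc

def step(soc, v_rc, current):
    """One discrete step of the 1-RC model; state = (SOC, RC-branch voltage)."""
    soc_next  = soc - current * dt / Q                        # coulomb counting
    v_rc_next = v_rc * np.exp(-dt / (R1 * C1)) \
                + R1 * (1 - np.exp(-dt / (R1 * C1))) * current
    v_term    = ocv(soc_next) - R0 * current - v_rc_next      # terminal voltage
    return soc_next, v_rc_next, v_term

# 1C discharge for 10 minutes starting from a full cell.
soc, v_rc = 1.0, 0.0
for _ in range(600):
    soc, v_rc, v = step(soc, v_rc, 2.3)
print(round(soc, 3), round(v, 3))
```

A dual estimator of the kind described above would use these state equations for particle propagation while treating Q, R0 and the RC parameters as slowly varying quantities to be estimated alongside the SOC.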
Hayashi, Atsuko; Emanovsky, Paul D; Pietrusewsky, Michael; Holland, Thomas D
2016-03-01
Stature, estimated from skeletonized remains, is one of the essential parameters in the development of a biological profile. A new procedure for determining skeletal height (SKH) incorporating the vertical space height (VSH) from the anterior margin of the sacral promontory to the superior margins of the acetabulae for use in the anatomical method of stature estimation is introduced. Regression equations for stature estimation were generated from measurements of 38 American males of European ancestry from the William M. Bass Donated Skeletal Collection. The modification to the procedure results in an SKH that is highly correlated with stature (r = 0.925-0.948). Stature estimates have low standard errors of the estimate ranging from 21.79 to 25.95 mm, biases from 0.50 to 0.94 mm, and accuracy rates from 17.71 mm to 19.45 mm. The procedure for determining the VSH, which replaces "S1 height" in traditional anatomical method models, is a key improvement to the method. © 2016 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Belica, L.; Petras, V.; Iiames, J. S., Jr.; Caldwell, P.; Mitasova, H.; Nelson, S. A. C.
2016-12-01
Water temperature is a key aspect of water quality and understanding how the thermal regimes of forested headwater streams may change in response to climatic and land cover changes is increasingly important to scientists and resource managers. In recent years, the forested mountain watersheds of the Southeastern U.S. have experienced changing climatic patterns as well as the loss of a keystone riparian tree species and anticipated hydrologic responses include lower summer stream flows and decreased stream shading. Solar radiation is the main source of thermal energy to streams and a key parameter in heat-budget models of stream temperature; a decrease in flow volume combined with a reduction in stream shading during summer have the potential to increase stream temperatures. The high spatial variability of forest canopies and the high spatio-temporal variability in sky conditions make estimating the solar radiation reaching small forested headwater streams difficult. The Subcanopy Solar Radiation Model (SSR) (Bode et al. 2014) is a GIS model that generates high resolution, spatially explicit estimates of solar radiation by incorporating topographic and vegetative shading with a light penetration index derived from leaf-on airborne LIDAR data. To evaluate the potential of the SSR model to provide estimates of stream insolation to parameterize heat-budget models, it was applied to the Coweeta Basin in the Southern Appalachians using airborne LIDAR (NCALM 2009, 1m resolution). The LIDAR derived canopy characteristics were compared to current hyperspectral images of the canopy for changes and the SSR estimates of solar radiation were compared with pyranometer measurements of solar radiation at several subcanopy sites during the summer of 2016. Preliminary results indicate the SSR model was effective in identifying variations in canopy density and light penetration, especially in areas associated with road and stream corridors and tree mortality. Current LIDAR data and more solar radiation measurements are needed to fully validate the accuracy of the SSR model in Southern Appalachian forests, but initial results suggest the high resolution, spatially explicit estimates of solar radiation can improve solar radiation parameter estimates in deterministic models of stream temperature in forested landscapes.
NASA Astrophysics Data System (ADS)
Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.
2016-04-01
Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.
NASA Astrophysics Data System (ADS)
Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan
2015-01-01
Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of the digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges resulted in different degrees of sensitivity for a specific parameter. The estimated porosity and tortuosity were more sensitive than the surface-area-to-volume ratio. Pore and neck size were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
NASA Astrophysics Data System (ADS)
Arellano, A. F., Jr.; Tang, W.
2017-12-01
Assimilating observational data of chemical constituents into a modeling system is a powerful approach in assessing changes in atmospheric composition and estimating associated emissions. However, the results of such chemical data assimilation (DA) experiments are largely subject to various key factors such as: a) a priori information, b) error specification and representation, and c) structural biases in the modeling system. Here we investigate the sensitivity of an ensemble-based data assimilation state and emission estimates to these key factors. We focus on investigating the assimilation performance of the Community Earth System Model (CESM)/CAM-Chem with the Data Assimilation Research Testbed (DART) in representing biomass burning plumes in Amazonia during the 2008 fire season. We conduct the following ensemble DA MOPITT CO experiments: 1) use of NCAR's monthly-average FINN surface fire emissions, 2) use of daily FINN surface fire emissions, 3) use of daily FINN emissions with climatological injection heights, and 4) use of perturbed FINN emission parameters to represent not only the uncertainties in combustion activity but also in combustion efficiency. We show key diagnostics of assimilation performance for these experiments and verify them against available ground-based and aircraft-based measurements.
Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Ghaffari, Azad
The power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) depend strongly on system dynamics and environmental parameters, e.g., solar irradiance, temperature, and wind speed. Power optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithms. Gradient-based Extremum Seeking (ES), as a non-model-based MPPT algorithm, governs the system to its peak point on the steepest descent curve regardless of changes of the system dynamics and variations of the environmental parameters. Since the power map shape defines the gradient vector, a close estimate of the power map shape is needed to create user-assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood around the MPP. The estimates of the inverse of the Hessian and of the gradient vector are the key parts needed to implement the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix. Also, we introduce a dynamic estimator to calculate the inverse of the Hessian, which is an essential part of our algorithm. We present various simulations and experiments on micro-converter PV systems to verify the validity of our proposed algorithm. The ES scheme can also be used in combination with other control algorithms to achieve desired closed-loop performance. The WECS dynamics are slow, which causes an even slower response time for MPPT based on ES. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), in combination with feedback linearization to reduce the convergence time of the closed-loop system. Furthermore, the nonlinear control prevents magnetic saturation of the stator of the Induction Generator (IG). The proposed control algorithm in combination with the ES guarantees the closed-loop system robustness with respect to high-level parameter uncertainty in the IG dynamics. The simulation results verify the effectiveness of the proposed algorithm.
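The perturbation-based gradient ES loop can be sketched on a static quadratic power map, used here as an assumed stand-in for a PV curve; the dither amplitude, frequencies and gain are illustrative tuning choices, not values from this work.

```python
import numpy as np

def power(u):
    # Assumed static power map with its maximum at u* = 0.7 (not a real PV curve).
    return 100.0 - 400.0 * (u - 0.7) ** 2

dt = 1e-3
a, omega = 0.01, 2 * np.pi * 50.0     # dither amplitude and frequency
k, wh = 4.0, 2 * np.pi * 5.0          # adaptation gain, washout cut-off

u_hat, eta = 0.2, power(0.2)          # parameter estimate, low-pass filter state
for i in range(100_000):
    t = i * dt
    y = power(u_hat + a * np.sin(omega * t))     # perturb and measure
    eta += dt * wh * (y - eta)                   # low-pass (washout) of the output
    grad_est = (y - eta) * np.sin(omega * t)     # demodulation ~ (a/2) * dP/du
    u_hat += dt * k * grad_est                   # gradient ascent on the power map

print(round(u_hat, 3))   # settles near the maximizer 0.7
```

A Newton-based variant of the kind described above would additionally estimate the map's curvature from the dither and replace the fixed gain k with the estimated inverse Hessian, making the transient independent of that curvature.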
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, Ks, n, θr and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from its distribution. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters, and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on the electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
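Latin hypercube sampling of the van Genuchten-Mualem parameters, followed by evaluation of the retention curve for each sample, can be sketched as below; the parameter bounds and the fixed saturated water content are assumptions for illustration, not the values used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative bounds: Ks [m/day], n [-], theta_r [-], alpha [1/m].
l_bounds = [0.01, 1.1, 0.01, 0.5]
u_bounds = [5.00, 3.0, 0.10, 5.0]

sampler = qmc.LatinHypercube(d=4, seed=0)
samples = qmc.scale(sampler.random(n=200), l_bounds, u_bounds)

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten retention curve with m = 1 - 1/n, pressure head h < 0."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

# Water content at h = -1 m for every sampled set (theta_s fixed at 0.40).
theta = np.array([vg_theta(-1.0, s[2], 0.40, s[3], s[1]) for s in samples])
print(theta.min().round(3), theta.max().round(3))
```

In the coupled scheme, each sampled parameter set drives a forward infiltration model whose water-content field is converted to apparent resistivity and compared directly with the time-lapse sounding data, so no resistivity inversion is required.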
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
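A generic dual-Monod rate expression of the kind referred to here, in which degradation is limited by both the substrate and the electron acceptor, can be written as in the sketch below; the rate constants are illustrative placeholders and the exact formulation (including microbial growth) used in BIO3D may differ.

def dual_monod_rate(c_substrate, c_oxygen, mu_max=0.5, k_s=1.0, k_o=0.1, biomass=1.0, yield_coeff=0.5):
    """Substrate degradation rate [mg/L/day] limited by both the organic substrate
    and the electron acceptor (oxygen), following a generic dual-Monod form."""
    return (mu_max / yield_coeff) * biomass \
        * c_substrate / (k_s + c_substrate) \
        * c_oxygen / (k_o + c_oxygen)

# Degradation slows sharply when oxygen is nearly depleted:
print(dual_monod_rate(c_substrate=5.0, c_oxygen=8.0))   # well-oxygenated
print(dual_monod_rate(c_substrate=5.0, c_oxygen=0.05))  # oxygen-limited

The oxygen term is what simple zero- or first-order laboratory rates omit, which is consistent with the overprediction described above when such rates are transferred to the field scale.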
NASA Technical Reports Server (NTRS)
Reichle, Rolf H.; De Lannoy, Gabrielle J. M.
2012-01-01
The Soil Moisture and Ocean Salinity (SMOS) satellite mission provides global measurements of L-band brightness temperatures at horizontal and vertical polarization and a variety of incidence angles that are sensitive to moisture and temperature conditions in the top few centimeters of the soil. These L-band observations can therefore be assimilated into a land surface model to obtain surface and root zone soil moisture estimates. As part of the observation operator, such an assimilation system requires a radiative transfer model (RTM) that converts geophysical fields (including soil moisture and soil temperature) into modeled L-band brightness temperatures. At the global scale, the RTM parameters and the climatological soil moisture conditions are still poorly known. Using look-up tables from the literature to estimate the RTM parameters usually results in modeled L-band brightness temperatures that are strongly biased against the SMOS observations, with biases varying regionally and seasonally. Such biases must be addressed within the land data assimilation system. In this presentation, the estimation of the RTM parameters is discussed for the NASA GEOS-5 land data assimilation system, which is based on the ensemble Kalman filter (EnKF) and the Catchment land surface model. In the GEOS-5 land data assimilation system, soil moisture and brightness temperature biases are addressed in three stages. First, the global soil properties and soil hydraulic parameters that are used in the Catchment model were revised to minimize the bias in the modeled soil moisture, as verified against available in situ soil moisture measurements. Second, key parameters of the "tau-omega" RTM were calibrated prior to data assimilation using an objective function that minimizes the climatological differences between the modeled L-band brightness temperatures and the corresponding SMOS observations. Calibrated parameters include soil roughness parameters, vegetation structure parameters, and the single scattering albedo. After this climatological calibration, the modeling system can provide L-band brightness temperatures with a global mean absolute bias of less than 10K against SMOS observations, across multiple incidence angles and for horizontal and vertical polarization. Third, seasonal and regional variations in the residual biases are addressed by estimating the vegetation optical depth through state augmentation during the assimilation of the L-band brightness temperatures. This strategy, tested here with SMOS data, is part of the baseline approach for the Level 4 Surface and Root Zone Soil Moisture data product from the planned Soil Moisture Active Passive (SMAP) satellite mission.
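For reference, a common zeroth-order form of the tau-omega model is sketched below; the soil emissivity is treated as given (in the full system it follows from soil moisture via a dielectric mixing model plus a roughness correction), and all numbers are placeholders rather than GEOS-5 calibration values.

import numpy as np

def tau_omega_tb(soil_temp_k, canopy_temp_k, soil_emissivity, tau_nadir, omega, incidence_deg):
    """Zeroth-order tau-omega brightness temperature [K] for one polarization.
    Soil emissivity is assumed given here; in a full RTM it follows from soil
    moisture via a dielectric mixing model plus a roughness correction."""
    gamma = np.exp(-tau_nadir / np.cos(np.radians(incidence_deg)))  # canopy transmissivity
    soil_term = soil_emissivity * soil_temp_k * gamma
    canopy_term = canopy_temp_k * (1.0 - omega) * (1.0 - gamma) * (1.0 + (1.0 - soil_emissivity) * gamma)
    return soil_term + canopy_term

# Illustrative values (placeholders): wetter soil -> lower emissivity -> lower TB
print(tau_omega_tb(295.0, 293.0, soil_emissivity=0.75, tau_nadir=0.12, omega=0.05, incidence_deg=40.0))

The calibrated quantities mentioned above (roughness parameters, vegetation structure parameters, single scattering albedo) enter through the emissivity correction, the optical depth tau and omega in expressions of this kind.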
Using Diurnal Temperature Signals to Infer Vertical Groundwater-Surface Water Exchange.
Irvine, Dylan J; Briggs, Martin A; Lautz, Laura K; Gordon, Ryan P; McKenzie, Jeffrey M; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of available computing tools. Recent advances in diurnal temperature methods also make it possible to determine the local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling, and the effective sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer. © 2016, National Ground Water Association.
Size-related bioconcentration kinetics of hydrophobic chemicals in fish
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sijm, D.T.H.M.; Linde, A. van der
1994-12-31
Uptake and elimination of hydrophobic chemicals by fish can be regarded as passive diffusive transport processes. Diffusion coefficients, lipid/water partitioning, diffusion path lengths, concentration gradients and surface exchange areas are key parameters describing this bioconcentration process. In the present study, two of these parameters were examined: the influence of lipid/water partitioning, by using chemicals of different hydrophobicity, and the surface exchange area, by using fish of different sizes. By using one species of fish it was assumed that all other parameters were kept constant. Seven age classes of fish were exposed to a series of hydrophobic chemicals for five days, followed by a depuration phase lasting up to 6 months. Bioconcentration parameters, such as uptake and elimination rate constants, and bioconcentration factors were determined. Uptake of the hydrophobic compounds was compared to that of oxygen. Uptake and elimination rates were compared to weight and estimated (gill) exchange areas. The role of weight and its implications for extrapolations of bioconcentration parameters to other species and sizes will be discussed.
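A minimal one-compartment sketch of the uptake/depuration kinetics described here is given below; the rate constants, water concentration and exposure length are illustrative placeholders, and the bioconcentration factor is taken as the ratio of the uptake and elimination rate constants.

import numpy as np

def fish_concentration(t_days, c_water, k_uptake, k_elim, t_exposure=5.0):
    """One-compartment bioconcentration model: first-order uptake from water
    followed by first-order depuration after the exposure phase ends."""
    bcf = k_uptake / k_elim
    c_end = bcf * c_water * (1.0 - np.exp(-k_elim * t_exposure))
    if t_days <= t_exposure:
        return bcf * c_water * (1.0 - np.exp(-k_elim * t_days))
    return c_end * np.exp(-k_elim * (t_days - t_exposure))

# Illustrative rate constants (placeholders); smaller fish typically have larger
# gill exchange area per unit weight and hence larger uptake rate constants.
for t in (1, 5, 30, 180):
    print(t, round(fish_concentration(t, c_water=1.0, k_uptake=200.0, k_elim=0.05), 2))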
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Margane, Armin; Geyer, Tobias; Sauter, Martin
2018-03-01
Artificial tracer experiments were conducted in the mature karst system of Jeita (Lebanon) under various flow conditions using surface and subsurface tracer injection points, to determine the variation of transport parameters (attenuation of peak concentration, velocity, transit times, dispersivity, and proportion of immobile and mobile regions) along fast and slow flow pathways. Tracer breakthrough curves (TBCs) observed at the karst spring were interpreted using a two-region nonequilibrium approach (2RNEM) to account for the skewness in the TBCs' long tailings. The conduit test results revealed a discharge threshold in the system dynamics, beyond which the transport parameters vary significantly. The polynomial relationship between transport velocity and discharge can be related to the variation of the conduit's cross-sectional area. Longitudinal dispersivity in the conduit system is not a constant value (α = 7-10 m) and decreases linearly with increasing flow rate because of dilution effects. Additionally, the proportion of immobile regions (arising from conduit irregularities) increases with decreasing water level in the conduit system. From tracer tests with injection at the surface, longitudinal dispersivity values are found to be large (8-27 m). The tailing observed in some TBCs is generated in the unsaturated zone before the tracer actually arrives at the major subsurface conduit draining the system. This work allows the estimation and prediction of the key transport parameters in karst aquifers. It shows that these parameters vary with time and flow dynamics, and they reflect the geometry of the flow pathway and the origin of infiltrating (potentially contaminated) recharge.
Monitoring population and environmental parameters of invasive mosquito species in Europe
2014-01-01
To enable a better understanding of the far-reaching changes in invasive mosquito species (IMS) populations, methodical insight into the population and environmental factors that govern IMS and pathogen adaptations is essential. There are numerous ways of estimating mosquito populations, and usually these describe developmental and life-history parameters. The key population parameters that should be considered during the surveillance of invasive mosquito species are: (1) population size and dynamics during the season, (2) longevity, (3) biting behaviour, and (4) dispersal capacity. Knowledge of these parameters coupled with vector competence may help to determine the vectorial capacity of IMS and the basic disease reproduction number (R0) to support mosquito-borne disease (MBD) risk assessment. Similarly, environmental factors include availability and type of larval breeding containers, climate change, environmental change, human population density, increased human travel and goods transport, changes in living, agricultural and farming habits (e.g. land use), and reduction of resources in the life cycle of mosquitoes by interventions (e.g. source reduction of aquatic habitats). Human population distributions, urbanisation, and human population movement are the key behavioural factors in most IMS-transmitted diseases. Anthropogenic factors related to the global spread of MBD include the introduction, reintroduction and circulation of IMS and increased human exposure to infected mosquito bites. This review addresses the population and environmental factors underlying the growing changes in IMS populations in Europe and discusses the parameters selected according to criteria of applicability. In addition, an overview of commonly used and newly developed tools for their monitoring is provided. PMID:24739334
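As an example of how the population parameters listed above combine into vectorial capacity, a sketch of the classic Garrett-Jones expression is shown below; the input values are placeholders and the review itself does not prescribe this exact formulation.

import math

def vectorial_capacity(m, a, b, p, n):
    """Classic Garrett-Jones vectorial capacity: expected number of infectious bites
    eventually arising per infectious person per day.
    m: vector density per human, a: daily human-biting rate, b: vector competence,
    p: daily vector survival probability, n: extrinsic incubation period (days)."""
    return m * a**2 * b * p**n / (-math.log(p))

# Illustrative values (placeholders) showing the strong sensitivity to longevity p:
print(vectorial_capacity(m=20, a=0.3, b=0.5, p=0.85, n=10))
print(vectorial_capacity(m=20, a=0.3, b=0.5, p=0.95, n=10))

The strong dependence on survival probability and biting rate is why longevity and biting behaviour appear among the key surveillance parameters above.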
Fitting a Two-Component Scattering Model to Polarimetric SAR Data from Forests
NASA Technical Reports Server (NTRS)
Freeman, Anthony
2007-01-01
Two simple scattering mechanisms are fitted to polarimetric synthetic aperture radar (SAR) observations of forests. The mechanisms are canopy scatter from a reciprocal medium with azimuthal symmetry and a ground scatter term that can represent double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants or Bragg scatter from a moderately rough surface, which is seen through a layer of vertically oriented scatterers. The model is shown to represent the behavior of polarimetric backscatter from a tropical forest and two temperate forest sites by applying it to data from the National Aeronautics and Space Administration/Jet Propulsion Laboratory's Airborne SAR (AIRSAR) system. Scattering contributions from the two basic scattering mechanisms are estimated for clusters of pixels in polarimetric SAR images. The solution involves the estimation of four parameters from four separate equations. This model fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem. The model is used to develop an understanding of the ground-trunk double-bounce scattering that is present in the data, which is seen to vary considerably as a function of incidence angle. Two parameters in the model fit appear to exhibit sensitivity to vegetation canopy structure, which is worth further exploration. Results from the model fit for the ground scattering term are compared with estimates from a forward model and shown to be in good agreement. The behavior of the scattering from the ground-trunk interaction is consistent with the presence of a pseudo-Brewster angle effect for the air-trunk scattering interface. If the Brewster angle is known, it is possible to directly estimate the real part of the dielectric constant of the trunks, a key variable in forward modeling of backscatter from forests. It is also shown how, with a priori knowledge of the forest height, an estimate for the attenuation coefficient of the canopy can be obtained directly from the multi-incidence-angle polarimetric observations. This attenuation coefficient is another key variable in forward models and is generally related to the canopy density.
NASA Astrophysics Data System (ADS)
Hou, W. Z.; Li, Z. Q.; Zheng, F. X.; Qie, L. L.
2018-04-01
This paper evaluates the information content for the retrieval of key aerosol microphysical and surface properties from multispectral single-viewing satellite polarimetric measurements centered at 410, 443, 555, 670, 865, 1610 and 2250 nm over bright land. To conduct the information content analysis, synthetic data are simulated by the Unified Linearized Vector Radiative Transfer Model (UNLVTM), with intensity and polarization together, over a bare soil surface for various scenarios. Following optimal estimation theory, a principal component analysis method is employed to reconstruct the multispectral surface reflectance from 410 nm to 2250 nm, and then integrated with a linear one-parameter BPDF model to represent the contribution of polarized surface reflectance, thus decoupling the surface and atmospheric contributions in the TOA measurements. Focusing on two different aerosol models with the aerosol optical depth equal to 0.8 at 550 nm, the total DFS and the DFS component of each retrieved aerosol and surface parameter are analysed. The DFS results show that the key aerosol microphysical properties, such as the fine- and coarse-mode columnar volume concentration, the effective radius and the real part of the complex refractive index at 550 nm, could be retrieved well, simultaneously with the surface parameters, over the bare soil surface type. The findings of this study provide guidance for inversion algorithm development over bright land surfaces by making full use of single-viewing satellite polarimetric measurements.
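A compact way to compute DFS in the optimal-estimation framework referred to here is via the averaging kernel; the sketch below assumes a linearized Jacobian and Gaussian prior and observation covariances with placeholder values, not the UNLVTM-based setup of the paper.

import numpy as np

def degrees_of_freedom_for_signal(jacobian, obs_cov, prior_cov):
    """Averaging-kernel based DFS (optimal estimation):
    A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K; total DFS = trace(A),
    and the diagonal gives the DFS component of each retrieved parameter."""
    K = np.asarray(jacobian)
    se_inv = np.linalg.inv(obs_cov)
    sa_inv = np.linalg.inv(prior_cov)
    gain = np.linalg.inv(K.T @ se_inv @ K + sa_inv) @ K.T @ se_inv
    A = gain @ K
    return np.trace(A), np.diag(A)

# Toy example (placeholder numbers): 6 measurements, 3 retrieved parameters
rng = np.random.default_rng(0)
K = rng.normal(size=(6, 3))
total, per_param = degrees_of_freedom_for_signal(K, obs_cov=0.01 * np.eye(6), prior_cov=np.eye(3))
print(total, per_param)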
Trilogy, a Planetary Geodesy Mission Concept for Measuring the Expansion of the Solar System.
Smith, David E; Zuber, Maria T; Mazarico, Erwan; Genova, Antonio; Neumann, Gregory A; Sun, Xiaoli; Torrence, Mark H; Mao, Dan-Dan
2018-04-01
The scale of the solar system is slowly changing, likely increasing as a result of solar mass loss, with additional change possible if there is a secular variation of the gravitational constant, G . The measurement of the change of scale could provide insight into the past and the future of the solar system, and in addition a better understanding of planetary motion and fundamental physics. Estimates for the expansion of the scale of the solar system are of order 1.5 cm year -1 AU -1 , which over several years is an observable quantity with present-day laser ranging systems. This estimate suggests that laser measurements between planets could provide an accurate estimate of the solar system expansion rate. We examine distance measurements between three bodies in the inner solar system -- Earth's Moon, Mars and Venus -- and outline a mission concept for making the measurements. The concept involves placing spacecraft that carry laser ranging transponders in orbit around each body and measuring the distances between the three spacecraft over a period of several years. The analysis of these range measurements would allow the co-estimation of the spacecraft orbit, planetary ephemerides, other geophysical parameters related to the constitution and dynamics of the central bodies, and key geodetic parameters related to the solar system expansion, the Sun, and theoretical physics.
Spatial dynamics of the 1918 influenza pandemic in England, Wales and the United States.
Eggo, Rosalind M; Cauchemez, Simon; Ferguson, Neil M
2011-02-06
There is still limited understanding of key determinants of spatial spread of influenza. The 1918 pandemic provides an opportunity to elucidate spatial determinants of spread on a large scale. To better characterize the spread of the 1918 major wave, we fitted a range of city-to-city transmission models to mortality data collected for 246 population centres in England and Wales and 47 cities in the US. Using a gravity model for city-to-city contacts, we explored the effect of population size and distance on the spread of disease and tested assumptions regarding density dependence in connectivity between cities. We employed Bayesian Markov Chain Monte Carlo methods to estimate parameters of the model for population, infectivity, distance and density dependence. We inferred the most likely transmission trees for both countries. For England and Wales, a model that estimated the degree of density dependence in connectivity between cities was preferable by deviance information criterion comparison. Early in the major wave, long distance infective interactions predominated, with local infection events more likely as the epidemic became widespread. For the US, with fewer more widely dispersed cities, statistical power was lacking to estimate population size dependence or the degree of density dependence, with the preferred model depending on distance only. We find that parameters estimated from the England and Wales dataset can be applied to the US data with no likelihood penalty.
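To illustrate the gravity-model idea, a minimal sketch of a distance- and population-dependent coupling matrix is given below; the exponents, populations and distances are placeholders, and the paper's full likelihood (including density dependence and MCMC fitting) is not reproduced.

import numpy as np

def gravity_kernel(pop, dist, beta=0.9, gamma=2.0, eps=1e-12):
    """Generic gravity-model coupling: the force of infection exerted by city i on
    city j scales with donor and recipient population sizes and decays with distance.
    (The paper's full model adds density dependence and is fitted by MCMC.)"""
    coupling = (pop[:, None] ** beta) * (pop[None, :] ** beta) / (dist ** gamma + eps)
    np.fill_diagonal(coupling, 0.0)
    return coupling

pop = np.array([8000e3, 500e3, 50e3])                                         # illustrative populations
dist = np.array([[0, 120, 300], [120, 0, 200], [300, 200, 0]], dtype=float)   # km
print(gravity_kernel(pop, dist))

In a fitted model, the exponents controlling population and distance dependence are exactly the kind of parameters estimated by the Bayesian MCMC procedure described above.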
Sonka, Milan; Abramoff, Michael D.
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the assumed distribution of the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the noise-free 3D complex wavelet coefficients is proposed that models their main statistical properties. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters, which captures the heavy-tailed behaviour and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained using Gaussian or two-sided Rayleigh noise distributions and homomorphic or nonhomomorphic models. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator with the local bivariate mixture prior is the nonhomomorphic model in the presence of Gaussian noise, which yields an improvement of 7.8 ± 1.7 in CNR. PMID:24222760
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of the estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be due to reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
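A toy sketch of simultaneous state and parameter estimation by ensemble filtering is given below; it uses a perturbed-observation EnKF update on a parameter-augmented ensemble rather than the deterministic square-root (EnSRF) variant used in the study, and the forward operator and all numbers are placeholders.

import numpy as np

def enkf_augmented_update(state_ens, param_ens, obs, obs_err_var, h_operator):
    """One stochastic EnKF analysis step on an ensemble augmented with a parameter.
    The study uses the deterministic square-root variant (EnSRF); a perturbed-
    observation update is shown here for brevity."""
    rng = np.random.default_rng(1)
    aug = np.vstack([state_ens, param_ens])          # (n_state + n_param, n_members)
    hx = h_operator(state_ens, param_ens)            # simulated observations (1, n_members)
    aug_anom = aug - aug.mean(axis=1, keepdims=True)
    hx_anom = hx - hx.mean(axis=1, keepdims=True)
    n = aug.shape[1]
    cov_xy = aug_anom @ hx_anom.T / (n - 1)
    var_y = (hx_anom @ hx_anom.T).item() / (n - 1) + obs_err_var
    gain = cov_xy / var_y                            # Kalman gain for a scalar observation
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=(1, n))
    aug += gain @ (perturbed_obs - hx)
    n_state = state_ens.shape[0]
    return aug[:n_state], aug[n_state:]

# Toy example: the unknown "parameter" scales the state into observation space,
# so ensemble correlations pull the parameter toward the value (~2) consistent with obs.
rng = np.random.default_rng(0)
state = rng.normal(5.0, 1.0, size=(1, 40))
param = rng.normal(1.5, 0.5, size=(1, 40))
h = lambda x, p: p * x
state, param = enkf_augmented_update(state, param, obs=10.0, obs_err_var=0.25, h_operator=h)
print(param.mean())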
NASA Astrophysics Data System (ADS)
Rigden, Angela J.; Salvucci, Guido D.
2015-04-01
A novel method of estimating evapotranspiration (ET), referred to as the ETRHEQ method, is further developed, validated, and applied across the U.S. from 1961 to 2010. The ETRHEQ method estimates the surface conductance to water vapor transport, which is the key rate-limiting parameter of typical ET models, by choosing the surface conductance that minimizes the vertical variance of the calculated relative humidity profile averaged over the day. The ETRHEQ method, which was previously tested at five AmeriFlux sites, is modified for use at common weather stations and further validated at 20 AmeriFlux sites that span a wide range of climates and limiting factors. Averaged across all sites, the daily latent heat flux RMSE is ~26 W·m-2 (or 15%). The method is applied across the U.S. at 305 weather stations and spatially interpolated using ANUSPLIN software. Gridded annual mean ETRHEQ ET estimates are compared with four data sets, including water balance-derived ET, machine-learning ET estimates based on FLUXNET data, North American Land Data Assimilation System project phase 2 ET, and a benchmark product that integrates 14 global ET data sets, with RMSEs ranging from 8.7 to 12.5 cm·yr-1. The ETRHEQ method relies only on data measured at weather stations, an estimate of vegetation height derived from land cover maps, and an estimate of soil thermal inertia. These data requirements allow it to have greater spatial coverage than direct measurements, greater historical coverage than satellite methods, significantly less parameter specification than most land surface models, and no requirement for calibration.
Challenges of model transferability to data-scarce regions (Invited)
NASA Astrophysics Data System (ADS)
Samaniego, L. E.
2013-12-01
Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSMs) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also severely restrict the ability to exhaustively investigate the parameter space of LSMs over large domains (e.g., greater than half a million square kilometres). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameter regionalisation (MPR) technique, which links high-resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and far fewer global parameters (i.e., coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which makes it possible to estimate global parameters at coarser spatial resolutions and then transfer them to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparsely to densely forested regions. Results on the transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g. EOBS forcing 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high resolution information (REGNIE forcing 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
Can we reliably estimate managed forest carbon dynamics using remotely sensed data?
NASA Astrophysics Data System (ADS)
Smallman, Thomas Luke; Exbrayat, Jean-Francois; Bloom, A. Anthony; Williams, Mathew
2015-04-01
Forests are an important part of the global carbon cycle, serving as both a large store of carbon and currently as a net sink of CO2. Forest biomass varies significantly in time and space, linked to climate, soils, natural disturbance and human impacts. This variation means that the global distribution of forest biomass and its dynamics are poorly quantified. Terrestrial ecosystem models (TEMs) are rarely evaluated for their predictions of forest carbon stocks and dynamics, due to a lack of knowledge of site-specific factors such as disturbance dates and/or managed interventions. In this regard, managed forests present a valuable opportunity for model calibration and improvement. Spatially explicit datasets of planting dates, species and yield classification, in combination with remote sensing data and an appropriate data assimilation (DA) framework, can reduce prediction uncertainty and error. We use a Bayesian approach to calibrate the data assimilation linked ecosystem carbon (DALEC) model using a Metropolis Hastings-Markov Chain Monte Carlo (MH-MCMC) framework. Forest management information is incorporated into the data assimilation framework as part of ecological and dynamic constraints (EDCs). The key advantage here is that DALEC simulates a full carbon balance, not just the living biomass, and that both parameter and prediction uncertainties are estimated as part of the DA analysis. DALEC has been calibrated at two managed forests, in the USA (Pinus taeda; Duke Forest) and UK (Picea sitchensis; Griffin Forest). At each site DALEC is calibrated twice (exp1 & exp2). Both calibrations (exp1 & exp2) assimilated MODIS LAI and HWSD estimates of soil carbon stored in soil organic matter, in addition to common management information and prior knowledge included in parameter priors and the EDCs. Calibration exp1 also utilises multiple site-level estimates of carbon storage in multiple pools. By comparing simulations we determine the impact of site-level observations on prediction uncertainty and error, and which observations are key to constraining ecosystem processes. Preliminary simulations indicate that DALEC calibration exp1 accurately simulated the assimilated observations for forest and soil carbon stock estimates including, critically for forestry, standing wood stocks (R2 = 0.92, bias = -4.46 MgC ha-1, RMSE = 5.80 MgC ha-1). The results from exp1 indicate the model is able to find parameters that are consistent with both the EDCs and the observations. In the absence of site-level stock observations (exp2), DALEC accurately estimates foliage and fine root pools, while the median estimates of above-ground litter and wood stocks (R2 = 0.92, bias = -48.30 MgC ha-1, RMSE = 50.30 MgC ha-1) are over- and underestimated, respectively; the site-level observations nevertheless fall within the model uncertainty. These results indicate that we can estimate managed forest dynamics using remotely sensed data, particularly as remotely sensed above-ground biomass maps become available to provide constraint to correct biases in woody accumulation.
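A minimal random-walk Metropolis-Hastings sketch of the calibration idea is shown below; the "model", prior and observation are trivial placeholders, and the study's EDC-constrained proposals and full DALEC carbon balance are not represented.

import numpy as np

def metropolis_hastings(log_posterior, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler (generic sketch; the study's MH-MCMC
    additionally enforces ecological and dynamic constraints, EDCs, on proposals)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = []
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, step, size=theta.shape)
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy target: one "turnover rate" parameter with a lognormal prior and a Gaussian
# mismatch to a synthetic wood-stock observation (all numbers are placeholders).
def log_posterior(theta):
    rate = theta[0]
    if rate <= 0:
        return -np.inf
    prior = -0.5 * ((np.log(rate) - np.log(0.02)) / 0.5) ** 2
    predicted_stock = 100.0 * np.exp(-rate * 30.0)       # deliberately trivial "model"
    likelihood = -0.5 * ((predicted_stock - 55.0) / 5.0) ** 2
    return prior + likelihood

chain = metropolis_hastings(log_posterior, theta0=[0.05], step=0.005)
print(chain[2000:].mean(), chain[2000:].std())

The posterior spread of the chain is what provides the parameter and prediction uncertainties referred to above.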
Anticipating abrupt shifts in temporal evolution of probability of eruption
NASA Astrophysics Data System (ADS)
Rohmer, J.; Loschetter, A.
2016-04-01
Estimating the probability of eruption by jointly accounting for different sources of monitoring parameters over time is a key component of volcano risk management. In the present study, we are interested in the transition from a state of low-to-moderate probability to a state of high probability. Using data from the MESIMEX exercise at the Vesuvius volcano, we investigated the potential of time-varying indicators related to the correlation structure or the variability of the probability time series to detect this critical transition in advance. We found that changes in the power spectra and in the standard deviation estimated over a rolling time window both show an abrupt increase that marks the approaching shift. Our numerical experiments revealed that the transition from an eruption probability of 10-15% to >70% could be identified up to 1-3 h in advance. This additional lead time could be useful to place different key services (e.g., emergency services for vulnerable groups, commandeering additional transportation means, etc.) on a higher level of alert before the actual call for evacuation.
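The rolling-window indicators can be computed as in the sketch below, using the standard deviation and lag-1 autocorrelation (a time-domain proxy for the spectral changes mentioned above); the synthetic probability series and window length are placeholders.

import numpy as np
import pandas as pd

def early_warning_indicators(prob_series, window=50):
    """Rolling-window variability and memory indicators for an eruption-probability
    time series; an abrupt rise in either may flag an approaching transition."""
    s = pd.Series(prob_series)
    rolling_std = s.rolling(window).std()
    rolling_ac1 = s.rolling(window).apply(lambda w: w.autocorr(lag=1), raw=False)
    return rolling_std, rolling_ac1

# Synthetic series (placeholder): quiet phase, then growing fluctuations before a jump.
rng = np.random.default_rng(0)
quiet = 0.12 + 0.01 * rng.normal(size=300)
ramp = 0.15 + 0.05 * rng.normal(size=100) * np.linspace(1, 3, 100)
series = np.concatenate([quiet, ramp, np.full(20, 0.75)])
std, ac1 = early_warning_indicators(series)
print(std.iloc[280], std.iloc[395])   # variability increases ahead of the shift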
Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Kawai, Makiko; Onishi, Natsuko; Koyasu, Sho; Murata, Katsutoshi; Ohashi, Akane; Sakaguchi, Rena; Togashi, Kaori
2018-01-01
We prospectively examined the variability of non-Gaussian diffusion magnetic resonance imaging (MRI) and intravoxel incoherent motion (IVIM) measurements with different numbers of b-values and excitations in normal breast tissue and breast lesions. Thirteen volunteers and fourteen patients with breast lesions (seven malignant, eight benign; one patient had bilateral lesions) were recruited in this prospective study (approved by the Internal Review Board). Diffusion-weighted MRI was performed with 16 b-values (0-2500 s/mm2 with one number of excitations [NEX]) and five b-values (0-2500 s/mm2, 3 NEX), using a 3T breast MRI. Intravoxel incoherent motion (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion (theoretical apparent diffusion coefficient [ADC] at b value of 0 sec/mm2 [ADC0] and kurtosis [K]) parameters were estimated from IVIM and Kurtosis models using 16 b-values, and synthetic apparent diffusion coefficient (sADC) values were obtained from two key b-values. The variabilities between and within subjects and between different diffusion acquisition methods were estimated. There were no statistical differences in ADC0, K, or sADC values between the different b-values or NEX. A good agreement of diffusion parameters was observed between 16 b-values (one NEX), five b-values (one NEX), and five b-values (three NEX) in normal breast tissue or breast lesions. Insufficient agreement was observed for IVIM parameters. There were no statistical differences in the non-Gaussian diffusion MRI estimated values obtained from a different number of b-values or excitations in normal breast tissue or breast lesions. These data suggest that a limited MRI protocol using a few b-values might be relevant in a clinical setting for the estimation of non-Gaussian diffusion MRI parameters in normal breast tissue and breast lesions.
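For reference, the signal models and the two-b-value synthetic ADC referred to here can be written as in the sketch below; the b-values and tissue parameters are illustrative placeholders rather than the protocol's actual key b-values.

import numpy as np

def kurtosis_signal(b, s0, adc0, k):
    """Non-Gaussian (kurtosis) diffusion signal model."""
    return s0 * np.exp(-b * adc0 + (b * adc0) ** 2 * k / 6.0)

def ivim_signal(b, s0, f_ivim, d_star, d):
    """Intravoxel incoherent motion (IVIM) bi-exponential signal model."""
    return s0 * (f_ivim * np.exp(-b * d_star) + (1.0 - f_ivim) * np.exp(-b * d))

def synthetic_adc(s_low, s_high, b_low, b_high):
    """Synthetic ADC from two key b-values."""
    return np.log(s_low / s_high) / (b_high - b_low)

# Illustrative breast-tissue-like values (placeholders); b in s/mm^2, diffusivities in mm^2/s
b = np.array([0, 200, 800, 1500], dtype=float)
s = kurtosis_signal(b, s0=1.0, adc0=1.8e-3, k=0.9)
print(synthetic_adc(s[1], s[3], 200.0, 1500.0))

In practice the kurtosis parameters are fitted to the multi-b acquisition, while sADC needs only the two key b-values, which is why it remains available under the reduced protocol discussed above.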
NASA Astrophysics Data System (ADS)
Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.
2012-12-01
Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite sensors such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lack any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected ASTER and MODIS data as input to the water-reflectance model. Three physical parameters are needed: bed albedo, the water attenuation coefficient and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization scheme to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
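A minimal sketch of the single-band depth retrieval and the Monte Carlo uncertainty propagation is given below, using the Bouguer-Lambert-Beer form of Sneed and Hamilton (2007); the parameter distributions and the observed reflectance are placeholders, not the optimized values from this work.

import numpy as np

def lake_depth(r_pix, bed_albedo, r_deep, g):
    """Depth from a single-band reflectance following the Bouguer-Lambert-Beer form
    of Sneed and Hamilton (2007): z = [ln(Ad - Rinf) - ln(Rpix - Rinf)] / g."""
    return (np.log(bed_albedo - r_deep) - np.log(r_pix - r_deep)) / g

# Monte Carlo propagation of parameter uncertainty (all distributions illustrative).
rng = np.random.default_rng(42)
n = 10_000
bed_albedo = rng.normal(0.55, 0.05, n)     # bed albedo
r_deep = rng.normal(0.04, 0.01, n)         # optically deep water reflectance
g = rng.normal(0.80, 0.10, n)              # effective two-way attenuation [1/m]
r_pix = 0.20                               # observed pixel reflectance

depths = lake_depth(r_pix, bed_albedo, r_deep, g)
depths = depths[np.isfinite(depths) & (depths > 0)]
print(np.percentile(depths, [16, 50, 84]))

Sampling the three physical parameters rather than fixing them is what turns a single depth value into a depth distribution, from which volume uncertainties follow.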
An expert system for diagnostics and estimation of steam turbine components condition
NASA Astrophysics Data System (ADS)
Murmansky, B. E.; Aronson, K. E.; Brodov, Yu. M.
2017-11-01
The report describes a probabilistic expert system for the diagnostics and state estimation of steam turbine technological subsystem components. The expert system is based on Bayes' theorem and makes it possible to troubleshoot equipment components using expert experience when baseline information on turbine operation indicators is lacking. Within a unified approach, the expert system solves the problems of diagnosing the steam flow path of the turbine, the bearings, the thermal expansion system, the regulatory system, the condensing unit, and the systems of regenerative feed-water and hot-water heating. The knowledge base of the expert system for turbine unit rotors and bearings contains a description of 34 defects and of 104 related diagnostic features that cause a change in the vibration state. The knowledge base for the condensing unit contains 12 hypotheses and 15 evidence items (indications); procedures are also defined for estimating 20 state parameters. Similar knowledge bases containing diagnostic features and fault hypotheses are formulated for the other technological subsystems of the turbine unit. With the necessary initial information available, a number of problems can be solved within the expert system for the various technological subsystems of a steam turbine unit: for the steam flow path, correlation and regression analysis of the multifactor relationship between variations in the vibration parameters and the regime parameters; for the thermal expansion system, evaluation of the force acting on the longitudinal keys depending on the temperature state of the turbine cylinder; for the condensing unit, evaluation of the separate effects of heat exchange surface contamination and of the presence of air in the condenser steam space on condenser thermal efficiency, as well as evaluation of the schedule for condenser cleaning and tube system replacement, and so forth. When initial information is lacking, the expert system can still formulate a diagnosis by calculating the probability of fault hypotheses, given the degree of expert confidence in the estimates of the turbine component operation parameters.
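A toy naive-Bayes update of the kind such an expert system performs is sketched below; the fault hypotheses, diagnostic features and conditional probabilities are invented for illustration and do not come from the system's knowledge base.

def bayes_update(prior, likelihoods, evidence_observed):
    """Naive-Bayes style update of fault-hypothesis probabilities from observed
    diagnostic features, in the spirit of a probabilistic expert system."""
    posterior = {}
    for fault, p in prior.items():
        for feature, observed in evidence_observed.items():
            p_feat = likelihoods[fault].get(feature, 0.5)    # P(feature | fault)
            p *= p_feat if observed else (1.0 - p_feat)
        posterior[fault] = p
    total = sum(posterior.values())
    return {fault: p / total for fault, p in posterior.items()}

# Illustrative (made-up) knowledge base for two bearing-related defects
prior = {"misalignment": 0.5, "oil_whirl": 0.5}
likelihoods = {
    "misalignment": {"2x_vibration": 0.9, "subsync_vibration": 0.1},
    "oil_whirl":    {"2x_vibration": 0.2, "subsync_vibration": 0.8},
}
print(bayes_update(prior, likelihoods, {"2x_vibration": True, "subsync_vibration": False}))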
Ekwunife, Obinna I; Lhachimi, Stefan K
2017-12-08
The World Health Organisation recommends routine Human Papilloma Virus (HPV) vaccination for girls when its cost-effectiveness in the country or region has been duly considered. We therefore aimed to evaluate the cost-effectiveness of HPV vaccination in Nigeria using pragmatic parameter estimates for cost and programme coverage, i.e. estimates realistically achievable in the studied context. A microsimulation framework was used. The natural history of cervical cancer was remodelled from a previous Nigerian model-based study. Costing was based on the health providers' perspective. Disability-adjusted life years attributable to cervical cancer mortality served as the benefit estimate. The most suitable policy option was identified by calculating the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was used to assess parameter uncertainty. One-way sensitivity analysis was used to explore the robustness of the policy recommendation to alterations in key parameters. The expected value of perfect information (EVPI) was calculated to determine the expected opportunity cost associated with choosing the optimal scenario or strategy at the maximum cost-effectiveness threshold. The combination of the current scenario of opportunistic screening and a national HPV vaccination programme (CS + NV) was the only cost-effective and robust policy option. However, the CS + NV scenario was only cost-effective as long as the unit cost of the HPV vaccine did not exceed $5. EVPI analysis showed that it may be worthwhile to conduct additional research to inform the decision to adopt CS + NV. National HPV vaccination combined with opportunistic cervical cancer screening is cost-effective in Nigeria. However, adoption of this strategy should depend on its relative efficiency when compared to other competing new vaccines and health interventions.
Coudeville, Laurent; Baurin, Nicolas; Vergu, Elisabeta
2016-12-07
A tetravalent dengue vaccine was shown to be efficacious against symptomatic dengue in two phase III efficacy studies performed in five Asian and five Latin American countries. The objective here was to estimate key parameters of a dengue transmission model using the data collected during these studies. Parameter estimation was based on a Sequential Monte Carlo approach and used a cohort version of the transmission model. Serotype-specific basic reproduction numbers were derived for each country. Parameters related to serotype interactions included the duration of cross-protection and the level of cross-enhancement, characterized by differences in symptomaticity for primary, secondary and post-secondary infections. We tested several vaccine efficacy profiles and simulated the evolution of vaccine efficacy over time for the scenarios providing the best fit to the data. Two reference scenarios were identified. The first included temporary cross-protection and the second combined cross-protection and cross-enhancement upon wild-type infection and following vaccination. Both scenarios were associated with differences in efficacy by serotype, higher efficacy for pre-exposed subjects and against severe dengue, an increase in efficacy with successive doses for naïve subjects, and greater waning of vaccine protection for subjects vaccinated when naïve than when pre-exposed. Over 20 years, the median reduction of dengue risk induced by the direct protection conferred by the vaccine ranged from 24% to 47% according to country for the first scenario and from 34% to 54% for the second. Our study is an important first step in deriving a general framework that combines disease dynamics and mechanisms of vaccine protection that could be used to assess the impact of vaccination at a population level. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III
2015-12-01
Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging, for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of the dry air mole fraction XCO2 and solar induced fluorescence (SIF) to independently constrain estimates of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 based on the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be used effectively for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and record length could affect model constraint and prediction.
BAYESIAN PROTEIN STRUCTURE ALIGNMENT.
Rodriguez, Abel; Schmidler, Scott C
The analysis of the three-dimensional structure of proteins is an important topic in molecular biochemistry. Structure plays a critical role in defining the function of proteins and is more strongly conserved than amino acid sequence over evolutionary timescales. A key challenge is the identification and evaluation of structural similarity between proteins; such analysis can aid in understanding the role of newly discovered proteins and help elucidate evolutionary relationships between organisms. Computational biologists have developed many clever algorithmic techniques for comparing protein structures; however, all are based on heuristic optimization criteria, making statistical interpretation somewhat difficult. Here we present a fully probabilistic framework for pairwise structural alignment of proteins. Our approach has several advantages, including the ability to capture alignment uncertainty and to estimate key "gap" parameters which critically affect the quality of the alignment. We show that several existing alignment methods arise as maximum a posteriori estimates under specific choices of prior distributions and error models. Our probabilistic framework is also easily extended to incorporate additional information, which we demonstrate by including primary sequence information to generate simultaneous sequence-structure alignments that can resolve ambiguities obtained using structure alone. This combined model also provides a natural approach for the difficult task of estimating evolutionary distance based on structural alignments. The model is illustrated by comparison with well-established methods on several challenging protein alignment examples.
Crowd density estimation based on convolutional neural networks with mixed pooling
NASA Astrophysics Data System (ADS)
Zhang, Li; Zheng, Hong; Zhang, Ying; Zhang, Dongming
2017-09-01
Crowd density estimation is an important topic in the fields of machine learning and video surveillance. Existing methods do not provide satisfactory classification accuracy; moreover, they have difficulty adapting to complex scenes. Therefore, we propose a method based on convolutional neural networks (CNNs). The proposed method improves the performance of crowd density estimation in two key ways. First, we propose a feature pooling method named mixed pooling to regularize the CNNs. It replaces deterministic pooling operations with a learned parameter that combines conventional max pooling and average pooling. Second, we present a classification strategy in which an image is divided into two cells that are categorized separately. The proposed approach was evaluated on three datasets: two ground-truth image sequences and the University of California, San Diego, anomaly detection dataset. The results demonstrate that the proposed approach performs more effectively and is easier to apply than other methods.
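A minimal sketch of the mixed-pooling idea, written here in PyTorch with a single learnable mixing weight, is shown below; the original implementation (e.g., how the weight is parameterized and trained) may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedPooling(nn.Module):
    """Mixed pooling: a learnable convex combination of max and average pooling.
    (A sketch of the idea described in the paper, not its original implementation.)"""
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.mix_logit = nn.Parameter(torch.zeros(1))   # alpha = sigmoid(mix_logit)

    def forward(self, x):
        alpha = torch.sigmoid(self.mix_logit)
        max_out = F.max_pool2d(x, self.kernel_size, self.stride)
        avg_out = F.avg_pool2d(x, self.kernel_size, self.stride)
        return alpha * max_out + (1.0 - alpha) * avg_out

x = torch.randn(1, 16, 32, 32)
print(MixedPooling()(x).shape)   # torch.Size([1, 16, 16, 16])

Because the mixing weight is trained with the rest of the network, each layer can settle anywhere between pure max pooling and pure average pooling, which is the regularizing effect described above.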
Fenner, Jack N
2005-10-01
The length of the human generation interval is a key parameter when using genetics to date population divergence events. However, no consensus exists regarding the generation interval length, and a wide variety of interval lengths have been used in recent studies. This makes comparison between studies difficult, and questions the accuracy of divergence date estimations. Recent genealogy-based research suggests that the male generation interval is substantially longer than the female interval, and that both are greater than the values commonly used in genetics studies. This study evaluates each of these hypotheses in a broader cross-cultural context, using data from both nation states and recent hunter-gatherer societies. Both hypotheses are supported by this study; therefore, revised estimates of male, female, and overall human generation interval lengths are proposed. The nearly universal, cross-cultural nature of the evidence justifies using these proposed estimates in Y-chromosomal, mitochondrial, and autosomal DNA-based population divergence studies.
Le Huec, Jean Charles; Hasegawa, Kazuhiro
2016-11-01
Sagittal balance analysis has gained importance, and the measurement of radiographic spinopelvic parameters is now a routine part of many spine surgery interventions. Indeed, surgical correction of lumbar lordosis must be proportional to the pelvic incidence (PI). The compensatory mechanisms [pelvic retroversion with increased pelvic tilt (PT) and decreased thoracic kyphosis] spontaneously reverse after successful surgery. This study is the first to provide 3D standing spinopelvic reference values from a large database of Caucasian (n = 137) and Japanese (n = 131) asymptomatic subjects. The key spinopelvic parameters [e.g., PI, PT, sacral slope (SS)] were comparable in the Japanese and Caucasian populations. Three equations, namely lumbar lordosis based on PI, PT based on PI and SS based on PI, were calculated after linear regression modeling and were comparable in both populations: lumbar lordosis (L1-S1) = 0.54*PI + 27.6, PT = 0.44*PI - 11.4 and SS = 0.54*PI + 11.90. We showed that the key spinopelvic parameters obtained from a large database of healthy subjects were comparable for the Caucasian and Japanese populations. The normative values provided in this study and the equations obtained after linear regression modeling could help to estimate the required lumbar lordosis restoration pre-operatively and could also be used as guidelines for spinopelvic sagittal balance.
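Using the regression equations reported above, predicted values can be computed as in the short sketch below; the example PI value is arbitrary and the outputs are population-level guide values only, not patient-specific surgical targets.

def spinopelvic_targets(pelvic_incidence_deg):
    """Predicted sagittal parameters from the regression equations reported in the
    abstract (PI in degrees); intended as population-level guide values only."""
    lumbar_lordosis = 0.54 * pelvic_incidence_deg + 27.6
    pelvic_tilt = 0.44 * pelvic_incidence_deg - 11.4
    sacral_slope = 0.54 * pelvic_incidence_deg + 11.90
    return lumbar_lordosis, pelvic_tilt, sacral_slope

ll, pt, ss = spinopelvic_targets(52.0)
print(f"LL ~ {ll:.1f} deg, PT ~ {pt:.1f} deg, SS ~ {ss:.1f} deg")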
Zimmer, Christoph
2016-01-01
Background: Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single-cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. Methods: The Fisher information matrix is the central measure for experimental design as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results: The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application to models of realistic size. PMID:27583802
Influence of counting chamber type on CASA outcomes of equine semen analysis.
Hoogewijs, M K; de Vliegher, S P; Govaere, J L; de Schauwer, C; de Kruif, A; van Soom, A
2012-09-01
Sperm motility is considered to be one of the key features of semen analysis. Assessment of motility is frequently performed using computer-assisted sperm analysis (CASA). Nevertheless, no uniform standards are present to analyse a semen sample using CASA. We hypothesised that the type of counting chamber used might influence the results of analysis and aimed to study the effect of chamber type on estimated concentration and motility of an equine semen sample assessed using CASA. Commonly used disposable Leja chambers of different depths were compared with disposable and reusable ISAS chambers, a Makler chamber and a World Health Organization (WHO) motility slide. Motility parameters and concentrations obtained with CASA using these different chambers were analysed. The NucleoCounter was used as gold standard for determining concentration. Concentration and motility parameters were significantly influenced by the chamber type used. Using the NucleoCounter as the gold standard for determining concentration, the correlation coefficients were low for all of the various chambers evaluated, with the exception of the 12 µm deep Leja chamber. Filling a chamber by capillary forces resulted in a lower observed concentration and reduced motility parameters. All chambers evaluated in this study resulted in significant lower progressive motility than the WHO prepared slide, with the exception of the Makler chamber, which resulted in a slight, but statistically significant, increase in progressive motility estimates. Computer-assisted sperm analysis can only provide a rough estimate of sperm concentration and overestimation is likely when drop-filled slides with a coverslip are used. Motility estimates using CASA are highly influenced by the counting chamber; therefore, a complete description of the chamber type used should be provided in semen reports and in scientific articles. © 2011 EVJ Ltd.
NASA Astrophysics Data System (ADS)
Wang, S. G.; Li, X.; Han, X. J.; Jin, R.
2011-05-01
Radar remote sensing has demonstrated its applicability to the retrieval of basin-scale soil moisture. The mechanism of radar backscattering from soils is complicated and strongly influenced by surface roughness. Additionally, retrieval of soil moisture using AIEM (advanced integral equation model)-like models is a classic example of an underdetermined problem due to a lack of credible known soil roughness distributions at a regional scale. Characterization of this roughness is therefore crucial for an accurate derivation of soil moisture based on backscattering models. This study aims to simultaneously obtain surface roughness parameters (standard deviation of surface height σ and correlation length cl) along with soil moisture from multi-angular ASAR images by using a two-step retrieval scheme based on the AIEM. The method first used a semi-empirical relationship that relates the roughness slope, Zs (Zs = σ²/cl), to the difference in backscattering coefficient (Δσ) from two ASAR images acquired with different incidence angles. Meanwhile, by using an experimental statistical relationship between σ and cl, both these parameters can be estimated. Then, the deduced roughness parameters were used for the retrieval of soil moisture in association with the AIEM. An evaluation of the proposed method was performed in an experimental area in the middle stream of the Heihe River Basin, where the Watershed Allied Telemetry Experimental Research (WATER) experiment took place. It is demonstrated that the proposed method can achieve reliable estimates of soil water content. The key challenge is the presence of vegetation cover, which significantly impacts the estimates of surface roughness and soil moisture.
McDonald, Scott A; Mohamed, Rosmawati; Dahlui, Maznah; Naning, Herlianna; Kamarulzaman, Adeeba
2014-11-07
Collecting adequate information on key epidemiological indicators is a prerequisite to informing a public health response to reduce the impact of hepatitis C virus (HCV) infection in Malaysia. Our goal was to overcome the acute data shortage typical of low/middle income countries using statistical modelling to estimate the national HCV prevalence and the distribution over transmission pathways as of the end of 2009. Multi-parameter evidence synthesis methods were applied to combine all available relevant data sources - both direct and indirect - that inform the epidemiological parameters of interest. An estimated 454,000 (95% credible interval [CrI]: 392,000 to 535,000) HCV antibody-positive individuals were living in Malaysia in 2009; this represents 2.5% (95% CrI: 2.2-3.0%) of the population aged 15-64 years. Among males of Malay ethnicity, for 77% (95% CrI: 69-85%) the route of probable transmission was active or a previous history of injecting drugs. The corresponding proportions were smaller for male Chinese and Indian/other ethnic groups (40% and 71%, respectively). The estimated prevalence in females of all ethnicities was 1% (95% CrI: 0.6 to 1.4%); 92% (95% CrI: 88 to 95%) of infections were attributable to non-drug injecting routes of transmission. The prevalent number of persons living with HCV infection in Malaysia is estimated to be very high. Low/middle income countries often lack a comprehensive evidence base; however, evidence synthesis methods can assist in filling the data gaps required for the development of effective policy to address the future public health and economic burden due to HCV.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
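For readers unfamiliar with the q-method, the sketch below shows Davenport's formulation for the attitude-only part of Wahba's problem (the additional parameter estimation loop of the paper is not reproduced here); the observation vectors and weights are illustrative.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport's q-method: build the K matrix from the attitude profile matrix B and
    take the eigenvector of the largest eigenvalue. Quaternion returned as [x, y, z, w]."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, np.argmax(vals)]          # maximizes Wahba's gain function
    return q / np.linalg.norm(q)

# Two aligned unit vectors in both frames -> identity rotation, q close to [0, 0, 0, 1].
b = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(q_method(b, b, [1.0, 1.0]))
```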
NASA Astrophysics Data System (ADS)
Shi, Y.; Davis, K. J.; Zhang, F.; Duffy, C.; Yu, X.
2014-12-01
A coupled physically based land surface hydrologic model, Flux-PIHM, has been developed by incorporating a land surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model. Flux-PIHM has been implemented and manually calibrated at the Shale Hills watershed (0.08 km2) in central Pennsylvania. Model predictions of discharge, point soil moisture, point water table depth, sensible and latent heat fluxes, and soil temperature show good agreement with observations. When calibrated only using discharge, and soil moisture and water table depth at one point, Flux-PIHM is able to resolve the observed 10¹ m scale soil moisture pattern at the Shale Hills watershed when an appropriate map of soil hydraulic properties is provided. A Flux-PIHM data assimilation system has been developed by incorporating the ensemble Kalman filter (EnKF) for model parameter and state estimation. Both synthetic and real data assimilation experiments have been performed at the Shale Hills watershed. Synthetic experiment results show that the data assimilation system is able to simultaneously provide accurate estimates of multiple parameters. In the real data experiment, the EnKF estimated parameters and manually calibrated parameters yield similar model performances, but the EnKF method significantly decreases the time and labor required for calibration. The data requirements for accurate Flux-PIHM parameter estimation via data assimilation using synthetic observations have been tested. Results show that by assimilating only in situ outlet discharge, soil water content at one point, and the land surface temperature averaged over the whole watershed, the data assimilation system can provide an accurate representation of watershed hydrology. Observations of these key variables are available with national and even global spatial coverage (e.g., MODIS surface temperature, SMAP soil moisture, and the USGS gauging stations). National atmospheric reanalysis products, soil databases and land cover databases (e.g., NLDAS-2, SSURGO, NLCD) can provide high resolution forcing and input data. Therefore the Flux-PIHM data assimilation system could be readily expanded to other watersheds to provide regional scale land surface and hydrologic reanalysis with high spatial and temporal resolution.
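The joint parameter-and-state estimation step in such a system is typically an ensemble Kalman filter analysis on an augmented state vector. A minimal sketch is given below, assuming Gaussian observation errors and a linear observation operator; the dimensions, variable names, and perturbation scheme are illustrative and do not reflect Flux-PIHM's actual interfaces.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_var, rng):
    """One EnKF analysis step on an augmented state [model states, parameters].
    ensemble: (n_members, n_aug); obs: (n_obs,); obs_operator H: (n_obs, n_aug)."""
    H = obs_operator
    X = ensemble
    A = X - X.mean(axis=0)                        # ensemble anomalies
    P = A.T @ A / (len(X) - 1)                    # sample covariance of the augmented state
    R = np.diag(obs_err_var)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=(len(X), len(obs)))
    return X + (perturbed - X @ H.T) @ K.T        # updated states and parameters

rng = np.random.default_rng(0)
n_members, n_states, n_params = 40, 3, 2
ens = np.hstack([rng.normal(0.3, 0.05, (n_members, n_states)),   # e.g. soil moisture states
                 rng.normal(1.0, 0.3, (n_members, n_params))])   # e.g. hydraulic parameters
H = np.zeros((1, n_states + n_params)); H[0, 0] = 1.0            # observe the first state only
updated = enkf_update(ens, np.array([0.35]), H, np.array([1e-4]), rng)
print(updated.mean(axis=0))   # parameters are pulled toward values consistent with the observation
```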
NASA Astrophysics Data System (ADS)
Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu
2008-09-01
Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions; then water quality data are used to update the distributions and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six-state-variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment, the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations, a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
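Once such an MCMC calibration has produced a chain of parameter samples, the dependence structure among parameters and the predictive uncertainty bounds mentioned above follow directly from the samples. The snippet below sketches that post-processing step with a synthetic two-parameter chain and a placeholder model; none of the numbers refer to the Gulf of Gera application.

```python
import numpy as np

# Stand-in for an MCMC chain of (growth, mortality) samples, shape (n_samples, n_params).
rng = np.random.default_rng(0)
chain = rng.multivariate_normal([0.6, 0.15], [[0.01, -0.006], [-0.006, 0.005]], size=4000)

print(np.corrcoef(chain, rowvar=False))          # dependence structure among parameter estimates

def model(theta, t):
    """Placeholder ecosystem output, e.g. phytoplankton biomass over time."""
    growth, mortality = theta
    return np.exp((growth - mortality) * t)

t = np.linspace(0.0, 30.0, 61)
preds = np.array([model(theta, t) for theta in chain])
lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)   # 95% predictive uncertainty bounds
print(lower[-1], upper[-1])                                # spread of the final-time prediction
```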
Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling
NASA Astrophysics Data System (ADS)
Thomas, M. E.; Neuberg, J. W.
2015-12-01
The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step to be able to use these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect ascent rate, but the single most important parameter is the volatile content (assumed in this case to be water only). Modelling this parameter across a range of reported values causes changes in the calculated ascent velocities of up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.
Influence of Time-Pickoff Circuit Parameters on LiDAR Range Precision
Wang, Hongming; Yang, Bingwei; Huyan, Jiayue; Xu, Lijun
2017-01-01
A pulsed time-of-flight (TOF) measurement-based Light Detection and Ranging (LiDAR) system is more effective for medium-to-long range distances. As a key ranging unit, a time-pickoff circuit based on automatic gain control (AGC) and a constant fraction discriminator (CFD) is designed to reduce the walk error and the timing jitter for obtaining the accurate time interval. Compared with the Cramer–Rao lower bound (CRLB) and the estimation of the timing jitter, Monte Carlo simulations based on four parameters are established to show how the range precision is influenced by these parameters, namely the pulse amplitude, pulse width, and the attenuation fraction and delay time of the CFD. Experiments were carried out to verify the relationship between the range precision and three of the parameters, excluding pulse width. Two parameters of the ranging circuit (attenuation fraction and delay time) were selected according to the ranging performance at the minimum pulse amplitude. The attenuation fraction should be selected in the range from 0.2 to 0.6 to achieve high range precision. The selection criterion of the time-pickoff circuit parameters is helpful for the ranging circuit design of a TOF LiDAR system. PMID:29039772
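A constant fraction discriminator times a pulse at the zero crossing of an attenuated copy minus a delayed copy of the signal, which is what makes the timing point largely independent of pulse amplitude. The sketch below illustrates this; the pulse shape, fraction, and delay values are illustrative, not the circuit parameters studied in the paper.

```python
import numpy as np

def cfd_timing(t, s, fraction, delay):
    """Constant-fraction discriminator sketch: split the pulse, attenuate one copy by
    'fraction', delay the other, and take the zero crossing of their difference as the
    timing point. Units and thresholds are illustrative."""
    dt = t[1] - t[0]
    shift = int(round(delay / dt))
    delayed = np.concatenate([np.zeros(shift), s[:len(s) - shift]])
    b = fraction * s - delayed            # bipolar CFD signal
    thresh = 0.1 * s.max()                # arm only on a real pulse, not on the baseline
    for i in range(len(b) - 1):
        if s[i] > thresh and b[i] > 0 >= b[i + 1]:
            return t[i] + dt * b[i] / (b[i] - b[i + 1])   # linear interpolation
    return np.nan

# The crossing time barely moves when the pulse amplitude changes, which is why a CFD
# suppresses the amplitude-dependent walk error of a fixed-threshold discriminator.
t = np.arange(0.0, 100.0, 0.1)                                   # ns
pulse = lambda amp: amp * np.exp(-0.5 * ((t - 50.0) / 5.0) ** 2)
print(cfd_timing(t, pulse(1.0), fraction=0.4, delay=3.0))
print(cfd_timing(t, pulse(5.0), fraction=0.4, delay=3.0))
```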
Parameter Heterogeneity In Breast Cancer Cost Regressions – Evidence From Five European Countries
Banks, Helen; Campbell, Harry; Douglas, Anne; Fletcher, Eilidh; McCallum, Alison; Moger, Tron Anders; Peltola, Mikko; Sveréus, Sofia; Wild, Sarah; Williams, Linda J.; Forbes, John
2015-01-01
Abstract We investigate parameter heterogeneity in breast cancer 1‐year cumulative hospital costs across five European countries as part of the EuroHOPE project. The paper aims to explore whether conditional mean effects provide a suitable representation of the national variation in hospital costs. A cohort of patients with a primary diagnosis of invasive breast cancer (ICD‐9 codes 174 and ICD‐10 C50 codes) is derived using routinely collected individual breast cancer data from Finland, the metropolitan area of Turin (Italy), Norway, Scotland and Sweden. Conditional mean effects are estimated by ordinary least squares for each country, and quantile regressions are used to explore heterogeneity across the conditional quantile distribution. Point estimates based on conditional mean effects provide a good approximation of treatment response for some key demographic and diagnostic specific variables (e.g. age and ICD‐10 diagnosis) across the conditional quantile distribution. For many policy variables of interest, however, there is considerable evidence of parameter heterogeneity that is concealed if decisions are based solely on conditional mean results. The use of quantile regression methods reinforces the need to look beyond an average effect, given the greater recognition that breast cancer is a complex disease reflecting patient heterogeneity. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd. PMID:26633866
Kustas, William P.; Moran, M.S.; Humes, K.S.; Stannard, D.I.; Pinter, P. J.; Hipps, L.E.; Swiatek, E.; Goodrich, D.C.
1994-01-01
Remotely sensed data in the visible, near-infrared, and thermal-infrared wave bands were collected from a low-flying aircraft during the Monsoon '90 field experiment. Monsoon '90 was a multidisciplinary experiment conducted in a semiarid watershed. It had as one of its objectives the quantification of hydrometeorological fluxes during the “monsoon” or wet season. The remote sensing observations along with micrometeorological and atmospheric boundary layer (ABL) data were used to compute the surface energy balance over a range of spatial scales. The procedure involved averaging multiple pixels along transects flown over the meteorological and flux (METFLUX) stations. Average values of the spectral reflectance and thermal-infrared temperatures were computed for pixels of order 10⁻¹ to 10¹ km in length and were used with atmospheric data for evaluating net radiation (Rn), soil heat flux (G), and sensible (H) and latent (LE) heat fluxes at these same length scales. The model employs a single-layer resistance approach for estimating H that requires wind speed and air temperature in the ABL and a remotely sensed surface temperature. The values of Rn and G are estimated from remote sensing information together with near-surface observations of air temperature, relative humidity, and solar radiation. Finally, LE is solved as the residual term in the surface energy balance equation. Model calculations were compared to measurements from the METFLUX network for three days having different environmental conditions. Average percent differences for the three days between model and the METFLUX estimates of the local fluxes were about 5% for Rn, 20% for G and H, and 15% for LE. Larger differences occurred during partly cloudy conditions because of errors in interpreting the remote sensing data and the higher spatial and temporal variation in the energy fluxes. Minor variations in modeled energy fluxes were observed when the pixel size representing the remote sensing inputs changed from 0.2 to 2 km. Regional scale estimates of the surface energy balance using bulk ABL properties for the model parameters and input variables and the 10-km pixel data differed from the METFLUX network averages by about 4% for Rn, 10% for G and H, and 15% for LE. The sensitivity of the calculated turbulent fluxes H and LE to possible variations in key model parameters (i.e., the roughness lengths for heat and momentum) was found to be fairly significant. Therefore the reliability of the methods for estimating key model parameters and potential errors needs further testing over different ecosystems and environmental conditions.
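The residual energy-balance logic described above can be illustrated with representative numbers; the constants, the aerodynamic resistance, and the G = 0.15 Rn fraction below are assumptions for illustration only, not Monsoon '90 values.

```python
# Minimal sketch of the residual energy-balance approach (all inputs are illustrative).
rho_air = 1.15      # air density, kg m-3
cp      = 1005.0    # specific heat of air, J kg-1 K-1

def sensible_heat(Ts, Ta, r_ah):
    """Single-layer resistance formulation: H = rho * cp * (Ts - Ta) / r_ah."""
    return rho_air * cp * (Ts - Ta) / r_ah

Rn = 520.0                                            # net radiation, W m-2
G  = 0.15 * Rn                                        # soil heat flux as an assumed fraction of Rn
H  = sensible_heat(Ts=308.0, Ta=303.0, r_ah=40.0)     # radiometric minus air temperature, s m-1 resistance
LE = Rn - G - H                                       # latent heat flux closes the balance as the residual
print(round(H, 1), round(LE, 1))
```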
NASA Astrophysics Data System (ADS)
Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.
2015-12-01
What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, affected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself isn't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matérn form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, and we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal being to assign likely formation histories for the patches under consideration. Our results should also be relevant for whoever needs to perform spatial interpolation or out-of-sample extension (e.g. kriging), machine learning, or feature detection on geological data. We present procedural details but focus on high-level results that have real-world implications for the study of Venus, Earth, other planets, and moons.
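A bare-bones version of such a Whittle fit evaluates the periodogram of a patch against a parametric Matérn spectral density and minimizes the resulting negative log-likelihood. The sketch below uses a textbook Matérn spectral form up to a normalizing constant and ignores tapering and edge corrections, so it is only a schematic of the procedure described above.

```python
import numpy as np
from scipy.optimize import minimize

def matern_spectrum(k2, variance, rng_len, nu, dims=2):
    """Matern spectral density as a function of squared wavenumber k2, up to a constant."""
    return variance * (1.0 / rng_len**2 + k2) ** (-(nu + dims / 2.0))

def whittle_negloglik(params, field):
    """Whittle approximation: sum over Fourier wavenumbers of log S + I/S."""
    variance, rng_len, nu = params
    if min(variance, rng_len, nu) <= 0:
        return np.inf
    n0, n1 = field.shape
    I = np.abs(np.fft.fft2(field - field.mean()))**2 / (n0 * n1)   # periodogram
    k0 = 2 * np.pi * np.fft.fftfreq(n0)
    k1 = 2 * np.pi * np.fft.fftfreq(n1)
    K2 = k0[:, None]**2 + k1[None, :]**2
    S = matern_spectrum(K2, variance, rng_len, nu)
    mask = K2 > 0                                                  # drop the zero wavenumber
    return np.sum(np.log(S[mask]) + I[mask] / S[mask])

field = np.random.default_rng(1).normal(size=(64, 64))             # stand-in for a topography patch
fit = minimize(whittle_negloglik, x0=[1.0, 5.0, 1.0], args=(field,), method="Nelder-Mead")
print(fit.x)   # (variance, range, differentiability) estimates
```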
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
NASA Astrophysics Data System (ADS)
Tichauer, Kenneth M.; Osswald, Christian R.; Dosmar, Emily; Guthrie, Micah J.; Hones, Logan; Sinha, Lagnojita; Xu, Xiaochun; Mieler, William F.; St. Lawrence, Keith; Kang-Mieler, Jennifer J.
2015-06-01
Clinical symptoms of diabetic retinopathy are not detectable until damage to the retina reaches an irreversible stage, at least by today's treatment standards. As a result, there is a push to develop new, "sub-clinical" methods of predicting the onset of diabetic retinopathy before the onset of irreversible damage. With diabetic retinopathy being associated with the accumulation of long-term mild damage to the retinal vasculature, retinal blood vessel permeability has been proposed as a key parameter for detecting preclinical stages of retinopathy. In this study, a kinetic modeling approach used to quantify vascular permeability in dynamic contrast-enhanced medical imaging was evaluated in noise simulations and then applied to retinal videoangiography data in a diabetic rat for the first time to determine the potential for this approach to be employed clinically as an early indicator of diabetic retinopathy. Experimental levels of noise were found to introduce errors of less than 15% in estimates of blood flow and extraction fraction (a marker of vascular permeability), and fitting of rat retinal fluorescein angiography data provided stable maps of both parameters.
Improving Assimilated Global Climate Data Using TRMM and SSM/I Rainfall and Moisture Data
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. Work has been underway at NASA's Data Assimilation Office to explore the use of TRMM and SSM/I-derived rainfall and total precipitable water (TPW) data in global data assimilation to directly constrain these hydrological parameters. We found that assimilating these data types improves not only the precipitation and moisture estimates but also key climate parameters directly linked to convection such as the outgoing longwave radiation, clouds, and the large-scale circulation in the tropics. We will present results showing that assimilating TRMM and SSM/I 6-hour averaged rain rates and TPW estimates significantly reduces the state-dependent systematic errors in assimilated products. Specifically, rainfall assimilation improves cloud and latent heating distributions, which, in turn, improves the cloudy-sky radiation and the large-scale circulation, while TPW assimilation reduces moisture biases to improve radiation in clear-sky regions. Rainfall and TPW assimilation also improves tropical forecasts beyond 1 day.
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE-sum (23% improvement), annual NEE-cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE-data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments improves not only the precipitation and moisture estimates but also reduces state-dependent systematic errors in key climate parameters directly linked to convection such as the outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also improves short-range forecasts beyond 1 day, but the impact is relatively modest compared with improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors of the forecast model, improving the short-range forecast is not necessarily a prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skills.
Madenjian, C.P.; Chipman, B.D.; Marsden, J.E.
2008-01-01
Sea lamprey (Petromyzon marinus) control in North America costs millions of dollars each year, and control measures are guided by assessment of lamprey-induced damage to fisheries. The favored prey of sea lamprey in freshwater ecosystems has been lake trout (Salvelinus namaycush). A key parameter in assessing sea lamprey damage, as well as managing lake trout fisheries, is the probability of an adult lake trout surviving a lamprey attack. The conventional value for this parameter has been 0.55, based on laboratory experiments. In contrast, based on catch curve analysis, mark-recapture techniques, and observed wounding rates, we estimated that adult lake trout in Lake Champlain have a 0.74 probability of surviving a lamprey attack. Although sea lamprey growth in Lake Champlain was lower than that observed in Lake Huron, application of an individual-based model to both lakes indicated that the probability of surviving an attack in Lake Champlain was only 1.1 times higher than that in Lake Huron. Thus, we estimated that lake trout survive a lamprey attack in Lake Huron with a probability of 0.66. Therefore, our results suggested that lethality of a sea lamprey attack on lake trout has been overestimated in previous model applications used in fisheries management. © 2008 NRC.
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
The present work describes the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar power (CSP) plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters for simulating the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete year simulation showed very good agreement for the gross and net electric total production. The estimations for these magnitudes show a bias of 1.47% and 2.02%, respectively. The results proved that the simulation software describes with great accuracy the real operation of the power plant and correctly reproduces its transient behavior.
Mixing rates and limit theorems for random intermittent maps
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Bose, Christopher
2016-04-01
We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps Tα using the full parameter range 0 < α < ∞, in general. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.
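For concreteness, a standard representative of this Pomeau-Manneville family with a neutral fixed point at the origin is the Liverani-Saussol-Vaienti map (the maps used in the paper may differ in normalization):

```latex
T_\alpha(x) =
\begin{cases}
  x\left(1 + 2^{\alpha} x^{\alpha}\right), & 0 \le x < \tfrac{1}{2},\\
  2x - 1, & \tfrac{1}{2} \le x \le 1,
\end{cases}
\qquad \alpha > 0.
```

For a single such map with 0 < α < 1, correlations are known to decay only polynomially, at a rate controlled by α, which is the behavior the randomized constructions above inherit from their fastest-mixing constituent.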
Adverse Selection and an Individual Mandate: When Theory Meets Practice*
Hackmann, Martin B.; Kolstad, Jonathan T.; Kowalski, Amanda E.
2014-01-01
We develop a model of selection that incorporates a key element of recent health reforms: an individual mandate. Using data from Massachusetts, we estimate the parameters of the model. In the individual market for health insurance, we find that premiums and average costs decreased significantly in response to the individual mandate. We find an annual welfare gain of 4.1% per person or $51.1 million annually in Massachusetts as a result of the reduction in adverse selection. We also find smaller post-reform markups. PMID:25914412
Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application
NASA Astrophysics Data System (ADS)
Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.
2017-12-01
This abstract presents a review of work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is - does an alternative hypothesis like a deeper source with a large double-couple component explain the data approximately as well as the best solution? Here we address the issue of both finding a most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid-search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well-resolved. Another benefit of the grid-search is that it proves to be a flexible framework where different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we primarily focus on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and discuss the resolution of interesting and/or important recovered source properties.
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
Quantitative genetics of disease traits.
Wray, N R; Visscher, P M
2015-04-01
John James authored two key papers on the theory of risk to relatives for binary disease traits and the relationship between parameters on the observed binary scale and an unobserved scale of liability (James Annals of Human Genetics, 1971; 35: 47; Reich, James and Morris Annals of Human Genetics, 1972; 36: 163). These two papers are John James' most cited papers (198 and 328 citations, November 2014). They have been influential in human genetics and have recently gained renewed popularity because of their relevance to the estimation of quantitative genetics parameters for disease traits using SNP data. In this review, we summarize the two early papers and put them into context. We show recent extensions of the theory for ascertained case-control data and review recent applications in human genetics. © 2015 Blackwell Verlag GmbH.
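The scale transformation at the heart of these papers, in the form it is usually quoted today, converts heritability estimated on the observed disease scale to the liability scale via the prevalence; the notation below is a common modern rendering rather than the original papers' exact expression:

```latex
h^2_{\text{liability}} \;=\; h^2_{\text{observed}}\,\frac{K\,(1-K)}{z^2},
\qquad z = \varphi\!\left(\Phi^{-1}(1-K)\right),
```

where K is the disease prevalence, Φ⁻¹ the standard normal quantile function, and φ the standard normal density evaluated at the liability threshold.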
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.
2015-03-01
In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.
Black Hole Mergers as Probes of Structure Formation
NASA Technical Reports Server (NTRS)
Alicea-Munoz, E.; Miller, M. Coleman
2008-01-01
Intense structure formation and reionization occur at high redshift, yet there is currently little observational information about this very important epoch. Observations of gravitational waves from massive black hole (MBH) mergers can provide us with important clues about the formation of structures in the early universe. Past efforts have been limited to calculating merger rates using different models in which many assumptions are made about the specific values of physical parameters of the mergers, resulting in merger rate estimates that span a very wide range (0.1 - 10⁴ mergers/year). Here we develop a semi-analytical, phenomenological model of MBH mergers that includes plausible combinations of several physical parameters, which we then turn around to determine how well observations with the Laser Interferometer Space Antenna (LISA) will be able to enhance our understanding of the universe during the critical z ≈ 5 - 30 structure formation era. We do this by generating synthetic LISA observable data (total BH mass, BH mass ratio, redshift, merger rates), which are then analyzed using a Markov Chain Monte Carlo method. This allows us to constrain the physical parameters of the mergers. We find that our methodology works well at estimating merger parameters, consistently giving results within 1σ of the input parameter values. We also discover that the number of merger events is a key discriminant among models. This helps our method be robust against observational uncertainties. Our approach, which at this stage constitutes a proof of principle, can be readily extended to physical models and to more general problems in cosmology and gravitational wave astrophysics.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Ying; Xie, Cailang; Liao, Qin; Zhao, Wei; Zeng, Guihua; Huang, Duan
2017-08-01
The survival of Gaussian quantum states in a turbulent atmospheric channel is of crucial importance in free-space continuous-variable (CV) quantum key distribution (QKD), in which the transmission coefficient will fluctuate in time, thus resulting in non-Gaussian quantum states. Different from quantum hacking of the imperfections of practical devices, here we propose a different type of attack by exploiting the security loopholes that occur in a real lossy channel. Under a turbulent atmospheric environment, the Gaussian states are inevitably afflicted by decoherence, which would cause a degradation of the transmitted entanglement. Therefore, an eavesdropper can perform an intercept-resend attack by applying an entanglement-distillation operation on the transmitted non-Gaussian mixed states, which allows the eavesdropper to bias the estimation of the parameters and renders the final keys shared between the legitimate parties insecure. Our proposal highlights the practical CV QKD vulnerabilities with free-space quantum channels, including the satellite-to-earth links, ground-to-ground links, and a link from moving objects to ground stations.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
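The essence of such a correction is a sandwich-type covariance in which the residual autocorrelation replaces the usual white-noise assumption. The sketch below is a generic rendering of that idea, not the exact expression derived in the paper; the sensitivity matrix, lag truncation, and example model are illustrative.

```python
import numpy as np

def colored_residual_covariance(S, residuals, max_lag=None):
    """Sandwich-type covariance for parameter estimates with colored residuals:
    Cov = (S'S)^-1 S' R S (S'S)^-1, with R a Toeplitz matrix built from the estimated
    residual autocovariance. Generic sketch, not the paper's exact derivation."""
    n = len(residuals)
    max_lag = n - 1 if max_lag is None else max_lag
    r = np.array([np.mean(residuals[:n - k] * residuals[k:]) if k <= max_lag else 0.0
                  for k in range(n)])
    R = r[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]   # Toeplitz autocovariance
    M_inv = np.linalg.inv(S.T @ S)
    return M_inv @ S.T @ R @ S @ M_inv

# Example: a two-parameter linear model with AR(1) residuals (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
S = np.column_stack([np.ones_like(t), t])            # sensitivities of a straight-line fit
e = np.zeros_like(t)
for i in range(1, len(t)):
    e[i] = 0.8 * e[i - 1] + rng.normal(0.0, 0.1)     # colored (autocorrelated) residuals
print(np.sqrt(np.diag(colored_residual_covariance(S, e, max_lag=50))))
```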
NASA Astrophysics Data System (ADS)
Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.
2014-02-01
A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common and widely used tools for such estimations. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the different available data sources have to be screened for their information content on processes, e.g. whether data sources contain information on mean values, spatial or temporal variability, etc., for the entire catchment or only for sub-catchments. In a second step, the information content has to be mapped to relevant model components, which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for other available data sources. In this study, the gauged spring discharge (GSD) method, flash-flood observations, and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results are compared with a benchmark simulation that uses a priori parameter values from the literature. The estimated recharge rates of the calibrated model deviate less than ±10% from the estimates derived from the WTF method. Larger differences are visible in the years with high uncertainties in rainfall input data. During validation, the calibrated model produces better results than the model with only a priori parameter values. The model with a priori parameter values from the literature tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge and hence available water resources clearly endangers reliable water resource management in water-scarce regions. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
Earth's magnetic field effect on MUF calculation and consequences for hmF2 trend estimates
NASA Astrophysics Data System (ADS)
Elias, Ana G.; Zossi, Bruno S.; Yiğit, Erdal; Saavedra, Zenon; de Haro Barbas, Blas F.
2017-10-01
Knowledge of the state of the upper atmosphere, and in particular of the ionosphere, is essential in several applications such as systems used in radio frequency communications, satellite positioning and navigation. In general, these systems depend on the state and evolution of the ionosphere. In all applications involving the ionosphere an essential task is to determine the path and modifications of ray propagation through the ionospheric plasma. The ionospheric refractive index and the maximum usable frequency (MUF) that can be received over a given distance are some key parameters that are crucial for such technological applications. However, the representation of these parameters is currently, in general, simplified, neglecting the effects of Earth's magnetic field. The value of M(3000)F2, related to the MUF that can be received over 3000 km, is routinely scaled from ionograms using a technique which also neglects geomagnetic field effects, assuming a standard simplified propagation model. M(3000)F2 is expected to be affected by a systematic trend linked to the secular variations of Earth's magnetic field. On the other hand, among the upper atmospheric effects expected from increasing greenhouse gas concentrations is the lowering of the F2-layer peak density height, hmF2. This ionospheric parameter is usually estimated using the M(3000)F2 factor, so it would also carry this "systematic trend". In this study, the geomagnetic field effect on MUF estimations is analyzed as well as its impact on hmF2 long-term trend estimations. We find that M(3000)F2 increases when the geomagnetic field is included in its calculation, and hence hmF2, estimated using existing methods involving no magnetic field for M(3000)F2 scaling, would present a weak but steady trend linked to these variations, which would add to or compensate for the decrease of a few kilometers (roughly 2 km per decade) expected from the greenhouse gas effect.
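The dependence that makes this propagation of bias matter is visible in the simplest and most widely used empirical relation between the two quantities (Shimazaki, 1955):

```latex
h_m F_2 \;\approx\; \frac{1490}{M(3000)F_2} - 176 \quad [\mathrm{km}],
```

so any systematic bias in the scaled M(3000)F2, including one induced by neglecting the geomagnetic field, maps directly into hmF2 and hence into its apparent long-term trend.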
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
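A minimal sketch of the described chain is given below, assuming a level-road load equation and Gaussian measurement noise; the parameter values, step sizes, and synthetic drive cycle are illustrative and are not taken from the report.

```python
import numpy as np

g, rho = 9.81, 1.2   # gravity (m s-2), air density (kg m-3)

def road_load(theta, v, a):
    """Modeled road load (N) on level ground: mass, CdA, and rolling resistance are the
    parameters being estimated; grade and accessory loads are ignored in this sketch."""
    mass, CdA, crr = theta
    return mass * a + mass * g * crr + 0.5 * rho * CdA * v**2

def run_chain(v, a, f_meas, sigma, n_iter, rng):
    """Metropolis sampling of (mass, CdA, crr): accept a proposal with probability equal
    to the ratio of its likelihood to that of the current state."""
    theta = np.array([15000.0, 6.0, 0.007])       # initial guess for a heavy-duty truck
    step  = np.array([200.0, 0.1, 0.0003])
    def loglik(th):
        return -0.5 * np.sum((f_meas - road_load(th, v, a))**2) / sigma**2
    ll, chain = loglik(theta), []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        llp = loglik(prop)
        if np.all(prop > 0) and np.log(rng.uniform()) < llp - ll:
            theta, ll = prop, llp
        chain.append(theta.copy())
    return np.array(chain)                        # chain history = distribution of parameter sets

rng = np.random.default_rng(0)
v = np.linspace(5.0, 25.0, 200)                   # synthetic speed trace, m s-1
a = 0.02 * np.sin(np.linspace(0.0, 10.0, 200))    # synthetic acceleration trace, m s-2
f_meas = road_load([16000.0, 6.5, 0.0065], v, a) + rng.normal(0.0, 200.0, 200)
posterior = run_chain(v, a, f_meas, sigma=200.0, n_iter=20000, rng=rng)
print(posterior[5000:].mean(axis=0))              # posterior means after burn-in
```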
Nonmarket economic user values of the Florida Keys/Key West
Vernon R. Leeworthy; J. Michael Bowker
1997-01-01
This report provides estimates of the nonmarket economic user values for recreating visitors to the Florida Keys/Key West who participated in natural resource-based activities. Results from estimated travel cost models are presented, including visitors' responses to prices and estimated per person-trip user values. Annual user values are also calculated and presented...
NASA Astrophysics Data System (ADS)
Tagaris, Efthimios; Sotiropoulou, Rafaella-Eleni; Sotiropoulos, Andreas; Spanos, Ioannis; Milonas, Panayiotis; Michaelakis, Antonios
2017-04-01
The establishment and seasonal abundance of Invasive Mosquito Species (IMS) in a region are related to climatic parameters such as temperature and precipitation. In this work the current state is assessed using data from the European Climate Assessment and Dataset (ECA&D) project over Greece and Italy for the development of current spatial risk databases of IMS. Results are validated against data from a prototype IMS monitoring device that has been designed and developed in the framework of the LIFE CONOPS project and installed at key points across the two countries. Since climate models suggest changes in future temperature and precipitation rates, the future potential for IMS establishment and spread over Greece and Italy is assessed using the climatic parameters in the 2050s provided by the NASA GISS GCM ModelE under the IPCC A1B emissions scenario. The need for regional climate projections in a finer grid size is assessed using the Weather Research and Forecasting (WRF) model to dynamically downscale GCM simulations. The estimated changes in the future meteorological parameters are combined with the observation data in order to estimate the future levels of the climatic parameters of interest. The final product includes spatial distribution maps presenting the future suitability of a region for the establishment and seasonal abundance of the IMS over Greece and Italy. Acknowledgement: LIFE CONOPS project "Development & demonstration of management plans against - the climate change enhanced - invasive mosquitoes in S. Europe" (LIFE12 ENV/GR/000466).
NASA Astrophysics Data System (ADS)
Letan, Amelie; Mishchik, Konstantin; Audouard, Eric; Hoenninger, Clemens; Mottay, Eric P.
2017-03-01
With the development of high average power, high repetition rate, industrial ultrafast lasers, it is now possible to achieve a high throughput with femtosecond laser processing, provided that the operating parameters are finely tuned to the application. Femtosecond lasers play a key role in these processes, due to their capability for high-quality micromachining. They are able to drill holes through thick material (up to 1 mm) with arbitrary shapes, such as zero conicity or even inverse taper, and can also perform zero-taper cutting. A clear understanding of all the processing steps necessary to optimize the processing speed is a main challenge for industrial developments. Indeed, the laser parameters are not independent of the beam steering devices. Pulse energy and repetition rate have to be precisely adjusted to the beam angle with the sample, and to the temporal and spatial sequences of pulse superposition. The purpose of the present work is to identify the role of these parameters for high aspect ratio drilling and cutting not only with experimental trials, but also with numerical estimations, using a simple engineering model based on the two-temperature description of ultrafast ablation. Assuming a nonlinear logarithmic response of the materials to ultrafast pulses, each material can be described by only two adjustable parameters. Simple assumptions allow prediction of the effect of beam velocity and non-normal beam incidence, to estimate profile shapes and processing time.
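The "nonlinear logarithmic response" invoked above is commonly written as an ablated depth per pulse of the form below, where the two adjustable material parameters are the ablation threshold fluence and an effective penetration depth; the engineering model in the paper may refine this further:

```latex
d_{\text{pulse}} \;\approx\; \delta \,\ln\!\left(\frac{F}{F_{\text{th}}}\right), \qquad F > F_{\text{th}},
```

with F the local fluence, F_th the threshold fluence, and δ the effective energy penetration depth; summing this depth over the spatial and temporal sequence of pulses is what links the scan parameters to the final hole or kerf profile.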
NASA Technical Reports Server (NTRS)
Freeman, A.; Villasenor, J.; Klein, J. D.
1991-01-01
We describe the calibration and analysis of multi-frequency, multi-polarization radar backscatter signatures over an agriculture test site in the Netherlands. The calibration procedure involved two stages: in the first stage, polarimetric and radiometric calibrations (ignoring noise) were carried out using square-base trihedral corner reflector signatures and some properties of the clutter background. In the second stage, a novel algorithm was used to estimate the noise level in the polarimetric data channels by using the measured signature of an idealized rough surface with Bragg scattering (the ocean in this case). This estimated noise level was then used to correct the measured backscatter signatures from the agriculture fields. We examine the significance of several key parameters extracted from the calibrated and noise-corrected backscatter signatures. The significance is assessed in terms of the ability to uniquely separate among classes from 13 different backscatter types selected from the test site data, including eleven different crops, one forest and one ocean area. Using the parameters with the highest separation for a given class, we use a hierarchical algorithm to classify the entire image. We find that many classes, including ocean, forest, potato, and beet, can be identified with high reliability, while the classes for which no single parameter exhibits sufficient separation have higher rates of misclassification. We expect that modified decision criteria involving simultaneous consideration of several parameters increase performance for these classes.
NASA Astrophysics Data System (ADS)
Semenko, E. A.; Romanyuk, I. I.; Semenova, E. S.; Moiseeva, A. V.; Kudryavtsev, D. O.; Yakunin, I. A.
2017-10-01
Observations of the chemically peculiar star HD 27404 with the 6-m SAO RAS telescope showed a strong magnetic field with the longitudinal field component varying in a complicated way in the range of -2.5 to 1 kG. Fundamental parameters of the star (Teff = 11 300 K, log g = 3.9) were estimated by analyzing photometric indices in the Geneva and Strömgren-Crawford photometric systems. We detected weak radial velocity variations which can be due to the presence of a close stellar companion or chemical spots in the photosphere. A rapid estimate of the key chemical element abundances allows us to classify HD 27404 as a SiCr or Si+ chemically peculiar A0-B9 star.
CAN A NANOFLARE MODEL OF EXTREME-ULTRAVIOLET IRRADIANCES DESCRIBE THE HEATING OF THE SOLAR CORONA?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajfirouze, E.; Safari, H.
2012-01-10
Nanoflares, the basic units of impulsive energy release, may produce much of the solar background emission. Extrapolation of the energy frequency distribution of observed microflares, which follows a power law to lower energies, can give an estimation of the importance of nanoflares for heating the solar corona. If the power-law index is greater than 2, then the nanoflare contribution is dominant. We model a time series of extreme-ultraviolet emission radiance as random flares with a power-law exponent of the flare event distribution. The model is based on three key parameters: the flare rate, the flare duration, and the power-law exponent of the flare intensity frequency distribution. We use this model to simulate emission line radiance detected at 171 Å, observed by the Solar Terrestrial Relations Observatory/Extreme-Ultraviolet Imager and the Solar Dynamics Observatory/Atmospheric Imaging Assembly. The observed light curves are matched with simulated light curves using an Artificial Neural Network, and the parameter values are determined across the active region, quiet Sun, and coronal hole. The damping rate of nanoflares is compared with the radiative losses cooling time. The effect of background emission, data cadence, and network sensitivity on the key parameters of the model is studied. Most of the observed light curves have a power-law exponent, α, greater than the critical value 2. At these sites, nanoflare heating could be significant.
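As a rough illustration of the forward model sketched above (random flares drawn at a given rate, with power-law distributed peak intensities and a fixed duration), the following fragment generates a synthetic light curve; the flare shape (instantaneous rise, exponential decay) and every numeric value are assumptions for illustration, not values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model parameters (illustrative values, not fitted to any observation)
flare_rate = 0.05        # mean flares per second
duration = 200.0         # flare decay time [s]
alpha = 2.3              # power-law exponent of the flare intensity distribution
e_min, e_max = 1.0, 1e3  # bounds of the intensity distribution (arbitrary units)

t = np.arange(0.0, 20000.0, 10.0)    # light curve sampled at 10 s cadence
signal = np.zeros_like(t)

# Flare occurrence times drawn as a Poisson process
n_flares = rng.poisson(flare_rate * t[-1])
t0 = rng.uniform(0.0, t[-1], n_flares)

# Peak intensities from a truncated power law p(E) ~ E^-alpha (inverse CDF)
u = rng.uniform(size=n_flares)
peaks = (e_min**(1 - alpha) + u * (e_max**(1 - alpha) - e_min**(1 - alpha)))**(1 / (1 - alpha))

# Each flare: step rise at t0, exponential decay with the assumed duration
for ti, pk in zip(t0, peaks):
    mask = t >= ti
    signal[mask] += pk * np.exp(-(t[mask] - ti) / duration)

print(f"simulated {n_flares} flares; mean radiance {signal.mean():.1f}")
```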
Estimation of the dispersal distances of an aphid-borne virus in a patchy landscape
Soubeyrand, Samuel; Dallot, Sylvie; Labonne, Gérard; Chadœuf, Joël; Jacquot, Emmanuel
2018-01-01
Characterising the spatio-temporal dynamics of pathogens in natura is key to ensuring their efficient prevention and control. However, it is notoriously difficult to estimate dispersal parameters at scales that are relevant to real epidemics. Epidemiological surveys can provide informative data, but parameter estimation can be hampered when the timing of the epidemiological events is uncertain, and in the presence of interactions between disease spread, surveillance, and control. Further complications arise from imperfect detection of disease and from the huge number of data on individual hosts arising from landscape-level surveys. Here, we present a Bayesian framework that overcomes these barriers by integrating over associated uncertainties in a model explicitly combining the processes of disease dispersal, surveillance and control. Using a novel computationally efficient approach to account for patch geometry, we demonstrate that disease dispersal distances can be estimated accurately in a patchy (i.e. fragmented) landscape when disease control is ongoing. Applying this model to data for an aphid-borne virus (Plum pox virus) surveyed for 15 years in 605 orchards, we obtain the first estimate of the distribution of flight distances of infectious aphids at the landscape scale. About 50% of aphid flights terminate beyond 90 m, which implies that most infectious aphids leaving a tree land outside the bounds of a 1-ha orchard. Moreover, long-distance flights are not rare: 10% of flights exceed 1 km. By their impact on our quantitative understanding of winged aphid dispersal, these results can inform the design of management strategies for plant viruses, which are mainly aphid-borne. PMID:29708968
Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties
NASA Astrophysics Data System (ADS)
Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc
2015-12-01
In the present paper, an artificial intelligence-based approach dealing with the estimation of correlated thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence in case of correlated properties to be estimated from a rebuilt impulse response. So, the main objective of the present work was to simultaneously estimate both the thermal diffusivity and conductivity of homogeneous materials, from front-face or rear-face photothermal responses to pseudo random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. That is why the developed algorithm is computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be simultaneously estimated with very good accuracy (the generalization error ranges between 4.6% and 6.2%).
NASA Astrophysics Data System (ADS)
Song, Wanjuan; Mu, Xihan; Ruan, Gaiyan; Gao, Zhan; Li, Linyuan; Yan, Guangjian
2017-06-01
Normalized difference vegetation index (NDVI) of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for Fractional Vegetation Cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimations when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced in a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for the simulated data is less than 0.117, depending on the type of noise added to the data. In the real-data experiment, the RMSD is 0.127 for cropland, 0.075 for grassland, and 0.107 for forest. The experimental areas lack fully vegetated and non-vegetated pixels, respectively, at 1 km resolution. Consequently, a relatively large uncertainty is found when the statistical methods are used, and the RMSD ranges from 0.110 to 0.363 for the real data. The proposed method is convenient for producing NDVIv and NDVIs maps for FVC estimation on regional and global scales.
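A minimal sketch of the underlying linear mixture idea is given below: with an FVC proxy obtained from a gap-fraction relation on LAI, the two NDVI endmembers are recovered by least squares and then reused to map FVC from NDVI. It deliberately omits the paper's BRDF/directional-NDVI treatment, and all numbers (extinction coefficient, endmembers, noise) are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pixels: MODIS-like LAI and NDVI (illustrative values only)
lai = rng.uniform(0.0, 5.0, 500)
k = 0.5                                   # assumed extinction coefficient
fvc_gap = 1.0 - np.exp(-k * lai)          # gap-fraction model: FVC from LAI
ndvi_v_true, ndvi_s_true = 0.85, 0.12     # endmembers used only to fake data
ndvi = fvc_gap * ndvi_v_true + (1 - fvc_gap) * ndvi_s_true \
       + rng.normal(0, 0.02, lai.size)

# Linear mixture model NDVI = FVC*NDVIv + (1-FVC)*NDVIs, solved for the
# endmembers by least squares using the gap-fraction FVC as the regressor
A = np.column_stack([fvc_gap, 1.0 - fvc_gap])
(ndvi_v, ndvi_s), *_ = np.linalg.lstsq(A, ndvi, rcond=None)

# FVC for any pixel from its NDVI and the recovered endmembers
fvc = np.clip((ndvi - ndvi_s) / (ndvi_v - ndvi_s), 0.0, 1.0)
print(round(ndvi_v, 3), round(ndvi_s, 3), np.round(fvc[:3], 2))
```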
Estimation of Quasi-Stiffness and Propulsive Work of the Human Ankle in the Stance Phase of Walking
Shamaei, Kamran; Sawicki, Gregory S.; Dollar, Aaron M.
2013-01-01
Characterizing the quasi-stiffness and work of lower extremity joints is critical for evaluating human locomotion and designing assistive devices such as prostheses and orthoses intended to emulate the biological behavior of human legs. This work aims to establish statistical models that allow us to predict the ankle quasi-stiffness and net mechanical work for adults walking on level ground. During the stance phase of walking, the ankle joint propels the body through three distinctive phases of nearly constant stiffness known as the quasi-stiffness of each phase. Using a generic equation for the ankle moment obtained through an inverse dynamics analysis, we identify key independent parameters needed to predict ankle quasi-stiffness and propulsive work and also the functional form of each correlation. These parameters include gait speed, ankle excursion, and subject height and weight. Based on the identified form of the correlation and key variables, we applied linear regression on experimental walking data for 216 gait trials across 26 subjects (speeds from 0.75–2.63 m/s) to obtain statistical models of varying complexity. The most general forms of the statistical models include all the key parameters and have an R2 of 75% to 81% in the prediction of the ankle quasi-stiffnesses and propulsive work. The most specific models include only subject height and weight and could predict the ankle quasi-stiffnesses and work for optimal walking speed with average error of 13% to 30%. We discuss how these models provide a useful framework and foundation for designing subject- and gait-specific prosthetic and exoskeletal devices designed to emulate biological ankle function during level ground walking. PMID:23555839
Rejman, Marek
2013-01-01
The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual distribution of stroke parameters, in which stroke frequency is optimally increased to the maximum level that still enables the stabilization of stroke length, leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points: The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximum level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were identified. PMID:24149742
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
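For illustration, the sketch below implements one common sparse NLMS variant (a zero-attracting l1 penalty with a simple error-driven variable step size) on a synthetic sparse channel; the step-size rule and penalty weight are assumptions and not the specific VSS-NLMS algorithms proposed in the paper.

```python
import numpy as np

def sparse_vss_nlms(x, d, n_taps, rho=2e-5, eps=1e-6, mu_min=0.05, mu_max=1.0):
    """Zero-attracting NLMS with a simple error-driven variable step size.

    x: input signal, d: desired (received) signal, n_taps: channel length.
    This is a generic illustrative variant, not the paper's exact algorithm.
    """
    h = np.zeros(n_taps)
    p = 0.0                                     # smoothed error power
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # regressor, most recent sample first
        e = d[n] - h @ u
        p = 0.9 * p + 0.1 * e * e               # track the error power
        mu = mu_min + (mu_max - mu_min) * p / (p + 1.0)   # larger error -> larger step
        h += mu * e * u / (u @ u + eps)         # NLMS update
        h -= rho * np.sign(h)                   # zero-attracting (l1) sparsity penalty
    return h

# Quick check on a synthetic sparse channel
rng = np.random.default_rng(0)
h_true = np.zeros(32)
h_true[[3, 11, 25]] = [1.0, -0.5, 0.3]
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(sparse_vss_nlms(x, d, 32)[[3, 11, 25]], 2))
```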
Wilson, R; Abbott, J H
2018-04-01
To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system. Copyright © 2018 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Salvucci, Guido D.; Gentine, Pierre
2013-04-01
The ability to predict terrestrial evapotranspiration (E) is limited by the complexity of rate-limiting pathways as water moves through the soil, vegetation (roots, xylem, stomata), canopy air space, and the atmospheric boundary layer. The impossibility of specifying the numerous parameters required to model this process in full spatial detail has necessitated spatially upscaled models that depend on effective parameters such as the surface vapor conductance (Csurf). Csurf accounts for the biophysical and hydrological effects on diffusion through the soil and vegetation substrate. This approach, however, requires either site-specific calibration of Csurf to measured E, or further parameterization based on metrics such as leaf area, senescence state, stomatal conductance, soil texture, soil moisture, and water table depth. Here, we show that this key, rate-limiting, parameter can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and E. The relation is that the vertical variance of the relative humidity profile is less than would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. It is found to hold over a wide range of climate conditions (arid-humid) and limiting factors (soil moisture, leaf area, energy). With this relation, estimates of E and Csurf can be obtained globally from widely available meteorological measurements, many of which have been archived since the early 1900s. In conjunction with precipitation and stream flow, long-term E estimates provide insights and empirical constraints on projected accelerations of the hydrologic cycle.
Salvucci, Guido D; Gentine, Pierre
2013-04-16
The ability to predict terrestrial evapotranspiration (E) is limited by the complexity of rate-limiting pathways as water moves through the soil, vegetation (roots, xylem, stomata), canopy air space, and the atmospheric boundary layer. The impossibility of specifying the numerous parameters required to model this process in full spatial detail has necessitated spatially upscaled models that depend on effective parameters such as the surface vapor conductance (C(surf)). C(surf) accounts for the biophysical and hydrological effects on diffusion through the soil and vegetation substrate. This approach, however, requires either site-specific calibration of C(surf) to measured E, or further parameterization based on metrics such as leaf area, senescence state, stomatal conductance, soil texture, soil moisture, and water table depth. Here, we show that this key, rate-limiting, parameter can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and E. The relation is that the vertical variance of the relative humidity profile is less than would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. It is found to hold over a wide range of climate conditions (arid-humid) and limiting factors (soil moisture, leaf area, energy). With this relation, estimates of E and C(surf) can be obtained globally from widely available meteorological measurements, many of which have been archived since the early 1900s. In conjunction with precipitation and stream flow, long-term E estimates provide insights and empirical constraints on projected accelerations of the hydrologic cycle.
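A conceptual sketch of the estimation idea (choose the conductance whose predicted relative-humidity profile has the smallest diurnally averaged vertical variance) is given below; the toy profile model inside is entirely a placeholder assumption standing in for a meteorologically driven land-atmosphere model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rh_profile(c_surf, hours=np.arange(24), heights=np.array([2.0, 10.0, 50.0])):
    """Toy stand-in for a land-atmosphere model: relative humidity at several
    heights over one diurnal cycle, given a candidate surface vapor
    conductance c_surf. Every relation below is an illustrative assumption."""
    forcing = 0.5 + 0.4 * np.sin(np.pi * (hours - 6) / 12).clip(0)  # daytime demand
    e = c_surf * forcing                                            # toy evaporation
    base = 0.4 + 0.003 * heights[None, :]                           # background profile
    return base + 0.3 * e[:, None] * np.exp(-heights[None, :] / 30.0)

def diurnal_rh_variance(c_surf):
    prof = rh_profile(c_surf)
    return np.mean(np.var(prof, axis=1))   # vertical variance, averaged over the day

# The estimate: the conductance that minimizes the vertical RH variance
res = minimize_scalar(diurnal_rh_variance, bounds=(1e-3, 1.0), method="bounded")
print("estimated surface vapor conductance (toy units):", round(res.x, 3))
```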
NASA Technical Reports Server (NTRS)
2008-01-01
Calculating an accurate nutation time constant (NTC), or nutation rate of growth, for a spinning upper stage is important for ensuring mission success. Spacecraft nutation, or wobble, is caused by energy dissipation anywhere in the system. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and, if it is in a state of resonance, the NTC can become short enough to violate mission constraints. The Spinning Slosh Test Rig (SSTR) is a forced-motion spin table where fluid dynamic effects in full-scale fuel tanks can be tested in order to obtain key parameters used to calculate the NTC. We accomplish this by independently varying nutation frequency versus the spin rate and measuring force and torque responses on the tank. This method was used to predict parameters for the Genesis, Contour, and Stereo missions, whose tanks were mounted outboard from the spin axis. These parameters are incorporated into a mathematical model that uses mechanical analogs, such as pendulums and rotors, to simulate the force and torque resonances associated with fluid slosh.
Intrinsic physical conditions and structure of relativistic jets in active galactic nuclei
NASA Astrophysics Data System (ADS)
Nokhrina, E. E.; Beskin, V. S.; Kovalev, Y. Y.; Zheltoukhov, A. A.
2015-03-01
The analysis of the frequency dependence of the observed shift of the cores of relativistic jets in active galactic nuclei (AGNs) allows us to evaluate the number density of the outflowing plasma ne and, hence, the multiplicity parameter λ = ne/nGJ, where nGJ is the Goldreich-Julian number density. We have obtained the median value λmed = 3 × 10^13 and the median value of the Michel magnetization parameter σM,med = 8 from an analysis of 97 sources. Since the magnetization parameter can be interpreted as the maximum possible Lorentz factor Γ of the bulk motion which can be obtained for relativistic magnetohydrodynamic (MHD) flow, this estimate is in agreement with the observed superluminal motion of bright features in AGN jets. Moreover, knowing these key parameters, one can determine the transverse structure of the flow. We show that the poloidal magnetic field and particle number density are much larger in the centre of the jet than near the jet boundary. The MHD model can also explain the typical observed level of jet acceleration. Finally, causal connectivity of strongly collimated jets is discussed.
Adverse surgical outcomes in screen-detected ductal carcinoma in situ of the breast.
Thomas, Jeremy; Hanby, Andrew; Pinder, Sarah E; Ball, Graham; Lawrence, Gill; Maxwell, Anthony; Wallis, Matthew; Evans, Andrew; Dobson, Hilary; Clements, Karen; Thompson, Alastair
2014-07-01
The Sloane Project is the largest prospective audit of ductal carcinoma in situ (DCIS) worldwide, with over 12,000 patients registered between 2003 and 2012, accounting for 50% of screen-detected DCIS diagnosed in the United Kingdom (UK) over the period of accrual. Complete multidisciplinary data from 8313 patients with screen-detected DCIS were analysed for surgical outcome in relation to key radiological and pathological parameters for the cohort and also by hospital of treatment. Adverse surgical outcomes were defined as either failed breast conservation surgery (BCS) or mastectomy for small lesions (<20mm) (MFSL). Inter-hospital variation was analysed by grouping hospitals into high, medium and low frequency subgroups for these two adverse outcomes. Patients with failed BCS or MFSL together accounted for 49% of all mastectomies. Of 6633 patients embarking on BCS, 799 (12.0%) required mastectomy. MFSL accounted for 510 (21%) of 2479 mastectomy patients. Failed BCS was associated with significant radiological under-estimation of disease extent and MFSL significant radiological over-estimation of disease extent. There was considerable and significant inter-hospital variation in failed BCS (range 3-32%) and MFSL (0-60%) of a hospital's BCS/mastectomy workload respectively. Conversely, there were no differences between the key radiological and pathological parameters in high, medium and low frequency adverse-outcome hospitals. This evidence suggests significant practice variation, not patient factors, is responsible for these adverse surgical outcomes in screen-detected DCIS. The Sloane Project provides an evidence base for future practice benchmarking. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, E.; Madigan, M.
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler, a method for iteratively adapting tolerance levels, local covariance estimate using scikit-learn's KDTree, modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel, output and restart files are backed up every iteration, user defined metric and simulation methods, a module for specifying heterogeneous parameter priors including non-standard prior PDFs, a module for specifying a constant, linear, log or exponential tolerance level, well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
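To make the "likelihood-free" idea concrete, a minimal rejection-ABC sketch is shown below; astroABC itself implements the more efficient sequential Monte Carlo variant with adaptive tolerances, so this is only a conceptual illustration with made-up data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed "data": the summary statistic of a sample whose mean we infer
theta_true = 1.3
data_obs = rng.normal(theta_true, 1.0, 100)
s_obs = data_obs.mean()

def simulate(theta):
    """Forward model: simulate data for a parameter value, return its summary."""
    return rng.normal(theta, 1.0, 100).mean()

# Basic rejection ABC: draw from the prior, simulate, keep draws whose summary
# lies within a tolerance of the observed summary. An SMC sampler such as
# astroABC instead shrinks the tolerance over weighted particle populations.
prior_draws = rng.uniform(-5.0, 5.0, 20000)
tol = 0.1
accepted = np.array([t for t in prior_draws if abs(simulate(t) - s_obs) < tol])
print(f"posterior mean ~ {accepted.mean():.2f}, n accepted = {accepted.size}")
```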
NASA Astrophysics Data System (ADS)
Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua
2018-03-01
How to narrow the gap of security between theory and practice has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on a practical continuous-variable quantum key distribution (CV-QKD) system. Clock jitter is a random noise that is always present in the clock synchronization of a practical CV-QKD system; it may compromise system security through its impact on data sampling and parameter estimation. In particular, the practical security of CV-QKD with different levels of clock jitter against collective attacks is analyzed theoretically for different repetition frequencies; the numerical simulations indicate that clock jitter has a greater impact in a high-speed scenario. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.
A physically based analytical model of flood frequency curves
NASA Astrophysics Data System (ADS)
Basso, S.; Schirmer, M.; Botter, G.
2016-09-01
Predicting magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performances do not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data scarce regions of the world.
Applications of bioenergetics models to fish ecology and management: where do we go from here?
Hansen, Michael J.; Boisclair, Daniel; Brandt, Stephen B.; Hewett, Steven W.; Kitchell, James F.; Lucas, Martyn C.; Ney, John J.
1993-01-01
Papers and panel discussions given during a 1992 symposium on bioenergetics models are summarized. Bioenergetics models have been applied to a variety of research and management questions related to fish stocks, populations, food webs, and ecosystems. Applications include estimates of the intensity and dynamics of predator-prey interactions, nutrient cycling within aquatic food webs of varying trophic structure, and food requirements of single animals, whole populations, and communities of fishes. As tools in food web and ecosystem applications, bioenergetics models have been used to compare forage consumption by salmonid predators across the Laurentian Great Lakes for single populations and whole communities, and to estimate the growth potential of pelagic predators in Chesapeake Bay and Lake Ontario. Some critics say that bioenergetics models lack sufficient detail to produce reliable results in such field applications, whereas others say that the models are too complex to be useful tools for fishery managers. Nevertheless, bioenergetics models have achieved notable predictive successes. Improved estimates are needed for model parameters such as metabolic costs of activity, and more complete studies are needed of the bioenergetics of larval and juvenile fishes. Future research on bioenergetics should include laboratory and field measurements of key model parameters such as weight-dependent maximum consumption, respiration and activity, and thermal habitats actually occupied by fish. Future applications of bioenergetics models to fish populations also depend on accurate estimates of population sizes and survival rates.
NASA Astrophysics Data System (ADS)
Wang, S. G.; Li, X.; Han, X. J.; Jin, R.
2010-06-01
Radar remote sensing has demonstrated its applicability to the retrieval of basin-scale soil moisture. The mechanism of radar backscattering from soils is complicated and strongly influenced by surface roughness. Furthermore, retrieval of soil moisture using AIEM-like models is a classic example of an underdetermined problem, owing to the lack of credible known soil roughness distributions at a regional scale. Characterization of this roughness is therefore crucial for an accurate derivation of soil moisture based on backscattering models. This study aims to directly obtain surface roughness information along with soil moisture from multi-angular ASAR images. The method first used a semi-empirical relationship that connects the roughness slope (Zs) and the difference in backscattering coefficient (Δσ) from ASAR data at different incidence angles, in combination with an optimal calibration form consisting of two roughness parameters (the standard deviation of surface height and the correlation length), to estimate the roughness parameters. The deduced surface roughness was then used in the AIEM model for the retrieval of soil moisture. An evaluation of the proposed method was performed at a grassland site in the middle reaches of the Heihe River Basin, where the Watershed Allied Telemetry Experimental Research (WATER) experiment took place. The evaluation demonstrated that the method is feasible for achieving reliable estimates of soil water content. The key challenge to surface soil moisture retrieval is the presence of vegetation cover, which significantly impacts the estimates of surface roughness and soil moisture.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
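A minimal sketch of the hybrid global-plus-local idea follows: a random candidate population (standing in here for the genetic algorithm) seeds truncated-Newton (TNC) local searches on a least-squares objective with a weak prior-information penalty. The toy forward model and all numeric values are assumptions, not the paper's groundwater model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Toy forward model standing in for a groundwater flow simulator: heads
# depend nonlinearly on two log-transmissivity-like parameters (assumed form).
x_obs = np.linspace(0.0, 1.0, 20)
def heads(params):
    t1, t2 = params
    return 10.0 - x_obs / np.exp(t1) - x_obs**2 / np.exp(t2)

true_params = np.array([0.5, -0.3])
h_obs = heads(true_params) + rng.normal(0, 0.01, x_obs.size)

# Objective: misfit to heads plus a weak penalty toward prior information
prior = np.array([0.0, 0.0])
def objective(p):
    return np.sum((heads(p) - h_obs) ** 2) + 0.01 * np.sum((p - prior) ** 2)

# Global stage: a random population; its best members seed local TNC searches
population = rng.uniform(-2.0, 2.0, size=(50, 2))
best_seeds = sorted(population, key=objective)[:5]
solutions = [minimize(objective, s, method="TNC") for s in best_seeds]
best = min(solutions, key=lambda r: r.fun)
print("estimated parameters:", np.round(best.x, 2))
```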
Estimation of the viscosities of liquid binary alloys
NASA Astrophysics Data System (ADS)
Wu, Min; Su, Xiang-Yu
2018-01-01
As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials science as a key parameter for quantitatively understanding the fluid transport process and reaction kinetics in metallurgical process design. Experimental and theoretical studies of liquid metals are problematic. Today, there are many empirical and semi-empirical models available with which to evaluate the viscosity of liquid metals and alloys. However, the mixing-energy parameter in these models is not easily determined, and most predictive models have been poorly applied. In the present study, a new thermodynamic parameter ΔG is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electronegativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that the liquid alloy viscosity predicted using the proposed model is closely in line with the experimental values. In addition, if the component radius difference is greater than 0.03 nm at a certain temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter ΔG also facilitates the study of other physical properties of liquid metals.
Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways
NASA Astrophysics Data System (ADS)
Marangoni, G.; Tavoni, M.; Bosetti, V.; Borgonovo, E.; Capros, P.; Fricko, O.; Gernaat, D. E. H. J.; Guivarch, C.; Havlik, P.; Huppmann, D.; Johnson, N.; Karkatsoulis, P.; Keppo, I.; Krey, V.; Ó Broin, E.; Price, J.; van Vuuren, D. P.
2017-01-01
Scenarios showing future greenhouse gas emissions are needed to estimate climate impacts and the mitigation efforts required for climate stabilization. Recently, the Shared Socioeconomic Pathways (SSPs) have been introduced to describe alternative social, economic and technical narratives, spanning a wide range of plausible futures in terms of challenges to mitigation and adaptation. Thus far the key drivers of the uncertainty in emissions projections have not been robustly disentangled. Here we assess the sensitivities of future CO2 emissions to key drivers characterizing the SSPs. We use six state-of-the-art integrated assessment models with different structural characteristics, and study the impact of five families of parameters, related to population, income, energy efficiency, fossil fuel availability, and low-carbon energy technology development. A recently developed sensitivity analysis algorithm allows us to parsimoniously compute both the direct and interaction effects of each of these drivers on cumulative emissions. The study reveals that the SSP assumptions about energy intensity and economic growth are the most important determinants of future CO2 emissions from energy combustion, both with and without a climate policy. Interaction terms between parameters are shown to be important determinants of the total sensitivities.
Khan, Md Mohib-Ul-Haque; Jain, Siddharth; Vaezi, Mahdi; Kumar, Amit
2016-02-01
Economic competitiveness is one of the key factors in making decisions towards the development of waste conversion facilities and devising a sustainable waste management strategy. The goal of this study is to develop a framework, as well as to develop and demonstrate a comprehensive techno-economic model to help county and municipal decision makers in establishing waste conversion facilities. The user-friendly data-intensive model, called the FUNdamental ENgineering PrinciplEs-based ModeL for Estimation of Cost of Energy and Fuels from MSW (FUNNEL-Cost-MSW), compares nine different waste management scenarios, including landfilling and composting, in terms of economic parameters such as gate fees and return on investment. In addition, a geographic information system (GIS) model was developed to determine suitable locations for waste conversion facilities and landfill sites based on integration of environmental, social, and economic factors. Finally, a case study on Parkland County and its surrounding counties in the province of Alberta, Canada, was conducted and a sensitivity analysis was performed to assess the influence of the key technical and economic parameters on the calculated results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Investigating the Martian Ionospheric Conductivity Using MAVEN Key Parameter Data
NASA Astrophysics Data System (ADS)
Aleryani, O.; Raftery, C. L.; Fillingim, M. O.; Fogle, A. L.; Dunn, P.; McFadden, J. P.; Connerney, J. E. P.; Mahaffy, P. R.; Ergun, R. E.; Andersson, L.
2015-12-01
Since the Viking orbiters and landers in 1976, the Martian atmospheric composition has scarcely been investigated. New data from the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, launched in 2013, allow for a thorough study of the electrically conductive nature of the Martian ionosphere. Determinations of the electrical conductivity will be made using in-situ atmospheric and ionospheric measurements, rather than scientific models, for the first time. The objective of this project is to calculate the conductivity of the Martian atmosphere, whenever possible, throughout the trajectory of the MAVEN spacecraft. MAVEN instrumentation used includes the Neutral Gas and Ion Mass Spectrometer (NGIMS) for neutral species density, the SupraThermal And Thermal Ion Composition (STATIC) instrument for ion composition, temperature and density, the Magnetometer (MAG) for the magnetic field strength, and the Langmuir Probe and Waves (LPW) instrument for electron temperature and density. MAVEN key parameter data are used for these calculations. We compare our results with previous, model-based estimates of the conductivity. These results will allow us to quantify the flow of atmospheric electric currents, which can be analyzed further for a deeper understanding of Martian ionospheric electrodynamics, bringing us closer to understanding the mystery of the loss of the Martian atmosphere.
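For context, the sketch below evaluates the standard textbook Pedersen and Hall conductivity formulas from plasma density, magnetic field strength, and electron/ion-neutral collision frequencies; the collision frequencies, the assumed O2+ ion, and the example numbers are illustrative inputs rather than MAVEN key parameter values.

```python
import numpy as np

E_CHARGE = 1.602e-19     # C
M_E = 9.109e-31          # kg
AMU = 1.661e-27          # kg

def pedersen_hall(n_e, B, nu_en, nu_in, ion_mass_amu=32.0):
    """Pedersen and Hall conductivities [S/m] from standard textbook formulas.

    n_e: electron (= ion) density [m^-3], B: magnetic field [T],
    nu_en / nu_in: electron- and ion-neutral collision frequencies [s^-1]
    (taken here as given; in practice they follow from neutral densities
    such as NGIMS measurements). O2+ ions are assumed by default.
    """
    m_i = ion_mass_amu * AMU
    omega_e = E_CHARGE * B / M_E       # electron gyrofrequency
    omega_i = E_CHARGE * B / m_i       # ion gyrofrequency
    pref = E_CHARGE * n_e / B
    sigma_p = pref * (nu_en * omega_e / (omega_e**2 + nu_en**2)
                      + nu_in * omega_i / (omega_i**2 + nu_in**2))
    sigma_h = pref * (omega_e**2 / (omega_e**2 + nu_en**2)
                      - omega_i**2 / (omega_i**2 + nu_in**2))
    return sigma_p, sigma_h

# Illustrative values only (not MAVEN measurements)
print(pedersen_hall(n_e=1e10, B=20e-9, nu_en=1e3, nu_in=1.0))
```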
A clinically parameterized mathematical model of Shigella immunity to inform vaccine design
Wahid, Rezwanul; Toapanta, Franklin R.; Simon, Jakub K.; Sztein, Marcelo B.
2018-01-01
We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine. PMID:29304144
A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.
Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B
2018-01-01
We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacteria that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate that Shigella migrates into the lamina propria or epithelium, 2) the rate that memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
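As a small illustration of the Latin hypercube step used in the fitting, the sketch below draws candidate parameter sets over log-uniform ranges with SciPy's qmc module; the parameter names and ranges are invented placeholders, not the priors used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of candidate parameter sets to seed the model fits;
# names and ranges below are illustrative, not the paper's actual priors.
param_names = ["migration_rate", "bm_to_asc_rate", "ab_production", "bm_capacity"]
lower = np.array([1e-3, 1e-4, 1e-2, 1e2])
upper = np.array([1e-1, 1e-2, 1e+1, 1e5])

sampler = qmc.LatinHypercube(d=len(param_names), seed=5)
unit_samples = sampler.random(n=1000)              # points in [0, 1]^d
# Scale to the prior ranges (log-uniform here, a common choice for rates)
samples = 10 ** (np.log10(lower)
                 + unit_samples * (np.log10(upper) - np.log10(lower)))

# Each row is one candidate parameterization to run through the immune model
# and score against the trial data (e.g., by sum of squared residuals).
print(dict(zip(param_names, np.round(samples[0], 4))))
```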
NASA Astrophysics Data System (ADS)
Alkharji, Mohammed N.
Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-Matrix and Linear Slip effective-medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem has an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local minimum solutions. Prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton, method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first-group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The results showed that the hybrid algorithm successfully predicted the fracture parametrization, geometry, and fluid content within the modeled reservoir. The method was also applied to an elastic tensor extracted from the Weyburn field in Saskatchewan, Canada. The solution suggested no presence of fractures but only a VTI system caused by the shale layering in the targeted reservoir; this interpretation is supported by other Weyburn field data.
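The two-stage search can be illustrated with the small sketch below: the first-group parameters are enumerated on a coarse grid and, for each candidate, the second-group parameters are refined by a Gauss-Newton-type least-squares fit. The "forward model" here is a made-up stand-in for the T-Matrix/Linear Slip calculation, and SciPy's bounded trust-region solver replaces a plain Gauss-Newton iteration.

```python
import itertools
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Toy forward model standing in for the effective-medium calculation: maps
# fracture parameters to a flattened "elastic tensor" (purely illustrative).
def forward(group1, group2):
    density, azimuth = group1          # enumerated parameters (no prior info)
    dn, dt = group2                    # parameters refined in the second stage
    basis = np.array([1.0, np.cos(azimuth), np.sin(azimuth), density, 1.0, 0.5])
    return basis * (1.0 - dn) + np.roll(basis, 1) * dt

c_obs = forward((0.08, 0.6), (0.15, 0.05)) + rng.normal(0, 1e-3, 6)

# Stage 1: enumerate the group-1 parameters over a coarse grid
densities = np.linspace(0.02, 0.2, 10)
azimuths = np.linspace(0.0, np.pi, 12)

best = None
for g1 in itertools.product(densities, azimuths):
    # Stage 2: least-squares refinement of the group-2 parameters
    res = least_squares(lambda g2: forward(g1, g2) - c_obs, x0=[0.1, 0.1],
                        bounds=([0.0, 0.0], [1.0, 1.0]))
    if best is None or res.cost < best[0]:
        best = (res.cost, g1, res.x)

cost, g1_hat, g2_hat = best
print("group 1:", np.round(g1_hat, 2), "group 2:", np.round(g2_hat, 2))
```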
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
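A generic sketch of EKF-based parameter estimation is given below: the unknown parameters are treated as a slowly drifting random-walk state and updated against a nonlinear measurement model (here a toy function standing in for the robot kinematics), with a numerical Jacobian. All values are illustrative assumptions.

```python
import numpy as np

def measurement(theta):
    """Toy nonlinear measurement model standing in for the kinematics model:
    maps model parameters (e.g., stiffness-like terms) to observed tip data."""
    k1, k2 = theta
    return np.array([np.sin(k1) + 0.5 * k2, k1 * k2, np.cos(k2)])

def numerical_jacobian(f, x, eps=1e-6):
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

rng = np.random.default_rng(7)
theta_true = np.array([0.8, 1.4])

# EKF with the parameters as the (slowly varying, random-walk) state
theta = np.array([0.3, 0.5])            # initial guess
P = np.eye(2) * 1.0                     # state covariance
Q = np.eye(2) * 1e-6                    # small process noise: parameters drift slowly
R = np.eye(3) * 1e-3                    # measurement noise covariance

for _ in range(200):
    z = measurement(theta_true) + rng.normal(0, np.sqrt(1e-3), 3)
    P = P + Q                                       # predict (random-walk state)
    H = numerical_jacobian(measurement, theta)      # linearize measurement model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    theta = theta + K @ (z - measurement(theta))    # update
    P = (np.eye(2) - K @ H) @ P

print("estimated parameters:", np.round(theta, 3))
```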
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
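The two-step diagnostic can be illustrated compactly: from a sensitivity (Jacobian) matrix one can compute the parameter correlation matrix, and appending rows for advective-transport observations shows how the correlation drops. The matrices below are invented to demonstrate the calculation, not values from the Cape Cod model.

```python
import numpy as np

# Illustrative sensitivity (Jacobian) matrices: rows = observations,
# columns = parameters (e.g., two hydraulic conductivities). Values are
# made up to show the diagnostic only.
J_heads = np.array([[1.0, 0.95],
                    [0.8, 0.78],
                    [1.2, 1.15]])          # head observations: nearly collinear columns
J_advect = np.array([[0.9, -0.4],
                     [0.3, 1.1]])          # advective-transport observations

def param_correlation(J):
    """Parameter correlation matrix from the (unweighted) normal equations."""
    cov = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

print("heads only:\n", np.round(param_correlation(J_heads), 3))
print("heads + advective transport:\n",
      np.round(param_correlation(np.vstack([J_heads, J_advect])), 3))
```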
NASA Astrophysics Data System (ADS)
Harrison, Benjamin; Sandiford, Mike; McLaren, Sandra
2016-04-01
Supervised machine learning algorithms attempt to build a predictive model using empirical data. Their aim is to take a known set of input data along with known responses to the data, and adaptively train a model to generate predictions for new data inputs. A key attraction of their use is the ability to perform as function approximators where the definition of an explicit relationship between variables is infeasible. We present a novel means of estimating thermal conductivity using a supervised self-organising map algorithm, trained on about 150 thermal conductivity measurements, and using a suite of five electric logs common to 14 boreholes. A key motivation of the study was to supplement the small number of direct measurements of thermal conductivity with the decades of borehole data acquired in the Gippsland Basin to produce more confident calculations of surface heat flow. A previous attempt to generate estimates from well-log data in the Gippsland Basin using classic petrophysical log interpretation methods was able to produce reasonable synthetic thermal conductivity logs for only four boreholes. The current study has extended this to a further ten boreholes. Interesting outcomes from the study are: the method appears stable at very low sample sizes (< ~100); the SOM permits quantitative analysis of essentially qualitative uncalibrated well-log data; and the method achieves moderate success at prediction with minimal effort spent tuning the algorithm's parameters.
Economics of movable interior blankets for greenhouses
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, G.B.; Fohner, G.R.; Albright, L.D.
1981-01-01
A model for evaluating the economic impact of investment in a movable interior blanket was formulated. The method of analysis was net present value (NPV), in which the discounted, after-tax cash flow of costs and benefits was computed for the useful life of the system. An added feature was a random number component which permitted any or all of the input parameters to be varied within a specified range. Results from 100 computer runs indicated that all of the NPV estimates generated were positive, showing that the investment was profitable. However, there was a wide range of NPV estimates, from $16.00/m² to $86.40/m², with a median value of $49.34/m². Key variables allowed to range in the analysis were: (1) the cost of fuel before the blanket is installed; (2) the percent fuel savings resulting from use of the blanket; (3) the annual real increase in the cost of fuel; and (4) the change in the annual value of the crop. The wide range in NPV estimates indicates the difficulty in making general recommendations regarding the economic feasibility of the investment when uncertainty exists as to the correct values for key variables in commercial settings. The results also point out needed research into the effect of the blanket on the crop, and on performance characteristics of the blanket.
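The ranged-input NPV calculation can be sketched as a small Monte Carlo exercise; the cash-flow structure, parameter ranges, and cost figures below are illustrative placeholders, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def npv_per_m2(fuel_cost, fuel_savings, fuel_escalation, crop_value_change,
               capital_cost=20.0, life=10, discount=0.10):
    """Discounted cash flow per square metre (illustrative structure only)."""
    npv = -capital_cost
    for year in range(1, life + 1):
        annual_fuel = fuel_cost * (1.0 + fuel_escalation) ** year
        benefit = annual_fuel * fuel_savings + crop_value_change
        npv += benefit / (1.0 + discount) ** year
    return npv

# Each key input is drawn from a plausible range, as in the ranged-input runs.
runs = [npv_per_m2(fuel_cost=rng.uniform(8, 15),
                   fuel_savings=rng.uniform(0.25, 0.50),
                   fuel_escalation=rng.uniform(0.0, 0.05),
                   crop_value_change=rng.uniform(-1.0, 1.0))
        for _ in range(100)]
print(f"NPV range: {min(runs):.2f} to {max(runs):.2f}, median {np.median(runs):.2f} $/m^2")
```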
A design space exploration for control of Critical Quality Attributes of mAb.
Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu
2016-10-15
A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual CQA models matching its description. The R², Q², Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and more accurate DSps were created for each scenario, establishing critical functional relationships between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. The DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.
Semertzidou, P; Piliposian, G T; Appleby, P G
2016-08-01
The residence time of (210)Pb created in the atmosphere by the decay of gaseous (222)Rn is a key parameter controlling its distribution and fallout onto the landscape. These in turn are key parameters governing the use of this natural radionuclide for dating and interpreting environmental records stored in natural archives such as lake sediments. One of the principal methods for estimating the atmospheric residence time is through measurements of the activities of the daughter radionuclides (210)Bi and (210)Po, and in particular the (210)Bi/(210)Pb and (210)Po/(210)Pb activity ratios. Calculations used in early empirical studies assumed that these were governed by a simple series of equilibrium equations. This approach does, however, have two failings: it takes no account of the effect of global circulation on spatial variations in the activity ratios, and no allowance is made for the impact of transport processes across the tropopause. This paper presents a simple model for calculating the distributions of (210)Pb, (210)Bi and (210)Po at northern mid-latitudes (30°-65°N), a region containing almost all the available empirical data. By comparing modelled (210)Bi/(210)Pb activity ratios with empirical data a best estimate for the tropospheric residence time of around 10 days is obtained. This is significantly longer than earlier estimates of between 4 and 7 days. The process whereby (210)Pb is transported into the stratosphere when tropospheric concentrations are high and returned from it when they are low significantly increases the effective residence time in the atmosphere as a whole. The effect of this is to significantly enhance the long range transport of (210)Pb from its source locations. The impact is illustrated by calculations showing the distribution of (210)Pb fallout versus longitude at northern mid-latitudes. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
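A minimal linear-Gaussian sketch of the underdetermined estimation problem is given below; the sensitivity matrix, covariances, and the SVD-based tuner transformation are illustrative stand-ins (the paper selects the tuner to minimize the theoretical mean squared estimation error, which this sketch does not do).

```python
import numpy as np

rng = np.random.default_rng(1)
n_health, n_sense = 8, 4                       # more health parameters than sensors
H = rng.standard_normal((n_sense, n_health))   # sensitivity of sensed outputs to health parameters
P = np.diag(rng.uniform(0.5, 2.0, n_health))   # prior covariance of health-parameter deltas
R = 0.01 * np.eye(n_sense)                     # measurement-noise covariance

x_true = rng.multivariate_normal(np.zeros(n_health), P)
y = H @ x_true + rng.multivariate_normal(np.zeros(n_sense), R)

# Maximum a posteriori estimate of the full, underdetermined health vector.
x_map = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) @ y

# Reduced-order tuner q = V x, with dim(q) equal to the number of sensors.
# V is built from an SVD here purely for illustration; the paper instead selects
# V to minimize the theoretical mean squared estimation error.
V = np.linalg.svd(H @ P, full_matrices=False)[2]   # shape (n_sense, n_health)
Hq, Pq = H @ V.T, V @ P @ V.T
q_hat = Pq @ Hq.T @ np.linalg.inv(Hq @ Pq @ Hq.T + R) @ y
x_from_tuner = V.T @ q_hat                         # map tuner estimate back to health space

print(np.linalg.norm(x_true - x_map), np.linalg.norm(x_true - x_from_tuner))
```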
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
NASA Astrophysics Data System (ADS)
Su, H.; Yan, X. H.
2017-12-01
Subsurface thermal structure of the global ocean is a key factor that reflects the impact of global climate variability and change. Accurately determining and describing the global subsurface and deeper ocean thermal structure from satellite measurements is becoming even more important for understanding the ocean interior anomaly and dynamic processes during recent global warming and hiatus. It is essential but challenging to determine the extent to which such surface remote sensing observations can be used to develop information about the global ocean interior. This study proposed a Support Vector Regression (SVR) method to estimate Subsurface Temperature Anomaly (STA) in the global ocean. The SVR model can accurately estimate the global STA in the upper 1000 m from a suite of satellite remote sensing observations of sea surface parameters (including Sea Surface Height Anomaly (SSHA), Sea Surface Temperature Anomaly (SSTA), Sea Surface Salinity Anomaly (SSSA) and Sea Surface Wind Anomaly (SSWA)) with in situ Argo data for training and testing at different depth levels. Here, we employed the MSE and R2 to assess SVR performance on the STA estimation. The results from the SVR model were validated for accuracy and reliability using the worldwide Argo STA data. The average MSE and R2 of the 15 levels are 0.0090 / 0.0086 / 0.0087 and 0.443 / 0.457 / 0.485 for 2-attributes (SSHA, SSTA) / 3-attributes (SSHA, SSTA, SSSA) / 4-attributes (SSHA, SSTA, SSSA, SSWA) SVR, respectively. The estimation accuracy was improved by including SSSA and SSWA in the SVR input (MSE decreased by 0.4% / 0.3% and R2 increased by 1.4% / 4.2% on average). However, the estimation accuracy gradually decreased with increasing depth below 500 m. The results showed that SSSA and SSWA, in addition to SSTA and SSHA, are useful parameters that can help estimate the subsurface thermal structure, as well as improve the STA estimation accuracy. In the future, additional useful sea surface parameters could be identified from satellite remote sensing as input attributes to further improve the machine-learning STA estimation accuracy. This study provides a helpful technique for studying, from satellite observations at the global scale, thermal variability in the ocean interior, which has played an important role in the recent global warming and hiatus.
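A minimal sketch of the regression step, using scikit-learn's SVR on synthetic stand-in data for the four surface attributes; the attribute construction, hyperparameters, and the single depth level shown are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Stand-in arrays: one row per grid point / month, columns = surface attributes.
# In the study these would come from satellite SSHA, SSTA, SSSA, SSWA; targets are Argo STA.
n = 2000
X = rng.standard_normal((n, 4))                    # SSHA, SSTA, SSSA, SSWA (synthetic here)
sta = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.standard_normal(n)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = sta[:1500], sta[1500:]

# One SVR per depth level would be trained in practice; this shows a single level.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(mean_squared_error(y_test, pred), r2_score(y_test, pred))
```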
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long-term. In industrialized countries the number of such contaminated sites is so high that a ranking from most to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the designing of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model set-ups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
Model selection as a science driver for dark energy surveys
NASA Astrophysics Data System (ADS)
Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin
2006-07-01
A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
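The Bayes-factor idea can be illustrated with a one-parameter toy comparison between a fixed-w model and a free-w model; the Gaussian likelihood, prior range, and data below are hypothetical and stand in for the actual survey forecasting machinery.

```python
import numpy as np

# Stand-in "measurements of w" generated under the simpler model (w = -1).
rng = np.random.default_rng(2)
w_true, sigma = -1.0, 0.05
data = rng.normal(w_true, sigma, size=30)

def log_likelihood(w):
    return (-0.5 * np.sum((data - w) ** 2) / sigma**2
            - len(data) * np.log(sigma * np.sqrt(2 * np.pi)))

# Model 0: w fixed at -1 (no free parameter); its evidence is the likelihood there.
log_Z0 = log_likelihood(-1.0)

# Model 1: w free, flat prior on [-2, -0.5]; evidence by direct quadrature.
w_grid = np.linspace(-2.0, -0.5, 2001)
dw = w_grid[1] - w_grid[0]
prior = 1.0 / (w_grid[-1] - w_grid[0])
logL = np.array([log_likelihood(w) for w in w_grid])
log_Z1 = np.log(np.sum(np.exp(logL - logL.max()) * prior) * dw) + logL.max()

print("ln Bayes factor (M0 vs M1):", log_Z0 - log_Z1)   # > 0 favours the simpler model
```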
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
Empirical estimation of school siting parameter towards improving children's safety
NASA Astrophysics Data System (ADS)
Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.
2014-02-01
Distance from school to home is a key determinant in ensuring the safety of children. School siting parameters are set to make sure that a particular school is located in a safe environment. They are issued by the Department of Town and Country Planning Malaysia (DTCP), and the latest review was in June 2012. These school siting parameters are crucially important as they can affect safety, school reputation, and the perceptions of pupils and parents. There have been many studies reviewing school siting parameters, since these change in conjunction with an ever-changing world. In this study, the focus is the impact of school siting parameters on low-income people living in urban areas, specifically in Johor Bahru, Malaysia. To achieve this, the study uses two methods, on-site and off-site. The on-site method is to administer questionnaires, and the off-site method is to use a Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS) to analyse the questionnaire results. The output is a map of suitable safe distances from school to home. The results of this study will be useful to people with low income as their children tend to walk to school rather than use transportation.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
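A minimal sketch of the approach for a single first-order mode: a recursive Fourier transform accumulates the frequency-domain data sample by sample, and the frequency-domain equation error is solved by least squares. The simulated system, input, and analysis frequencies are illustrative, not the F-18 application.

```python
import numpy as np

dt = 0.02
t = np.arange(0, 12, dt)
omega = 2 * np.pi * np.linspace(0.1, 2.0, 20)          # rad/s analysis frequencies

# Simulated first-order system x_dot = a*x + b*u (stand-in for one aircraft mode),
# driven by a doublet-like input that ends early so the state decays to ~0.
a_true, b_true = -2.0, 3.0
u = np.where(t < 3, 1.0, np.where(t < 6, -1.0, 0.0))
x = np.zeros_like(t)
for k in range(1, len(t)):
    x[k] = x[k - 1] + dt * (a_true * x[k - 1] + b_true * u[k - 1])

# Recursive Fourier transform: after each sample, X(w) += x[k]*exp(-j*w*t[k])*dt.
X = np.zeros(len(omega), dtype=complex)
U = np.zeros(len(omega), dtype=complex)
for k in range(len(t)):
    phasor = np.exp(-1j * omega * t[k]) * dt
    X += x[k] * phasor
    U += u[k] * phasor

# Equation error in the frequency domain: jw*X = a*X + b*U, solved by least squares
# after stacking real and imaginary parts.
A = np.column_stack([X, U])
rhs = 1j * omega * X
theta, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([rhs.real, rhs.imag]), rcond=None)
print("estimated a, b:", theta)                        # approximately -2.0, 3.0
```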
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Discrete Event Simulation Modeling and Analysis of Key Leader Engagements
2012-06-01
to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader... HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component... whether the local Green player has resource critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Zhang, Hang; Maloney, Laurence T.
2012-01-01
In decision from experience, the source of probability information affects how probability is distorted in the decision task. Understanding how and why probability is distorted is a key issue in understanding the peculiar character of experience-based decision. We consider how probability information is used not just in decision-making but also in a wide variety of cognitive, perceptual, and motor tasks. Very similar patterns of distortion of probability/frequency information have been found in visual frequency estimation, frequency estimation based on memory, signal detection theory, and in the use of probability information in decision-making under risk and uncertainty. We show that distortion of probability in all cases is well captured as linear transformations of the log odds of frequency and/or probability, a model with a slope parameter, and an intercept parameter. We then consider how task and experience influence these two parameters and the resulting distortion of probability. We review how the probability distortions change in systematic ways with task and report three experiments on frequency distortion where the distortions change systematically in the same task. We found that the slope of frequency distortions decreases with the sample size, which is echoed by findings in decision from experience. We review previous models of the representation of uncertainty and find that none can account for the empirical findings. PMID:22294978
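The linear-in-log-odds distortion described above can be written down directly; the slope value and crossover-point parameterization below are illustrative choices, not fitted values from the experiments.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def distorted_probability(p, slope, crossover=0.37):
    """Linear transformation in log-odds: lo(w(p)) = slope*lo(p) + intercept.

    The intercept is expressed through a crossover point p0 where w(p0) = p0,
    a common re-parameterization; both the slope and p0 here are hypothetical.
    """
    lo = slope * log_odds(p) + (1.0 - slope) * log_odds(crossover)
    return 1.0 / (1.0 + np.exp(-lo))

p = np.linspace(0.01, 0.99, 9)
print(distorted_probability(p, slope=0.6))   # slope < 1: small p overweighted, large p underweighted
```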
Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, X.; Liu, H.
2017-09-01
The application of LiDAR data in forestry initially focused on mapping forest communities, primarily for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data now available, detecting individual overstory trees, estimating crown parameters and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking palm trees as an example. The section-based method detects objects through profiles taken along different directions, basically along the X-axis or Y-axis, and improves the use of spatial information to generate accurate results. Firstly, tree points are separated from man-made-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees and estimate species-related tree parameters such as crown height, crown radius, and cross point. Finally, with these parameters, certain tree species can be identified. Compared with species information measured on the ground, the proportion of correctly identified trees across all plots reached up to 90.65%. The identification results demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables classification of trees into different classes.
Cao, Jianping; Weschler, Charles J; Luo, Jiajun; Zhang, Yinping
2016-01-19
The concentration of a gas-phase semivolatile organic compound (SVOC) in equilibrium with its mass-fraction in the source material, y0, and the coefficient for partitioning of an SVOC between clothing and air, K, are key parameters for estimating emission and subsequent dermal exposure to SVOCs. Most of the available methods for their determination depend on achieving steady-state in ventilated chambers. This can be time-consuming and of variable accuracy. Additionally, no existing method simultaneously determines y0 and K in a single experiment. In this paper, we present a sealed-chamber method, using early-stage concentration measurements, to simultaneously determine y0 and K. The measurement error for the method is analyzed, and the optimization of experimental parameters is explored. Using this method, y0 for phthalates (DiBP, DnBP, and DEHP) emitted by two types of PVC flooring, coupled with K values for these phthalates partitioning between a cotton T-shirt and air, were measured at 25 and 32 °C (room and skin temperatures, respectively). The measured y0 values agree well with results obtained by alternate methods. The changes of y0 and K with temperature were used to approximate the changes in enthalpy, ΔH, associated with the relevant phase changes. We conclude with suggestions for further related research.
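A back-of-the-envelope sketch of the enthalpy approximation from measurements at two temperatures, using the van't Hoff form that the temperature dependence suggests; the function name and numerical values are hypothetical, not the measured phthalate data.

```python
import math

R = 8.314  # J/(mol K)

def enthalpy_from_two_temps(y_25C, y_32C, T1=298.15, T2=305.15):
    """Approximate phase-change enthalpy from the temperature dependence of y0 (or K).

    Uses the van't Hoff form ln y = -dH/(R*T) + const, so
    dH = -R * (ln y2 - ln y1) / (1/T2 - 1/T1).
    """
    return -R * (math.log(y_32C) - math.log(y_25C)) / (1.0 / T2 - 1.0 / T1)

# Hypothetical equilibrium gas-phase concentrations (ug/m^3) at 25 and 32 C.
print(enthalpy_from_two_temps(y_25C=0.5, y_32C=1.1) / 1000.0, "kJ/mol")
```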
Self-Referenced Continuous-Variable Quantum Key Distribution Protocol
NASA Astrophysics Data System (ADS)
Soh, Daniel B. S.; Brif, Constantin; Coles, Patrick J.; Lütkenhaus, Norbert; Camacho, Ryan M.; Urayama, Junji; Sarovar, Mohan
2015-10-01
We introduce a new continuous-variable quantum key distribution (CV-QKD) protocol, self-referenced CV-QKD, that eliminates the need for transmission of a high-power local oscillator between the communicating parties. In this protocol, each signal pulse is accompanied by a reference pulse (or a pair of twin reference pulses), used to align Alice's and Bob's measurement bases. The method of phase estimation and compensation based on the reference pulse measurement can be viewed as a quantum analog of intradyne detection used in classical coherent communication, which extracts the phase information from the modulated signal. We present a proof-of-principle, fiber-based experimental demonstration of the protocol and quantify the expected secret key rates by expressing them in terms of experimental parameters. Our analysis of the secret key rate fully takes into account the inherent uncertainty associated with the quantum nature of the reference pulse(s) and quantifies the limit at which the theoretical key rate approaches that of the respective conventional protocol that requires local oscillator transmission. The self-referenced protocol greatly simplifies the hardware required for CV-QKD, especially for potential integrated photonics implementations of transmitters and receivers, with minimum sacrifice of performance. As such, it provides a pathway towards scalable integrated CV-QKD transceivers, a vital step towards large-scale QKD networks.
Network structure of production
Atalay, Enghin; Hortaçsu, Ali; Roberts, James; Syverson, Chad
2011-01-01
Complex social networks have received increasing attention from researchers. Recent work has focused on mechanisms that produce scale-free networks. We theoretically and empirically characterize the buyer–supplier network of the US economy and find that purely scale-free models have trouble matching key attributes of the network. We construct an alternative model that incorporates realistic features of firms’ buyer–supplier relationships and estimate the model’s parameters using microdata on firms’ self-reported customers. This alternative framework is better able to match the attributes of the actual economic network and aids in further understanding several important economic phenomena. PMID:21402924
Benefits of detailed models of muscle activation and mechanics
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler, models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.
A study of the longevity and operational reliability of Goddard Spacecraft, 1960-1980
NASA Technical Reports Server (NTRS)
Shockey, E. F.
1981-01-01
Compiled data regarding the design lives and lifetimes actually achieved by 104 orbiting satellites launched by the Goddard Spaceflight Center between the years 1960 and 1980 is analyzed. Historical trends over the entire 21-year period are reviewed, and the more recent data is subjected to an examination of several key parameters. An empirical reliability function is derived, and compared with various mathematical models. Data from related studies is also discussed. The results provide insight into the reliability history of Goddard spacecraft and guidance for estimating the reliability of future programs.
Body weight of hypersonic aircraft, part 1
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1988-01-01
The load bearing body weight of wing-body and all-body hypersonic aircraft is estimated for a wide variety of structural materials and geometries. Variations of weight with key design and configuration parameters are presented and discussed. Both hot and cool structure approaches are considered in isotropic, organic composite, and metal matrix composite materials; structural shells are sandwich or skin-stringer. Conformal and pillow-tank designs are investigated for the all-body shape. The results identify the most promising hypersonic aircraft body structure design approaches and their weight trends. Geometric definition of vehicle shapes and structural analysis methods are presented in appendices.
Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bomble, Yannick J; St. John, Peter C; Crowley, Michael F
2017-10-18
A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method which used dynamic optimization to find the maximum theoretical productivity of batch cultures to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
Estimating the pollutant concentration in agricultural drainage is a key point in quantitative research on agricultural non-point source pollution load. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, and the pollutant concentration in the agricultural drain is treated as the response process corresponding to that impulse input. The migration and transformation of pollutants in soil is expressed by an inverse Gaussian probability density function, and the behaviour of pollutant migration and transformation at different crop growth periods is captured by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in the agricultural drain were simulated with this model. The simulated results agreed approximately with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
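A minimal sketch of the impulse-response idea, superposing hypothetical fertilization/irrigation impulses passed through an inverse Gaussian kernel via scipy; the event timings and distribution parameters are placeholders, not calibrated values.

```python
import numpy as np
from scipy.stats import invgauss

# Daily time axis over one growth period (days); values are illustrative.
t = np.arange(0, 120, 1.0)

# Impulse inputs: pollutant mass mobilized by each fertilization + irrigation event.
events = {5: 40.0, 35: 60.0, 70: 30.0}        # day -> input strength (arbitrary units)

# Inverse Gaussian transfer function; mu and scale stand in for the stage-dependent
# transport parameters that would be adjusted for each crop growth period.
def response(lag, mu=0.8, scale=20.0):
    return invgauss.pdf(lag, mu, scale=scale)

conc = np.zeros_like(t)
for day, strength in events.items():
    lag = t - day
    conc += np.where(lag > 0, strength * response(np.clip(lag, 1e-9, None)), 0.0)

print(conc[:10])   # simulated drain concentration rising after each impulse
```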
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models are used in many fields such as education, medical services, entertainment, art, digital archive, etc., because of the progress of computational time and demand for creating photorealistic virtual model is increasing for higher reality. In computer vision field, a number of techniques have been developed for creating the virtual model by observing the real object in computer vision field. In this paper, we propose the method for creating photorealistic virtual model by using laser range sensor and polarization based image capture system. We capture the range and color images of the object which is rotated on the rotary table. By using the reconstructed object shape and sequence of color images of the object, parameter of a reflection model are estimated in a robust manner. As a result, then, we can make photorealistic 3D model in consideration of surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then, reflectance parameters of each reflection component are estimated separately. In separation of reflection components, we use polarization filter. This approach enables estimation of reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected lights. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
Contribution to Estimating Bearing Capacity of Pile in Clayey Soils
NASA Astrophysics Data System (ADS)
Drusa, Marián; Gago, Filip; Vlček, Jozef
2016-12-01
The estimation of realistic geotechnical parameters is a key factor for the safe and economic design of geotechnical structures. One such structure is the pile foundation, which requires proper design and evaluation because it reaches deeper foundation soil and because remediation of under-bearing or broken piles is a difficult operation. For this reason, geotechnical field tests such as the cone penetration test (CPT), standard penetration test (SPT) or dynamic penetration test (DP) are carried out to obtain continuous information about the soil strata. Compared with rotary core drilling with sampling, these methods are more progressive. From the engineering geologist's point of view it is more important to know the geological characterization of a locality, whereas geotechnical engineers are more interested in the actual geotechnical parameters of the foundation soils. The role of the engineering geologist cannot be underestimated, because important geological processes at origin or during geological history can explain the behaviour of a geological environment. To streamline the survey, investigation by penetration tests is used, as it can provide enough information for designers. This paper deals with current trends in pile foundation design, since there are no new standards and the usable standards are very old. Estimation of the bearing capacity of a single pile is demonstrated through the determination of the cone factor Nk from CPT testing, and the results are compared with other common methods.
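A minimal sketch of the Nk-based calculation, pairing the net-cone-resistance estimate of undrained strength with a textbook total-stress (alpha-method) capacity check; Nk, alpha, and the bearing factor 9 are generic values, not the paper's calibrated ones.

```python
import math

def undrained_strength(qc_kpa, sigma_v0_kpa, Nk=15.0):
    """su = (qc - sigma_v0) / Nk  (net cone resistance divided by the cone factor)."""
    return (qc_kpa - sigma_v0_kpa) / Nk

def pile_capacity_clay(su_kpa, diameter_m, length_m, alpha=0.7):
    area_shaft = math.pi * diameter_m * length_m
    area_base = math.pi * diameter_m ** 2 / 4.0
    q_shaft = alpha * su_kpa * area_shaft      # shaft (skin friction) resistance, kN
    q_base = 9.0 * su_kpa * area_base          # end bearing, Nc ~ 9 for deep piles in clay
    return q_shaft + q_base

su = undrained_strength(qc_kpa=1200.0, sigma_v0_kpa=150.0)   # ~70 kPa
print(f"su = {su:.0f} kPa, capacity = {pile_capacity_clay(su, 0.6, 12.0):.0f} kN")
```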
Controlling the non-linear intracavity dynamics of large He-Ne laser gyroscopes
NASA Astrophysics Data System (ADS)
Cuccato, D.; Beghi, A.; Belfi, J.; Beverini, N.; Ortolan, A.; Di Virgilio, A.
2014-02-01
A model based on Lamb's theory of gas lasers is applied to a He-Ne ring laser (RL) gyroscope to estimate and remove the laser dynamics contribution from the rotation measurements. The intensities of the counter-propagating laser beams exiting one cavity mirror are continuously observed together with a monitor of the laser population inversion. These observables, once properly calibrated with a dedicated procedure, allow us to estimate cold cavity and active medium parameters driving the main part of the non-linearities of the system. The quantitative estimation of intrinsic non-reciprocal effects due to cavity and active medium non-linear coupling plays a key role in testing fundamental symmetries of space-time with RLs. The parameter identification and noise subtraction procedure has been verified by means of a Monte Carlo study of the system, and experimentally tested on the G-PISA RL oriented with the normal to the ring plane almost parallel to the Earth's rotation axis. In this configuration the Earth's rotation rate provides the maximum Sagnac effect while the contribution of the orientation error is reduced to a minimum. After the subtraction of laser dynamics by a Kalman filter, the relative systematic errors of G-PISA reduce from 50 to 5 parts in 10³ and can be attributed to the residual uncertainties on geometrical scale factor and orientation of the ring.
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
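The four-parameter logistic item response function itself is compact; the item parameter values below are arbitrary examples, not estimates from the MMPI-A scales.

```python
import numpy as np

def four_pm(theta, a, b, c, d):
    """Four-parameter logistic item response function.

    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (slipping); the 3PL is the special case d = 1.
    """
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(four_pm(theta, a=1.5, b=0.0, c=0.15, d=0.90))
```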
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
Control system estimation and design for aerospace vehicles
NASA Technical Reports Server (NTRS)
Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.
1972-01-01
The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels and processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (KI-k4) as well as macro parameters, such as volume of distribution (Vd) and binding potential (BP I & BP II) and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicated that it is desirable for pre-segmentation with traditional FCM clustering to generate voxel-wise parametric images with GLLS from dynamic SPECT data.
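A plain fuzzy C-means sketch on synthetic time-activity curves illustrates the pre-segmentation step; the data and cluster count are stand-ins, and the modified neighbourhood-aware variant is not implemented here.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Stand-in for voxel time-activity curves: rows = voxels, columns = time frames.
rng = np.random.default_rng(1)
tacs = np.vstack([rng.normal(mu, 0.05, size=(200, 12)) for mu in (0.2, 0.5, 0.9)])
centres, U = fuzzy_c_means(tacs, n_clusters=3)
labels = U.argmax(axis=1)          # cluster-averaged curves would then be fed to GLLS
print(centres.mean(axis=1))
```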
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable to compare transfer efficiency performances for different pixel geometries.
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones
NASA Astrophysics Data System (ADS)
Mao, X.; Gerhard, J. I.; Barry, D. A.
2005-12-01
The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This four-year, 5.7-million-dollar research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. As well, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination relevant to a range of system conditions (e.g., bioaugmented, high TCE concentrations, etc.). The significance of the obtained variability of parameters is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
NASA Astrophysics Data System (ADS)
Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.
2016-03-01
The saturated hydraulic conductivity (K_s) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of K_s using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of the particle and micro-aggregate size distributions. PTFs were developed by an artificial neural network (ANN) ensemble to estimate K_s from available soil data and the fractal parameters. Significant correlations were found between K_s and the fractal parameters of particles and micro-aggregates. Estimation of K_s was improved significantly by using the fractal parameters of soil micro-aggregates as predictors, whereas using the geometric mean and geometric standard deviation of particle diameter did not improve K_s estimates significantly. Using the fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of K_s. In general, fractal parameters can be successfully used as input parameters to improve the estimation of K_s in PTFs for smectitic soils. As a result, the ANN ensemble successfully correlated the fractal parameters of particles and micro-aggregates with K_s.
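As a simplified illustration of how a fragmentation fractal dimension can be extracted from a cumulative size distribution (the Bird and Perrier model itself is more elaborate), the sketch below fits the scaling F(R) = (R/R_max)^(3-D) by log-log regression; the sieve sizes and mass fractions are invented for demonstration.

```python
import numpy as np

# Illustrative particle-size data: upper sieve sizes (mm) and cumulative mass fractions
# (values are assumptions for demonstration, not data from the study).
R = np.array([0.002, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0, 2.0])        # mm
F = np.array([0.08, 0.22, 0.35, 0.48, 0.65, 0.80, 0.92, 1.00])     # cumulative mass fraction

# Mass-based fractal scaling: F(R) = (R / R_max)**(3 - D)
# => log F = (3 - D) * log(R / R_max), so a log-log slope b gives D = 3 - b.
x = np.log(R / R.max())
y = np.log(F)
slope, intercept = np.polyfit(x, y, 1)
D = 3.0 - slope
print(f"estimated fragmentation fractal dimension D = {D:.3f}")
```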
A New Online Calibration Method Based on Lord's Bias-Correction.
He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei
2017-09-01
Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus the deviation of the estimated θ̂_s from their true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both the mean reward under the current estimate of the optimal TR and that under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
van Boven, Michiel; van de Kassteele, Jan; Korndewal, Marjolein J; van Dorp, Christiaan H; Kretzschmar, Mirjam; van der Klis, Fiona; de Melker, Hester E; Vossen, Ann C; van Baarle, Debbie
2017-09-01
Human cytomegalovirus (CMV) is a herpes virus with poorly understood transmission dynamics. Person-to-person transmission is thought to occur primarily through transfer of saliva or urine, but no quantitative estimates are available for the contribution of different infection routes. Using data from a large population-based serological study (n = 5,179), we provide quantitative estimates of key epidemiological parameters, including the transmissibility of primary infection, reactivation, and re-infection. Mixture models are fitted to age- and sex-specific antibody response data from the Netherlands, showing that the data can be described by a model with three distributions of antibody measurements, i.e. uninfected, infected, and infected with increased antibody concentration. Estimates of seroprevalence increase gradually with age, such that at 80 years 73% (95%CrI: 64%-78%) of females and 62% (95%CrI: 55%-68%) of males are infected, while 57% (95%CrI: 47%-67%) of females and 37% (95%CrI: 28%-46%) of males have increased antibody concentration. Merging the statistical analyses with transmission models, we find that models with infectious reactivation (i.e. reactivation that can lead to the virus being transmitted to a novel host) fit the data significantly better than models without infectious reactivation. Estimated reactivation rates increase from low values in children to 2%-4% per year in women older than 50 years. The results advance a hypothesis in which transmission from adults after infectious reactivation is a key driver of transmission. We discuss the implications for control strategies aimed at reducing CMV infection in vulnerable groups.
Adaptive Modal Identification for Flutter Suppression Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.
2016-01-01
In this paper, we develop an adaptive modal identification method for identifying the frequency and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation achieves parameter convergence in the presence of persistent excitation, whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation, where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method, and the least-squares modal identification is used to perform parameter estimation.
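A minimal sketch of least-squares modal identification, assuming a single lightly damped mode sampled at a fixed rate: recursive least squares fits a second-order autoregressive model to the response, and the discrete pole is mapped back to a natural frequency and damping ratio. The signal, sample time and forgetting factor are illustrative assumptions, not the flutter model used in the paper.

```python
import numpy as np

dt = 0.01                       # sample time [s] (illustrative)
f_true, zeta_true = 3.0, 0.02   # true modal frequency [Hz] and damping ratio (illustrative)
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta_true**2)

t = np.arange(0, 20, dt)
rng = np.random.default_rng(0)
y = np.exp(-zeta_true * wn * t) * np.cos(wd * t) + 0.01 * rng.standard_normal(t.size)

# Recursive least squares for the AR(2) model y[k] = a1*y[k-1] + a2*y[k-2]
theta = np.zeros(2)
P = 1e3 * np.eye(2)
lam = 0.995                     # forgetting factor
for k in range(2, y.size):
    phi = np.array([y[k-1], y[k-2]])
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y[k] - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

a1, a2 = theta
# Discrete pole of z^2 - a1*z - a2 = 0 -> continuous pole s = ln(z)/dt
z = np.roots([1.0, -a1, -a2]).astype(complex)[0]
s = np.log(z) / dt
wn_hat = abs(s)
zeta_hat = -s.real / wn_hat
print(f"f ≈ {wn_hat / (2*np.pi):.3f} Hz, zeta ≈ {zeta_hat:.4f}")
```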
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. Ranking parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
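A small sketch of the idea behind empirical Bayes ranking, assuming a normal-normal model: each raw estimate is shrunk toward the overall mean in proportion to its sampling variance before ranking, so imprecise estimates are less likely to appear among the extremes. The simulated trends and standard errors are placeholders, not BBS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: true population trends and noisy estimates with unequal precision
n = 200
true_trend = rng.normal(0.0, 1.0, n)                 # true parameters
se = rng.uniform(0.2, 3.0, n)                        # standard errors vary widely by species
estimate = true_trend + rng.normal(0.0, se)

# Empirical Bayes (normal-normal) shrinkage:
# posterior mean = B*mu + (1-B)*estimate, with B = se^2 / (se^2 + tau^2)
mu = np.average(estimate, weights=1.0 / se**2)
tau2 = max(np.var(estimate) - np.mean(se**2), 1e-6)  # method-of-moments between-parameter variance
B = se**2 / (se**2 + tau2)
shrunk = B * mu + (1 - B) * estimate

# Naive ranking by raw estimates vs. ranking by shrunken estimates
print("top-5 raw extremes:     ", np.argsort(estimate)[-5:])
print("top-5 shrunken extremes:", np.argsort(shrunk)[-5:])
print("top-5 true extremes:    ", np.argsort(true_trend)[-5:])
```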
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from statistical thermodynamics principles or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. It is challenging to estimate parameters that enter a model nonlinearly, although there are many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in the rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are each linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
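A hedged sketch of the cross-multiplication idea for a simple Michaelis-Menten-like rate r = a1*x/(1 + b1*x): after cross-multiplying, the model is linear in (a1, b1), and a second pass re-weights by the estimated denominator. The specific weight matrix designed in the article is not reproduced here; this is only an illustration of the two-step structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Michaelis-Menten-like rational rate: r = a1*x / (1 + b1*x)  (illustrative)
a1_true, b1_true = 2.0, 0.5
x = np.linspace(0.1, 10.0, 60)
r = a1_true * x / (1.0 + b1_true * x) + 0.02 * rng.standard_normal(x.size)

def weighted_lls(x, r, w):
    # Cross-multiplied model: r*(1 + b1*x) = a1*x  =>  r = a1*x - b1*(x*r)
    # The regressors are linear in (a1, b1), so weighted linear least squares applies.
    A = np.column_stack([x, -x * r])
    W = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * W[:, None], r * W, rcond=None)
    return coef  # [a1, b1]

# Step 1: ordinary (unit-weight) linear least squares on the cross-multiplied model
a1_hat, b1_hat = weighted_lls(x, r, np.ones_like(x))

# Step 2: re-weight by the estimated denominator to undo the distortion introduced
# by cross-multiplication, then solve the linear problem again.
w = 1.0 / (1.0 + b1_hat * x) ** 2
a1_hat, b1_hat = weighted_lls(x, r, w)

print(f"a1 ≈ {a1_hat:.3f} (true {a1_true}), b1 ≈ {b1_hat:.3f} (true {b1_true})")
```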
NASA Astrophysics Data System (ADS)
Gross, Lutz; Tyson, Stephen
2015-04-01
Fracture density and orientation are key parameters controlling the productivity of coal seam gas reservoirs. Seismic anisotropy can help to identify and quantify fracture characteristics. In particular, wide-offset land seismic recordings with dense azimuthal coverage offer the opportunity to recover anisotropy parameters. In many coal seam gas reservoirs (e.g., the Walloon Subgroup in the Surat Basin, Queensland, Australia (Esterle et al. 2013)) the thickness of coal beds and interbeds (e.g., mudstone) is well below the seismic wavelength (0.3-1 m versus 5-15 m). In these situations, the observed seismic anisotropy parameters represent effective elastic properties of the composite medium formed of fractured, anisotropic coal and isotropic interbeds. As a consequence, observed seismic anisotropy cannot be linked directly to fracture characteristics but requires a more careful interpretation. In this paper we discuss techniques to estimate effective seismic anisotropy parameters from well log data, with the objective of improving the interpretation for the case of thin layered coal beds. In the first step we use sonic log data to reconstruct the elasticity parameters as a function of depth (at the resolution of the sonic log). It is assumed that within a sample fractures are sparse, of the same size and orientation, penny-shaped and equally spaced. Following the classical fracture model, this can be modeled as an elastic horizontally transversely isotropic (HTI) medium (Schoenberg & Sayers 1995). Under the additional assumption of dry fractures, normal and tangential fracture weaknesses are estimated from the slow and fast shear wave velocities of the sonic log. In the second step we apply Backus-style upscaling to construct effective anisotropy parameters on an appropriate length scale. In order to honor the HTI anisotropy present in each layer, we have developed a new extension of the classical Backus averaging for layered isotropic media (Backus 1962). Our new method assumes layered HTI media with constant anisotropy orientation, as recovered in the first step, and leads to an effective horizontal orthorhombic elastic model. From this model, Thomsen-style anisotropy parameters are calculated to derive azimuth-dependent normal moveout (NMO) velocities (see Grechka & Tsvankin 1998). In our presentation we show results of our approach from sonic well logs in the Surat Basin to investigate the potential of reconstructing S-wave velocity anisotropy and fracture density from azimuth-dependent NMO velocity profiles.
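For reference, the sketch below shows classical Backus (1962) averaging of thin isotropic layers into an effective VTI medium, together with the Thomsen parameters of the result; it does not reproduce the paper's HTI extension, and the two-material coal/mudstone stack is an illustrative assumption.

```python
import numpy as np

def backus_isotropic(h, vp, vs, rho):
    """Classical Backus (1962) average of thin isotropic layers into an effective
    VTI medium. h: layer thicknesses, vp/vs: velocities, rho: densities."""
    w = h / h.sum()                        # thickness weights
    mu = rho * vs**2
    lam = rho * vp**2 - 2 * mu

    def avg(q):                            # thickness-weighted average
        return np.sum(w * q)

    C33 = 1.0 / avg(1.0 / (lam + 2 * mu))
    C44 = 1.0 / avg(1.0 / mu)
    C13 = C33 * avg(lam / (lam + 2 * mu))
    C11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + C33 * avg(lam / (lam + 2 * mu))**2
    C66 = avg(mu)
    return C11, C13, C33, C44, C66

def thomsen(C11, C13, C33, C44, C66):
    eps = (C11 - C33) / (2 * C33)
    gam = (C66 - C44) / (2 * C44)
    delta = ((C13 + C44)**2 - (C33 - C44)**2) / (2 * C33 * (C33 - C44))
    return eps, gam, delta

# Illustrative two-material stack: thin coal beds interlayered with mudstone
h   = np.array([0.5, 5.0, 0.5, 5.0, 0.5])            # m
vp  = np.array([2300., 3400., 2300., 3400., 2300.])  # m/s
vs  = np.array([1100., 1900., 1100., 1900., 1100.])  # m/s
rho = np.array([1400., 2500., 1400., 2500., 1400.])  # kg/m^3

C = backus_isotropic(h, vp, vs, rho)
print("Thomsen epsilon, gamma, delta:", [round(v, 4) for v in thomsen(*C)])
```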
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
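The following toy sketch conveys the flavor of iteratively narrowing parameter bounds by sampling, scoring against observations, and shrinking the bounds to the envelope of the fittest candidates; it is a loose conceptual analog, not the published LOBARE algorithm, and the exponential toy model and retention fraction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": y = p0 * exp(-p1 * t); observations come from assumed true parameters
t = np.linspace(0, 5, 30)
p_true = np.array([2.0, 0.7])
y_obs = p_true[0] * np.exp(-p_true[1] * t) + 0.02 * rng.standard_normal(t.size)

def rmse(p):
    y = p[0] * np.exp(-p[1] * t)
    return np.sqrt(np.mean((y - y_obs) ** 2))

lower = np.array([0.1, 0.01])     # initial parameter bounds (illustrative)
upper = np.array([10.0, 5.0])

for it in range(8):
    # Sample candidate parameter sets within the current bounds
    samples = rng.uniform(lower, upper, size=(500, 2))
    scores = np.array([rmse(p) for p in samples])
    # Keep the fittest 10% and shrink the bounds to their envelope ("parent" update)
    best = samples[np.argsort(scores)[:50]]
    lower, upper = best.min(axis=0), best.max(axis=0)

p_hat = best[np.argmin([rmse(p) for p in best])]
print("estimated parameters:", np.round(p_hat, 3))
```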
Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Veronica M; Bartell, Scott M
2016-04-01
We recently utilized a suite of environmental fate and transport models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations, and also assessed the association of those concentrations with preeclampsia for participants in the C8 Health Project (a cross-sectional study of over 69,000 people who were environmentally exposed to PFOA near a major U.S. fluoropolymer production facility located in West Virginia). However, the exposure estimates from this integrated model relied on default values for key independent exposure parameters including water ingestion rates, the serum PFOA half-life, and the volume of distribution for PFOA. The aim of the present study is to assess the impact of inter-individual variability and epistemic uncertainty in these parameters on the exposure estimates and subsequently, the epidemiological association between PFOA exposure and preeclampsia. We used Monte Carlo simulation to propagate inter-individual variability/epistemic uncertainty in the exposure assessment and reanalyzed the epidemiological association. Inter-individual variability in these parameters mildly impacted the serum PFOA concentration predictions (the lowest mean rank correlation between the estimated serum concentrations in our study and the original predicted serum concentrations was 0.95) and there was a negligible impact on the epidemiological association with preeclampsia (no change in the mean adjusted odds ratio (AOR) and the contribution of exposure uncertainty to the total uncertainty including sampling variability was 7%). However, when epistemic uncertainty was added along with the inter-individual variability, serum PFOA concentration predictions and their association with preeclampsia were moderately impacted (the mean AOR of preeclampsia occurrence was reduced from 1.12 to 1.09, and the contribution of exposure uncertainty to the total uncertainty was increased up to 33%). In conclusion, our study shows that the change of the rank exposure among the study participants due to variability and epistemic uncertainty in the independent exposure parameters was large enough to cause a 25% bias towards the null. This suggests that the true AOR of the association between PFOA and preeclampsia in this population might be higher than the originally reported AOR and has more uncertainty than indicated by the originally reported confidence interval. Copyright © 2016 Elsevier Inc. All rights reserved.
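A minimal Monte Carlo sketch of propagating variability in water ingestion rate, serum half-life and volume of distribution through a generic one-compartment steady-state model; the distributions and the water concentration are placeholders chosen to be roughly in the range reported in the PFOA literature, and the sketch does not reproduce the study's integrated fate-and-transport and pharmacokinetic model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

cw = 100.0                                                          # drinking-water PFOA, ng/L (illustrative)
# Inter-individual variability / uncertainty in the key exposure parameters (assumed distributions)
ir     = rng.lognormal(mean=np.log(0.016), sigma=0.5, size=n)       # water intake, L per kg body weight per day
t_half = rng.lognormal(mean=np.log(2.3 * 365), sigma=0.3, size=n)   # serum half-life, days
vd     = rng.lognormal(mean=np.log(0.17), sigma=0.2, size=n)        # volume of distribution, L/kg

# One-compartment steady state: C_serum = (IR * Cw) / (Vd * k), with k = ln2 / t_half
k = np.log(2) / t_half
c_serum = ir * cw / (vd * k)       # ng/L of serum

print(f"median serum PFOA: {np.median(c_serum):.0f} ng/L")
print(f"2.5th-97.5th percentile: {np.percentile(c_serum, [2.5, 97.5]).round(0)}")
```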
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
Johnson, Anthea; Singhal, Naresh
2015-01-01
The contributions of mechanisms by which chelators influence metal translocation to plant shoot tissues are analyzed using a combination of numerical modelling and physical experiments. The model distinguishes between apoplastic and symplastic pathways of water and solute movement. It also includes the barrier effects of the endodermis and plasma membrane. Simulations are used to assess transport pathways for free and chelated metals, identifying mechanisms involved in chelate-enhanced phytoextraction. Hypothesized transport mechanisms and parameters specific to amendment treatments are estimated, with simulated results compared to experimental data. Parameter values for each amendment treatment are estimated based on literature and experimental values, and used for model calibration and simulation of amendment influences on solute transport pathways and mechanisms. Modeling indicates that chelation alters the pathways for Cu transport. For free ions, Cu transport to leaf tissue can be described using purely apoplastic or transcellular pathways. For strong chelators (ethylenediaminetetraacetic acid (EDTA) and diethylenetriaminepentaacetic acid (DTPA)), transport by the purely apoplastic pathway is insufficient to represent measured Cu transport to leaf tissue. Consistent with experimental observations, increased membrane permeability is required for simulating translocation in EDTA and DTPA treatments. Increasing the membrane permeability is key to enhancing phytoextraction efficiency. PMID:26512647
Estimation of energy density of Li-S batteries with liquid and solid electrolytes
NASA Astrophysics Data System (ADS)
Li, Chunmei; Zhang, Heng; Otaegui, Laida; Singh, Gurpreet; Armand, Michel; Rodriguez-Martinez, Lide M.
2016-09-01
With the exponential growth of technology in mobile devices and the rapid expansion of electric vehicles into the market, it appears that the energy density of state-of-the-art Li-ion batteries (LIBs) cannot satisfy practical requirements. Sulfur has been one of the best cathode material choices due to its high charge storage capacity (1675 mAh g⁻¹), natural abundance and easy accessibility. In this paper, calculations are performed for different cell design parameters, such as the active material loading, the amount/thickness of electrolyte and the sulfur utilization, to predict the energy density of Li-S cells based on liquid, polymeric and ceramic electrolytes. The calculations demonstrate that, with current technology, the Li-S battery is most likely to be competitive with LIBs in gravimetric energy density, but not in volumetric energy density. Furthermore, cells with polymer and thin ceramic electrolytes show promising potential in terms of high gravimetric energy density, especially cells with the polymer electrolyte. This estimation study of Li-S energy density can be used as guidance for controlling the key design parameters in order to obtain the desired energy density at the cell level.
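A back-of-the-envelope sketch of how cell-level gravimetric energy density follows from such design parameters (sulfur loading, sulfur utilization, electrolyte-to-sulfur ratio, lithium excess, inactive component masses); all numerical values are illustrative assumptions, not the design points evaluated in the paper.

```python
# Minimal cell-level gravimetric energy density estimate for a Li-S stack.
# All design values below are illustrative assumptions.

S_CAPACITY = 1675.0      # mAh per g of sulfur (theoretical)
V_AVG = 2.1              # average discharge voltage, V

sulfur_loading = 4.0     # mg S per cm^2 of cathode (assumed)
sulfur_utilization = 0.7 # fraction of theoretical capacity actually delivered (assumed)
sulfur_fraction = 0.7    # sulfur mass fraction in the composite cathode (assumed)
e_s_ratio = 3.0          # electrolyte-to-sulfur mass ratio, g electrolyte per g S (assumed)
li_excess = 2.0          # anode capacity / theoretical cathode capacity (assumed)
li_capacity = 3860.0     # mAh per g of Li

# Per-cm^2 mass balance (all in mg)
m_sulfur = sulfur_loading
m_cathode = m_sulfur / sulfur_fraction                    # S + carbon + binder
m_electrolyte = e_s_ratio * m_sulfur
m_li = li_excess * m_sulfur * S_CAPACITY / li_capacity
m_inactive = 5.0 + 3.0 + 1.0                              # Al foil, Cu foil, separator (mg/cm^2, assumed)
m_total = m_cathode + m_electrolyte + m_li + m_inactive

energy_mwh = m_sulfur * 1e-3 * S_CAPACITY * sulfur_utilization * V_AVG   # mWh/cm^2
gravimetric = energy_mwh / (m_total * 1e-3)                              # mWh/g == Wh/kg
print(f"estimated cell-level energy density: {gravimetric:.0f} Wh/kg")
```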
Model based estimation of sediment erosion in groyne fields along the River Elbe
NASA Astrophysics Data System (ADS)
Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard
2008-11-01
River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts water quality and is thus important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict due to complex, time-dependent physical, chemical and biological processes, as well as the lack of information. Therefore, in engineering practice the values of the erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to a large uncertainty in the erosion parameters. The goal of the presented study is to compare the deterministic approach, which assumes a constant critical erosion shear stress, with an innovative approach that treats the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability are quantified. The results presented here are based on field measurements and numerical modelling of River Elbe groyne fields.
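As a simple illustration of the probabilistic treatment, the sketch below draws the critical erosion shear stress from a lognormal distribution, applies an excess-shear erosion law, and compares the resulting erosion probability and expected flux with a deterministic calculation using a single effective value; the distribution parameters, bed shear stress and erosion constant are assumptions, not Elbe field values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

tau_bed = 2.0                    # acting bed shear stress during a flood event, N/m^2 (assumed)
M = 0.005                        # erosion rate constant, kg/(m^2*s) (assumed)

# Critical erosion shear stress treated as a lognormal random variable
# (median 1.5 N/m^2, multiplicative spread ~1.6; values are assumptions)
tau_c = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=n)

# Excess-shear erosion law: E = M * (tau_bed / tau_c - 1) when tau_bed > tau_c, else 0
E = np.where(tau_bed > tau_c, M * (tau_bed / tau_c - 1.0), 0.0)

p_erosion = np.mean(tau_bed > tau_c)                 # erosion probability
print(f"erosion probability: {p_erosion:.2f}")
print(f"expected erosion flux: {E.mean():.4f} kg/(m^2*s)")

# Deterministic comparison with a single "effective" critical shear stress
tau_c_eff = np.median(tau_c)
E_det = M * max(tau_bed / tau_c_eff - 1.0, 0.0)
print(f"deterministic flux with tau_c = median: {E_det:.4f} kg/(m^2*s)")
```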